MULTICHANNEL BLIND SEPARATION AND DECONVOLUTION OF SOURCES WITH ARBITRARY DISTRIBUTIONS

Scott C. Douglas (1), Andrzej Cichocki (2), and Shun-ichi Amari (2)

(1) Department of Electrical Engineering, University of Utah, Salt Lake City, Utah 84112 USA
(2) Brain Information Processing Group, Frontier Research Program, RIKEN, Wako-shi, Saitama 351-01 JAPAN

Abstract: Blind deconvolution and separation of linearly mixed and convolved sources is an important and challenging task for numerous applications. While several recently-developed algorithms have shown promise in these tasks, these techniques may fail to separate signal mixtures containing both sub- and super-Gaussian-distributed sources. In this paper, we present a simple and efficient extension of a family of algorithms that enables the separation and deconvolution of mixtures of arbitrary non-Gaussian sources. Our technique monitors the statistics of each of the outputs of the separator using a rigorously-derived sufficient criterion for stability and then selects the appropriate nonlinearity for each channel such that the local convergence conditions of the algorithm are satisfied. Extensive simulations show the validity and efficiency of our method for blindly extracting mixtures of arbitrarily-distributed source signals.

I. INTRODUCTION

Blind signal separation is useful for numerous problems in biomedical signal analysis, acoustics, communications, and signal and image processing. In blind source separation of instantaneous signal mixtures, a set of measured signals {x_i(k)}, 1 <= i <= n, is assumed to be generated from a set of unknown statistically-independent sources {s_i(k)}, 1 <= i <= m, m <= n, as

    x(k) = H s(k)                                                    (1)

where x(k) = [x_1(k) ... x_n(k)]^T, s(k) = [s_1(k) ... s_m(k)]^T, and H is an (n x m)-dimensional matrix of unknown mixing coefficients {h_ij}. The measured sensor signals are processed by a linear single-layer feed-forward network as

    y(k) = W(k) x(k)                                                 (2)
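The model in (1)-(2) can be sketched as follows. The dimensions, the Laplacian sources, and the random draw of H are assumptions of the example, and the pseudoinverse separator is a non-blind stand-in used only to illustrate the roles of the quantities; a blind algorithm must find W without access to H.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n, T = 3, 4, 1000           # m sources, n >= m sensors, T samples
s = rng.laplace(size=(m, T))   # hypothetical independent sources s(k)
H = rng.uniform(size=(n, m))   # unknown (n x m) mixing matrix, eq. (1)

x = H @ s                      # measured sensor signals x(k), eq. (1)

# A separator is any (m x n) matrix W applied to the measurements:
W = np.linalg.pinv(H)          # ideal non-blind choice, for illustration only
y = W @ x                      # separator outputs y(k), eq. (2)
```

With this choice W H = I, a special case of the desired limit P D in (3) with P = D = I.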
where y(k) = [y_1(k) ... y_m(k)]^T and W(k) is an (m x n)-dimensional synaptic weight matrix. Ideally, W(k) is adjusted iteratively such that

    lim_{k -> infinity} W(k) H = P D                                 (3)

where P is an (m x m)-dimensional permutation matrix with a single unity entry in any of its rows or columns and D is a diagonal nonsingular matrix.

Recently, several simple, efficient, and robust iterative algorithms for adjusting W(k) have been proposed for the blind signal separation task [1]-[12]. Such methods use higher-order statistical information about the source signals to iteratively adjust the coefficient matrix W(k). In this paper, we consider one class of on-line adaptive algorithms given by [1]

    W(k+1) = W(k) + mu(k) [I - f(y(k)) y^T(k)] W(k)                  (4)

where f(y(k)) = [f_1(y_1(k)) ... f_m(y_m(k))]^T. The optimal forms of the nonlinear functions {f_i(y)} can be shown to depend on the statistics of the source signals [2, 7, 8]. For example, if the signal mixture consists of sub-Gaussian sources with negative kurtoses, the choices f_i(y) = f_N(y) = |y|^p sgn(y) for p in {2, 3, ...} provide adequate separation capabilities. For mixtures of super-Gaussian sources with positive kurtoses, the choice f_i(y) = f_P(y) = tanh(alpha y) with alpha > 0 can be used [3, 11, 12].

A related task to blind signal separation is that of multichannel signal deconvolution, in which x(k) is assumed to be produced from s(k) as

    x(k) = sum_{p=-infinity}^{infinity} H_p s(k - p)                 (5)

where H_p is an (n x m)-dimensional matrix of mixing coefficients at lag p. The goal is to calculate a vector y(k) of possibly scaled and/or delayed estimates of the source signals in s(k) from x(k) using a causal linear filter given by

    y(k) = sum_{p=0}^{L} W_p(k) x(k - p)                             (6)

where the (m x n)-dimensional matrices {W_p(k)}, 0 <= p <= L, contain the coefficients of the multichannel filter. One algorithm that can be used in this task is described in [5, 6] and is given by

    W_p(k+1) = W_p(k) + mu(k) [W_p(k) - f(y(k - L)) u^T(k - p)]      (7)

where the n-dimensional vector u(k) is computed as

    u(k) = sum_{q=0}^{L} W_{L-q}^T(k) y(k - q).                      (8)
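A single iteration of the separation update in (4) might be implemented as follows. The 2 x 2 dimensions, the step size, and the cubic nonlinearity are assumptions chosen for the example.

```python
import numpy as np

def f_cubic(y):
    """Sub-Gaussian nonlinearity f_N(y) = |y|^3 sgn(y) = y^3, elementwise."""
    return y ** 3

def natural_gradient_step(W, x, mu, f):
    """One iteration of eq. (4): W <- W + mu * (I - f(y) y^T) W, with y = W x."""
    m = W.shape[0]
    y = W @ x                                        # separator output, eq. (2)
    W_new = W + mu * (np.eye(m) - np.outer(f(y), y)) @ W
    return W_new, y

# one step on a toy 2 x 2 problem with made-up data
rng = np.random.default_rng(1)
W0 = np.eye(2)
x0 = rng.uniform(-1.0, 1.0, size=2)
W1, y0 = natural_gradient_step(W0, x0, mu=0.01, f=f_cubic)
```

Because the correction term multiplies W from the right, the update is equivariant: its behavior does not depend on the conditioning of the mixing matrix H.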
This algorithm reduces to that in (4) for L = 0. Although simulations have indicated that these algorithms are successful at separating and deconvolving linearly-mixed signals, they require knowledge about the statistics of the source signals to function properly. In particular, it must be known a priori whether the source signals are sub-Gaussian or super-Gaussian so that the nonlinearities f_i(y) can be properly chosen. Even worse, if the measured signals x_i(k) contain mixtures of both sub-Gaussian (e.g. digital data) and super-Gaussian (e.g. speech) sources, then these algorithms may fail to separate these signals reliably.

In this paper, we propose modifications to the algorithms in (4) and (7) that enable sources from arbitrary non-Gaussian distributions to be extracted from measurements of the mixed signals. Our methods use simple sufficient conditions for algorithm stability that are based on the necessary stability conditions originally derived by Amari et al. [3]. Our computationally-simple algorithms employ time-varying nonlinearities in the coefficient updates that are selected from a family of fixed nonlinearities at each iteration to best satisfy our sufficient stability conditions. Simulations show the excellent and robust convergence behavior of the proposed methods in separating mixtures of sub- and super-Gaussian sources.

II. CRITERIA FOR ALGORITHM STABILITY

The modified algorithms for separation of sources with arbitrary distributions are based on the stability analysis of (4) that is described in [3]. For brevity and simplicity of discussion, we only consider (4) and outline the necessary extensions of the analysis that are needed to develop our modified algorithms, although we later apply the results to the multichannel deconvolution method in (7).
The algorithm in (4) can be derived as an iterative stochastic minimization procedure for the cost function

    rho(W(k)) = -(1/2) log(det(W(k) W^T(k))) - sum_{i=1}^{m} E{log p_i(y_i(k))}    (9)

where E{.} denotes statistical expectation and f_i(y) = -d log p_i(y)/dy. If p_i(y) is the actual probability distribution of the source extracted at the ith output, then rho(W(k)) represents the negative of the maximum likelihood cost function [2, 7]. The procedure in (4) represents the natural gradient method for minimizing (9) iteratively using signal measurements. For details on the general form of the natural gradient search method, the reader is referred to [1]. In [3], the stability of (4) is analyzed by studying the expected value of the Hessian of the cost function, denoted as E{d^2 rho(W(k)) / dw_ij(k) dw_pq(k)},
in the vicinity of a source separation solution. Here, w_ij(k) is the (i,j)th element of W(k), which in [3] is assumed to be a square matrix (m = n). In what follows, we remove this restriction. In analogy with the results of [3], it is simpler to consider the form of d^2 rho(W(k)) in terms of the modified coefficient differential dX(k), defined such that

    dX(k) = dW(k) W^T(k) (W(k) W^T(k))^{-1}                          (10)
    dX(k) y(k) = dW(k) x(k).                                         (11)

Note that the natural gradient method automatically performs its search in the coefficient space spanned by dX(k), so that the coefficient updates remain in the original column space of W(k) for all k. We can then represent the differential d^2 rho(W(k)) in terms of the elements of dX(k) as

    d^2 rho(W(k)) = y^T(k) dX^T(k) F'_d(y(k)) dX(k) y(k) + f^T(y(k)) dX(k) dX(k) y(k)    (12)

where F'_d(y(k)) is a diagonal matrix whose (i,i)th entry is f'_i(y_i(k)). As is shown in [3], the expectations of the terms on the RHS of (12) are

    E{y^T(k) dX^T(k) F'_d(y(k)) dX(k) y(k)}
        = sum_{i=1}^{m} sum_{j=1, j != i}^{m} kappa_i(k) sigma_j^2(k) [dX_ij(k)]^2
          + sum_{i=1}^{m} beta_i(k) [dX_ii(k)]^2                     (13)

    E{f^T(y(k)) dX(k) dX(k) y(k)}
        = sum_{i=1}^{m} sum_{j=1}^{m} zeta_i(k) dX_ij(k) dX_ji(k)    (14)

where beta_i(k), sigma_i^2(k), kappa_j(k), and zeta_i(k) are defined as

    beta_i(k) = E{y_i^2(k) f'_i(y_i(k))},    sigma_i^2(k) = E{y_i^2(k)}      (15)
    kappa_j(k) = E{f'_j(y_j(k))},    and    zeta_i(k) = E{y_i(k) f_i(y_i(k))}    (16)

respectively, and where it has been assumed that y_i(k) and y_j(k) are independent for i != j. Thus, the expected value of the Hessian is

    E{d^2 rho(W(k))} = sum_{i=1}^{m} sum_{j=1, j != i}^{m} { kappa_i(k) sigma_j^2(k) [dX_ij(k)]^2 + zeta_i(k) dX_ij(k) dX_ji(k) }
                       + sum_{i=1}^{m} [beta_i(k) + zeta_i(k)] [dX_ii(k)]^2.    (17)
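The four per-output statistics appearing in the Hessian, namely E{y^2 f'(y)}, E{y^2}, E{f'(y)}, and E{y f(y)}, can be estimated from output samples. The sketch below uses sample means and a hypothetical nonlinearity passed in by the caller; with the identity nonlinearity f(y) = y, the slope statistic equals one and the remaining three all reduce to the second moment, which gives a handy sanity check.

```python
import numpy as np

def stability_statistics(y, f, fprime):
    """Sample estimates of the per-output statistics used in (13)-(16):
    beta = E{y^2 f'(y)}, sigma2 = E{y^2}, kappa = E{f'(y)}, zeta = E{y f(y)}."""
    beta = np.mean(y ** 2 * fprime(y))
    sigma2 = np.mean(y ** 2)
    kappa = np.mean(fprime(y))
    zeta = np.mean(y * f(y))
    return beta, sigma2, kappa, zeta

# sanity check with f(y) = y, f'(y) = 1: kappa = 1 and beta = zeta = sigma2
rng = np.random.default_rng(2)
y = rng.normal(size=100_000)
beta, sigma2, kappa, zeta = stability_statistics(
    y, lambda u: u, lambda u: np.ones_like(u))
```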
For stability, E{d^2 rho(W(k))} must be positive for all possible values of dX_ij(k). By examining the RHS of (17), one can obtain the following necessary and sufficient stability conditions on f_i(y) for all 1 <= i < j <= m:

    kappa_i(k) > 0                                                   (18)
    beta_i(k) + zeta_i(k) > 0                                        (19)
    kappa_i(k) sigma_i^2(k) kappa_j(k) sigma_j^2(k) > (1/4) [zeta_i(k) + zeta_j(k)]^2.    (20)

The conditions in (18) and (19) are satisfied in practice for any odd nondecreasing function f_i(y) = -f_i(-y). However, the condition in (20) is difficult to check in practice, as it involves m(m-1)/2 different combinations for 1 <= i < j <= m. For this reason, we consider a sufficient stability criterion of the form

    kappa_i(k) sigma_i^2(k) kappa_j(k) sigma_j^2(k) > gamma^2(k) zeta_i(k) zeta_j(k)    (21)

where gamma(k) > 0 satisfies for all 1 <= i < j <= m the inequality

    gamma^2(k) zeta_i(k) zeta_j(k) >= (1/4) [zeta_i(k) + zeta_j(k)]^2.    (22)

After some algebra, we find that the smallest value of gamma(k) satisfying (22) is

    gamma(k) = (1/2) sqrt( 2 + zeta_max(k)/zeta_min(k) + zeta_min(k)/zeta_max(k) )    (23)

where zeta_max(k) and zeta_min(k) are the maximum and minimum values of zeta_i(k) for 1 <= i <= m, respectively. For this value of gamma(k), we can guarantee the stability of the algorithm in (4) if, for all 1 <= i <= m,

    kappa_i(k) sigma_i^2(k) - gamma(k) zeta_i(k) > 0.                (24)

Note that all values of zeta_i(k) converge to one as the coefficients converge to a separating solution due to the normalizing condition E{f(y(k)) y^T(k)} = I, such that gamma(k) approaches one near convergence.

III. THE ALGORITHM MODIFICATION

We now describe the modified algorithms that are based on the stability criteria of the last section. It is known that the algorithm in (4) can determine a separating solution for W(k) if the set of nonlinearities {f_i(y)}, 1 <= i <= m, is properly chosen. For this reason, our modified algorithms employ a time-varying vector nonlinearity f^(k)(y(k)) = [f_1^(k)(y_1(k)) ... f_m^(k)(y_m(k))]^T,
where each f_i^(k)(y) is chosen to be one of two nonlinearities f_N(y) and f_P(y) that are optimized for sub- and super-Gaussian source separation tasks, respectively. To select f_i^(k)(y) at time k, we form the time-averaged estimates

    sigma_i^2(k) = (1 - lambda) sigma_i^2(k-1) + lambda |y_i(k)|^2           (25)
    kappa_ir(k) = (1 - lambda) kappa_ir(k-1) + lambda f'_r(y_i(k))           (26)
    zeta_ir(k) = (1 - lambda) zeta_ir(k-1) + lambda y_i(k) f_r(y_i(k))       (27)

for r in {N, P} and 1 <= i <= m, where lambda is a small positive parameter. Then, f_i^(k)(y) at time k is selected as

    f_i^(k)(y) = f_N(y)  if  sigma_i^2(k) kappa_iN(k) - gamma(k) zeta_iN(k) > sigma_i^2(k) kappa_iP(k) - gamma(k) zeta_iP(k)
                 f_P(y)  if  sigma_i^2(k) kappa_iN(k) - gamma(k) zeta_iN(k) < sigma_i^2(k) kappa_iP(k) - gamma(k) zeta_iP(k)    (28)

where gamma(k) is computed at infrequent intervals from past estimates of zeta_i(k). With these choices, the resulting vector f^(k)(y(k)) is used in place of f(y(k)) to adjust the coefficient matrix in (4). In simulations, it was found that the value of gamma(k) did not vary significantly over time, and in fact, setting gamma(k) equal to one in (28) for all k appears to provide convergence of the algorithms to a separating solution. It can be seen that, as the coefficients of the system converge, the quantity sigma_i^2(k) kappa_ir(k) - gamma(k) zeta_ir(k) becomes a reliable estimate of the left-hand side of the inequality in (24) for f_i(y) = f_r(y). Extensive simulations have shown that, so long as a set of nonlinearity assignments exists such that the stability conditions in (18)-(20) are satisfied for one ordering of the extracted sources at the outputs, then (28) properly selects each f_i^(k)(y) over time to enable the system to reliably extract all source signals regardless of their distributions.

IV. SIMULATIONS

We now show the capabilities of our modified source separation algorithms via simulation. In our first example, we employ the signal separation method in (4) to separate ten instantaneously-mixed signals. In this case, the three signal sets {s_1(k), s_2(k), s_3(k), s_4(k)}, {s_5(k), s_6(k), s_7(k)}, and {s_8(k), s_9(k), s_10(k)} are i.i.d. with Laplacian, uniform-[-1, 1], and binary {+1, -1} distributions, respectively, where the Laplacian p.d.f. is given by p_s(s) = 0.5 e^{-|s|}.
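The selection machinery above can be sketched as follows. This is a batch variant, with sample means standing in for the running averages (25)-(27) so that the example is self-contained; the cubic/tanh candidate pair and the tanh gain of 10 are assumed values. The nonlinearity with the larger stability margin sigma^2 kappa_r - gamma zeta_r wins, per (28).

```python
import numpy as np

def gamma_bound(zeta):
    """Smallest gamma satisfying (22), per eq. (23)."""
    zmin, zmax = float(np.min(zeta)), float(np.max(zeta))
    return 0.5 * np.sqrt(2.0 + zmax / zmin + zmin / zmax)

# candidate nonlinearities and their derivatives (assumed forms)
f = {"N": lambda y: y ** 3,                       # |y|^3 sgn(y) = y^3
     "P": lambda y: np.tanh(10.0 * y)}
fprime = {"N": lambda y: 3.0 * y ** 2,
          "P": lambda y: 10.0 / np.cosh(10.0 * y) ** 2}

def select_nonlinearity(y_i, gamma=1.0):
    """Batch analogue of rule (28): sample means replace (25)-(27),
    and the candidate with the larger stability margin is returned."""
    sigma2 = np.mean(y_i ** 2)
    score = {r: sigma2 * np.mean(fprime[r](y_i)) - gamma * np.mean(y_i * f[r](y_i))
             for r in ("N", "P")}
    return "N" if score["N"] > score["P"] else "P"

rng = np.random.default_rng(3)
pick_sub = select_nonlinearity(rng.uniform(-1.0, 1.0, 50_000))   # sub-Gaussian
pick_super = select_nonlinearity(rng.laplace(size=50_000))       # super-Gaussian
```

When all zeta_i are equal, (23) gives gamma = 1 exactly, consistent with the near-convergence behavior noted above.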
Since the first and latter two distribution sets are super- and sub-Gaussian, respectively, the algorithms in [4, 8] cannot linearly separate these sources from an arbitrary mixture of them. We generate x(k) as in (1), where the entries of H are drawn from a uniform-[0, 1] distribution. The values of h_ij are shown to four decimal places in Table 1. As is clear from the table, H exhibits no particular structure, and thus the extraction of the ten sources from the measured signals x(k) is a challenging task.
Table 1: The entries of H for the ten-source separation example.

We apply the algorithm in (4), where f(y(k)) = f^(k)(y(k)) is adapted according to our method described in (25)-(28), with W(0) = I, small constant values of the step size mu(k) and the averaging parameter lambda, and

    f_N(y) = |y|^3 sgn(y)    and    f_P(y) = tanh(10 y)              (29)

corresponding to nonlinearities for separating sub- and super-Gaussian-distributed sources, respectively. Within each on-line nonlinearity selection procedure, we set gamma(k) to one for all k. From the outputs of the separation system, we compute the error vector e(k) = [e_1(k) ... e_10(k)]^T as

    e(k) = s(k) - D^{-1} P^T y(k)                                    (30)

where approximate versions of the permutation matrix P and scaling matrix D as introduced in (3) are obtained from W(k) and H at the final iteration. Figure 1 shows these ten error signals. Since each error signal decreases to a small value after a sufficient number of iterations, all of the sources are reliably extracted using our modified algorithm. Figure 2 plots the performance factor eta(k) defined as

    eta(k) = ||P^T W(k) H||_E^2 / ||[P^T W(k) H]_d||_E^2 - 1         (31)

where ||.||_E denotes the matrix Euclidean norm and [Q]_d is a diagonal matrix whose (i,i)th entry is q_ii. As can be seen, the value of eta(k) decreases to a small value in steady-state, indicating that the system has adequately separated the ten sources. Moreover, a careful examination of the nonlinearities chosen for each extracted output indicates that the appropriate stabilizing nonlinearity f_N(y) or f_P(y) was eventually selected for each output signal.

We now combine the algorithm modification in (25)-(28) with the blind deconvolution and source separation technique in (7) and apply the resulting system to a three-source separation problem. In this case, the three sources are chosen to be i.i.d. Laplacian-, uniform-, and binary-distributed, respectively, and the convolutive mixture model is given by

    x(k) = A_1 x(k-1) + A_2 x(k-2) + B_0 s(k) + B_1 s(k-1)           (32)
Figure 1: The ten error signals e_1(k), ..., e_10(k) for the first source separation example.

where A_1, A_2, B_0, and B_1 are fixed (3 x 3) matrices of mixing coefficients. Figures 3(a)-(c) and 3(d)-(f) show the vector sequences s(k) and x(k), respectively, in this case. The deconvolution system with time-varying nonlinearities was applied to these signals, in which L = 6, W_p(0) = I delta(p - 3), small constant values of mu(k) and lambda were used, and gamma(k) was set to one for all k within the nonlinearity selection procedures. Shown in Figures 3(g)-(i) are the error signals e_1(k) = s_1(k) - y_3(k)/d_33, e_2(k) = s_2(k) - y_2(k)/d_22, and e_3(k) = s_3(k) - y_1(k)/d_11, where d_11, d_22, and d_33 are appropriate scaling factors. As can be seen, the errors decrease to small values for each extracted output, and the three extracted outputs attain useful empirically-measured signal-to-noise ratios. Figure 4 shows the actual value of gamma(k) - 1 on a logarithmic scale for the second source separation example. Starting from an initial value slightly below unity, gamma(k) gradually approaches unity over time. These results, combined with the successful separation capabilities of the modified systems, indicate that setting gamma(k) to one within the nonlinearity selection procedures does not limit the overall capabilities of the systems.
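A minimal sketch of one coefficient update of the deconvolution rule in (7)-(8) follows. Square channels (m = n), the filter length, and all signal values are assumptions of the example.

```python
import numpy as np

def compute_u(Ws, y_hist):
    """Eq. (8): u(k) = sum_{q=0}^{L} W_{L-q}^T(k) y(k-q),
    with y_hist = [y(k), y(k-1), ..., y(k-L)]."""
    L = len(Ws) - 1
    return sum(Ws[L - q].T @ y_hist[q] for q in range(L + 1))

def deconv_step(Ws, y_delayed, u_hist, mu, f):
    """Eq. (7): W_p <- W_p + mu * (W_p - f(y(k-L)) u^T(k-p)), p = 0..L,
    with y_delayed = y(k-L) and u_hist = [u(k), u(k-1), ..., u(k-L)]."""
    fy = f(y_delayed)
    return [Wp + mu * (Wp - np.outer(fy, u_hist[p])) for p, Wp in enumerate(Ws)]

# toy 2-channel example with L = 1 and made-up signal values
L, m = 1, 2
rng = np.random.default_rng(5)
Ws = [np.eye(m), np.zeros((m, m))]               # W_0(k), W_1(k)
y_hist = [rng.normal(size=m) for _ in range(L + 1)]
u_now = compute_u(Ws, y_hist)                    # u(k) via eq. (8)
u_hist = [u_now, rng.normal(size=m)]             # u(k-1) made up for the demo
Ws_new = deconv_step(Ws, y_hist[L], u_hist, mu=0.01, f=lambda y: y ** 3)
```

With Ws = [I, 0], eq. (8) collapses to u(k) = y(k-1), which makes the single step easy to verify by hand.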
Figure 2: Performance factor eta(k) for the first source separation example.

Figure 3: The three source signals ((a)-(c)), the three mixed signals ((d)-(f)), and the three error signals ((g)-(i)) for the convolutive-mixture source separation example.

Figure 4: Evolution of gamma(k) - 1 for the convolutive-mixture source separation example.
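The performance factor eta(k) of (31), plotted in Figure 2, can be computed directly from W(k), H, and the permutation P. The sketch below assumes squared Frobenius norms for the matrix Euclidean norm; eta = 0 exactly when P^T W H is diagonal, i.e. at a separating solution.

```python
import numpy as np

def performance_factor(W, H, P):
    """Performance factor of eq. (31): total energy of P^T W H divided by
    the energy of its diagonal part, minus one; zero means perfect
    separation up to the permutation P and a diagonal scaling."""
    Q = P.T @ W @ H
    Qd = np.diag(np.diag(Q))
    return np.linalg.norm(Q, "fro") ** 2 / np.linalg.norm(Qd, "fro") ** 2 - 1.0

rng = np.random.default_rng(4)
H = rng.uniform(size=(4, 4))
eta_perfect = performance_factor(np.linalg.inv(H), H, np.eye(4))  # approx 0
eta_unmixed = performance_factor(np.eye(4), H, np.eye(4))         # positive
```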
V. CONCLUSIONS

In conclusion, we have described techniques for selecting the nonlinearities within blind source separation and deconvolution algorithms to enable the separation of sources with arbitrary distributions. The proposed methods can be easily implemented in an on-line setting. Simulations applying the techniques to instantaneous mixture separation and to multichannel deconvolution and source separation indicate the ability of the methods to accurately separate signal mixtures containing both sub- and super-Gaussian sources.

REFERENCES

[1] S. Amari, "Natural gradient works efficiently in learning," submitted to Neural Computation.
[2] S. Amari and J.-F. Cardoso, "Blind source separation: semi-parametric statistical approach," to appear in IEEE Trans. Signal Processing.
[3] S. Amari, T.-P. Chen, and A. Cichocki, "Stability analysis of adaptive blind source separation," to appear in Neural Networks.
[4] S. Amari, A. Cichocki, and H. H. Yang, "A new learning algorithm for blind signal separation," Adv. Neural Inform. Proc. Sys. 8 (Cambridge, MA: MIT Press, 1996), pp. 757-763.
[5] S. Amari, S. C. Douglas, A. Cichocki, and H. H. Yang, "Multichannel blind deconvolution using the natural gradient," Proc. IEEE Workshop Signal Proc. Adv. Wireless Comm. (Piscataway, NJ: IEEE, 1997), pp. 101-104.
[6] S. Amari, S. C. Douglas, A. Cichocki, and H. H. Yang, "Novel on-line adaptive learning algorithms for blind deconvolution using the natural gradient approach," presented at 11th IFAC Symp. Syst. Ident., Kitakyushu, Japan, July 1997.
[7] A. J. Bell and T. J. Sejnowski, "An information maximization approach to blind separation and blind deconvolution," Neural Computation, vol. 7, no. 6, pp. 1129-1159, Nov. 1995.
[8] J.-F. Cardoso and B. Laheld, "Equivariant adaptive source separation," IEEE Trans. Signal Processing, vol. 44, pp. 3017-3030, Dec. 1996.
[9] A. Cichocki, R. Unbehauen, and E. Rummert, "Robust learning algorithm for blind separation of signals," Electron. Lett., vol. 30, no. 17, pp. 1386-1387, Aug. 1994.
[10] A. Cichocki, S. Amari, M. Adachi, and W. Kasprzak, "Self-adaptive neural networks for blind separation of sources," Proc. IEEE Int. Symp. Circ. Syst. (New York: IEEE, 1996).
[11] P. Comon, C. Jutten, and J. Herault, "Blind separation of sources, II: Problem statement," Signal Processing, vol. 24, no. 1, pp. 11-20, July 1991.
[12] C. Jutten and J. Herault, "Blind separation of sources, I: An adaptive algorithm based on neuromimetic architecture," Signal Processing, vol. 24, no. 1, pp. 1-10, July 1991.
More informationChapter 6 Effective Blind Source Separation Based on the Adam Algorithm
Chapter 6 Effective Blind Source Separation Based on the Adam Algorithm Michele Scarpiniti, Simone Scardapane, Danilo Comminiello, Raffaele Parisi and Aurelio Uncini Abstract In this paper, we derive a
More informationREAL-TIME TIME-FREQUENCY BASED BLIND SOURCE SEPARATION. Scott Rickard, Radu Balan, Justinian Rosca. Siemens Corporate Research Princeton, NJ 08540
REAL-TIME TIME-FREQUENCY BASED BLIND SOURCE SEPARATION Scott Rickard, Radu Balan, Justinian Rosca Siemens Corporate Research Princeton, NJ 84 fscott.rickard,radu.balan,justinian.roscag@scr.siemens.com
More informationTRINICON: A Versatile Framework for Multichannel Blind Signal Processing
TRINICON: A Versatile Framework for Multichannel Blind Signal Processing Herbert Buchner, Robert Aichner, Walter Kellermann {buchner,aichner,wk}@lnt.de Telecommunications Laboratory University of Erlangen-Nuremberg
More informationBlind Spectral-GMM Estimation for Underdetermined Instantaneous Audio Source Separation
Blind Spectral-GMM Estimation for Underdetermined Instantaneous Audio Source Separation Simon Arberet 1, Alexey Ozerov 2, Rémi Gribonval 1, and Frédéric Bimbot 1 1 METISS Group, IRISA-INRIA Campus de Beaulieu,
More informationBLIND DECONVOLUTION ALGORITHMS FOR MIMO-FIR SYSTEMS DRIVEN BY FOURTH-ORDER COLORED SIGNALS
BLIND DECONVOLUTION ALGORITHMS FOR MIMO-FIR SYSTEMS DRIVEN BY FOURTH-ORDER COLORED SIGNALS M. Kawamoto 1,2, Y. Inouye 1, A. Mansour 2, and R.-W. Liu 3 1. Department of Electronic and Control Systems Engineering,
More information10-701/15-781, Machine Learning: Homework 4
10-701/15-781, Machine Learning: Homewor 4 Aarti Singh Carnegie Mellon University ˆ The assignment is due at 10:30 am beginning of class on Mon, Nov 15, 2010. ˆ Separate you answers into five parts, one
More informationEstimation of linear non-gaussian acyclic models for latent factors
Estimation of linear non-gaussian acyclic models for latent factors Shohei Shimizu a Patrik O. Hoyer b Aapo Hyvärinen b,c a The Institute of Scientific and Industrial Research, Osaka University Mihogaoka
More informationFASTGEO - A HISTOGRAM BASED APPROACH TO LINEAR GEOMETRIC ICA. Andreas Jung, Fabian J. Theis, Carlos G. Puntonet, Elmar W. Lang
FASTGEO - A HISTOGRAM BASED APPROACH TO LINEAR GEOMETRIC ICA Andreas Jung, Fabian J Theis, Carlos G Puntonet, Elmar W Lang Institute for Theoretical Physics, University of Regensburg Institute of Biophysics,
More informationA Fast Algorithm for. Nonstationary Delay Estimation. H. C. So. Department of Electronic Engineering, City University of Hong Kong
A Fast Algorithm for Nonstationary Delay Estimation H. C. So Department of Electronic Engineering, City University of Hong Kong Tat Chee Avenue, Kowloon, Hong Kong Email : hcso@ee.cityu.edu.hk June 19,
More informationReading Group on Deep Learning Session 1
Reading Group on Deep Learning Session 1 Stephane Lathuiliere & Pablo Mesejo 2 June 2016 1/31 Contents Introduction to Artificial Neural Networks to understand, and to be able to efficiently use, the popular
More informationLinear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space
Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................
More informationAverage Reward Parameters
Simulation-Based Optimization of Markov Reward Processes: Implementation Issues Peter Marbach 2 John N. Tsitsiklis 3 Abstract We consider discrete time, nite state space Markov reward processes which depend
More informationBoxlets: a Fast Convolution Algorithm for. Signal Processing and Neural Networks. Patrice Y. Simard, Leon Bottou, Patrick Haner and Yann LeCun
Boxlets: a Fast Convolution Algorithm for Signal Processing and Neural Networks Patrice Y. Simard, Leon Bottou, Patrick Haner and Yann LeCun AT&T Labs-Research 100 Schultz Drive, Red Bank, NJ 07701-7033
More informationLearning with Ensembles: How. over-tting can be useful. Anders Krogh Copenhagen, Denmark. Abstract
Published in: Advances in Neural Information Processing Systems 8, D S Touretzky, M C Mozer, and M E Hasselmo (eds.), MIT Press, Cambridge, MA, pages 190-196, 1996. Learning with Ensembles: How over-tting
More informationAn Iterative Blind Source Separation Method for Convolutive Mixtures of Images
An Iterative Blind Source Separation Method for Convolutive Mixtures of Images Marc Castella and Jean-Christophe Pesquet Université de Marne-la-Vallée / UMR-CNRS 8049 5 bd Descartes, Champs-sur-Marne 77454
More informationDetermining the Optimal Decision Delay Parameter for a Linear Equalizer
International Journal of Automation and Computing 1 (2005) 20-24 Determining the Optimal Decision Delay Parameter for a Linear Equalizer Eng Siong Chng School of Computer Engineering, Nanyang Technological
More informationIntroduction Wavelet shrinage methods have been very successful in nonparametric regression. But so far most of the wavelet regression methods have be
Wavelet Estimation For Samples With Random Uniform Design T. Tony Cai Department of Statistics, Purdue University Lawrence D. Brown Department of Statistics, University of Pennsylvania Abstract We show
More informationEfficient Use Of Sparse Adaptive Filters
Efficient Use Of Sparse Adaptive Filters Andy W.H. Khong and Patrick A. Naylor Department of Electrical and Electronic Engineering, Imperial College ondon Email: {andy.khong, p.naylor}@imperial.ac.uk Abstract
More informationMultigrid Techniques for Nonlinear Eigenvalue Problems; Solutions of a. Sorin Costiner and Shlomo Ta'asan
Multigrid Techniques for Nonlinear Eigenvalue Problems; Solutions of a Nonlinear Schrodinger Eigenvalue Problem in 2D and 3D Sorin Costiner and Shlomo Ta'asan Department of Applied Mathematics and Computer
More informationPrediction-based adaptive control of a class of discrete-time nonlinear systems with nonlinear growth rate
www.scichina.com info.scichina.com www.springerlin.com Prediction-based adaptive control of a class of discrete-time nonlinear systems with nonlinear growth rate WEI Chen & CHEN ZongJi School of Automation
More informationIEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 54, NO. 2, FEBRUARY
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 54, NO 2, FEBRUARY 2006 423 Underdetermined Blind Source Separation Based on Sparse Representation Yuanqing Li, Shun-Ichi Amari, Fellow, IEEE, Andrzej Cichocki,
More informationA NATURAL GRADIENT CONVOLUTIVE BLIND SOURCE SEPARATION ALGORITHM FOR SPEECH MIXTURES
A NATURAL GRADIENT CONVOLUTIVE BLIND SOURCE SEPARATION ALGORITHM FOR SPEECH MIXTURES Xiaoan Sun and Scott C. Douglas Department of Electrical Engineering Southern Methodist University Dallas, Texas 75275
More informationFilter Banks with Variable System Delay. Georgia Institute of Technology. Atlanta, GA Abstract
A General Formulation for Modulated Perfect Reconstruction Filter Banks with Variable System Delay Gerald Schuller and Mark J T Smith Digital Signal Processing Laboratory School of Electrical Engineering
More informationGatsby Theoretical Neuroscience Lectures: Non-Gaussian statistics and natural images Parts I-II
Gatsby Theoretical Neuroscience Lectures: Non-Gaussian statistics and natural images Parts I-II Gatsby Unit University College London 27 Feb 2017 Outline Part I: Theory of ICA Definition and difference
More informationICA [6] ICA) [7, 8] ICA ICA ICA [9, 10] J-F. Cardoso. [13] Matlab ICA. Comon[3], Amari & Cardoso[4] ICA ICA
16 1 (Independent Component Analysis: ICA) 198 9 ICA ICA ICA 1 ICA 198 Jutten Herault Comon[3], Amari & Cardoso[4] ICA Comon (PCA) projection persuit projection persuit ICA ICA ICA 1 [1] [] ICA ICA EEG
More informationOptimization Methods. Lecture 18: Optimality Conditions and. Gradient Methods. for Unconstrained Optimization
5.93 Optimization Methods Lecture 8: Optimality Conditions and Gradient Methods for Unconstrained Optimization Outline. Necessary and sucient optimality conditions Slide. Gradient m e t h o d s 3. The
More informationHOPFIELD neural networks (HNNs) are a class of nonlinear
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: EXPRESS BRIEFS, VOL. 52, NO. 4, APRIL 2005 213 Stochastic Noise Process Enhancement of Hopfield Neural Networks Vladimir Pavlović, Member, IEEE, Dan Schonfeld,
More informationAn Improved Cumulant Based Method for Independent Component Analysis
An Improved Cumulant Based Method for Independent Component Analysis Tobias Blaschke and Laurenz Wiskott Institute for Theoretical Biology Humboldt University Berlin Invalidenstraße 43 D - 0 5 Berlin Germany
More informationTaylor series based nite dierence approximations of higher-degree derivatives
Journal of Computational and Applied Mathematics 54 (3) 5 4 www.elsevier.com/locate/cam Taylor series based nite dierence approximations of higher-degree derivatives Ishtiaq Rasool Khan a;b;, Ryoji Ohba
More informationCONVOLUTIVE NON-NEGATIVE MATRIX FACTORISATION WITH SPARSENESS CONSTRAINT
CONOLUTIE NON-NEGATIE MATRIX FACTORISATION WITH SPARSENESS CONSTRAINT Paul D. O Grady Barak A. Pearlmutter Hamilton Institute National University of Ireland, Maynooth Co. Kildare, Ireland. ABSTRACT Discovering
More informationBlind Machine Separation Te-Won Lee
Blind Machine Separation Te-Won Lee University of California, San Diego Institute for Neural Computation Blind Machine Separation Problem we want to solve: Single microphone blind source separation & deconvolution
More informationOn the INFOMAX algorithm for blind signal separation
University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 2000 On the INFOMAX algorithm for blind signal separation Jiangtao Xi
More informationPOLYNOMIAL SINGULAR VALUES FOR NUMBER OF WIDEBAND SOURCES ESTIMATION AND PRINCIPAL COMPONENT ANALYSIS
POLYNOMIAL SINGULAR VALUES FOR NUMBER OF WIDEBAND SOURCES ESTIMATION AND PRINCIPAL COMPONENT ANALYSIS Russell H. Lambert RF and Advanced Mixed Signal Unit Broadcom Pasadena, CA USA russ@broadcom.com Marcel
More informationBLIND SEPARATION OF INSTANTANEOUS MIXTURES OF NON STATIONARY SOURCES
BLIND SEPARATION OF INSTANTANEOUS MIXTURES OF NON STATIONARY SOURCES Dinh-Tuan Pham Laboratoire de Modélisation et Calcul URA 397, CNRS/UJF/INPG BP 53X, 38041 Grenoble cédex, France Dinh-Tuan.Pham@imag.fr
More informationComplete Blind Subspace Deconvolution
Complete Blind Subspace Deconvolution Zoltán Szabó Department of Information Systems, Eötvös Loránd University, Pázmány P. sétány 1/C, Budapest H-1117, Hungary szzoli@cs.elte.hu http://nipg.inf.elte.hu
More informationBLIND SOURCE SEPARATION IN POST NONLINEAR MIXTURES
BLIND SURCE SEPARATIN IN PST NNLINEAR MIXTURES Sophie Achard, Dinh Tuan Pham Univ of Grenoble Laboratory of Modeling Computation, IMAG, CNRS BP 53X, 384 Grenoble Cedex, France SophieAchard@imagfr, DinhTuanPham@imagfr
More informationPerformance Analysis and Code Optimization of Low Density Parity-Check Codes on Rayleigh Fading Channels
Performance Analysis and Code Optimization of Low Density Parity-Check Codes on Rayleigh Fading Channels Jilei Hou, Paul H. Siegel and Laurence B. Milstein Department of Electrical and Computer Engineering
More informationLinear stochastic approximation driven by slowly varying Markov chains
Available online at www.sciencedirect.com Systems & Control Letters 50 2003 95 102 www.elsevier.com/locate/sysconle Linear stochastic approximation driven by slowly varying Marov chains Viay R. Konda,
More informationADAPTIVE LATERAL INHIBITION FOR NON-NEGATIVE ICA. Mark Plumbley
Submitteed to the International Conference on Independent Component Analysis and Blind Signal Separation (ICA2) ADAPTIVE LATERAL INHIBITION FOR NON-NEGATIVE ICA Mark Plumbley Audio & Music Lab Department
More informationBlind Source Separation for Changing Source Number: A Neural Network Approach with a Variable Structure 1
Blind Source Separation for Changing Source Number: A Neural Network Approach with a Variable Structure 1 Shun-Tian Lou, Member, IEEE and Xian-Da Zhang, Senior Member, IEEE Key Lab for Radar Signal Processing,
More informationε ε
The 8th International Conference on Computer Vision, July, Vancouver, Canada, Vol., pp. 86{9. Motion Segmentation by Subspace Separation and Model Selection Kenichi Kanatani Department of Information Technology,
More informationRelative Irradiance. Wavelength (nm)
Characterization of Scanner Sensitivity Gaurav Sharma H. J. Trussell Electrical & Computer Engineering Dept. North Carolina State University, Raleigh, NC 7695-79 Abstract Color scanners are becoming quite
More informationA Subspace Approach to Estimation of. Measurements 1. Carlos E. Davila. Electrical Engineering Department, Southern Methodist University
EDICS category SP 1 A Subspace Approach to Estimation of Autoregressive Parameters From Noisy Measurements 1 Carlos E Davila Electrical Engineering Department, Southern Methodist University Dallas, Texas
More informationError Empirical error. Generalization error. Time (number of iteration)
Submitted to Neural Networks. Dynamics of Batch Learning in Multilayer Networks { Overrealizability and Overtraining { Kenji Fukumizu The Institute of Physical and Chemical Research (RIKEN) E-mail: fuku@brain.riken.go.jp
More information460 HOLGER DETTE AND WILLIAM J STUDDEN order to examine how a given design behaves in the model g` with respect to the D-optimality criterion one uses
Statistica Sinica 5(1995), 459-473 OPTIMAL DESIGNS FOR POLYNOMIAL REGRESSION WHEN THE DEGREE IS NOT KNOWN Holger Dette and William J Studden Technische Universitat Dresden and Purdue University Abstract:
More informationLecture 5: Logistic Regression. Neural Networks
Lecture 5: Logistic Regression. Neural Networks Logistic regression Comparison with generative models Feed-forward neural networks Backpropagation Tricks for training neural networks COMP-652, Lecture
More informationOutline Introduction: Problem Description Diculties Algebraic Structure: Algebraic Varieties Rank Decient Toeplitz Matrices Constructing Lower Rank St
Structured Lower Rank Approximation by Moody T. Chu (NCSU) joint with Robert E. Funderlic (NCSU) and Robert J. Plemmons (Wake Forest) March 5, 1998 Outline Introduction: Problem Description Diculties Algebraic
More informationSubmitted to Electronics Letters. Indexing terms: Signal Processing, Adaptive Filters. The Combined LMS/F Algorithm Shao-Jen Lim and John G. Harris Co
Submitted to Electronics Letters. Indexing terms: Signal Processing, Adaptive Filters. The Combined LMS/F Algorithm Shao-Jen Lim and John G. Harris Computational Neuro-Engineering Laboratory University
More information