Optimality of Inference in Hierarchical Coding for Distributed Object-Based Representations


Simon Brodeur, Jean Rouat
NECOTIS, Département génie électrique et génie informatique, Université de Sherbrooke, Sherbrooke, Canada
Email:

Abstract—Hierarchical approaches for representation learning have the ability to encode relevant features at multiple scales or levels of abstraction. However, most hierarchical approaches exploit only the last level in the hierarchy, or provide a multiscale representation that holds a significant amount of redundancy. We argue that removing redundancy across the multiple levels of abstraction is important for an efficient representation of compositionality in object-based representations. With the perspective of feature learning as a data compression operation, we propose a new greedy inference algorithm for hierarchical sparse coding. Convolutional matching pursuit with an L0-norm constraint was used to encode the input signal into compact and non-redundant codes distributed across levels of the hierarchy. Simple and complex synthetic datasets of temporal signals were created to evaluate the encoding efficiency and compare it with the theoretical lower bounds on the information rate for those signals. Empirical evidence shows that the algorithm is able to infer near-optimal codes for simple signals. However, it failed for complex signals with strong overlapping between objects. We explain the inefficiency of convolutional matching pursuit that occurred in such cases. This brings new insights about the NP-hard optimization problem related to using the L0-norm constraint in inferring optimally compact and distributed object-based representations.

I. INTRODUCTION

Unsupervised learning is very attractive today because of the large amount of unannotated data available from social media and sensors in intelligent devices (e.g. smartphones, autonomous cars, home automation systems).
Manifold learning (embedding) is currently one of the most popular forms of learning a latent, low-dimensional feature map that describes some temporal or spatial properties of the input signal (for a review, see [1]). There is, however, very little work on evaluating the quality of those representations based on compression criteria. The goal of the paper is to investigate how the notion of data compression, from both theoretical and empirical perspectives, can help us design, compare and better understand the learning of embeddings that efficiently represent the input structures. This work is in line with clustering by compression (e.g. [2]) and minimum description length (MDL) approaches (e.g. [3], [4]) to feature learning and coding, which contrasts with discriminative approaches. Object-based representations are very efficient to encode spatially or temporally independent structures of a signal, and are in fact very common in the real world (e.g. pulse-resonance sounds in audio, visual composition in images, syntactic items in language). Sparse coding (SC) approaches are well adapted to model such kinds of signals (e.g. [14]). We suggest that dictionary-based feature learning and representation with sparse coding could be comparable to general lossless data compression algorithms such as LZ77 [5]. Those algorithms achieve compression by finding redundant sequences in the data, storing a single copy in a static or sliding-window dictionary, and then using a reference to this copy to replace the multiple occurrences in the data. This indexing scheme of the dictionary allows the representation to be sparse and compact, meaning that a single index (or reference to a dictionary element) describes a larger segment of the input sequence. Most prior work on dictionary learning and optimal data compression considers string sequences with discrete symbols (e.g. text documents, DNA sequences), as they are primarily used for file compression (e.g. [5]).
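The back-reference mechanism described above can be illustrated with a toy encoder/decoder. This is a simplified sketch of the LZ77 idea, not the actual algorithm of [5]; the window size and the (offset, length, next_char) triple encoding are illustrative choices:

```python
def lz77_encode(data, window=16):
    """Toy LZ77-style encoder emitting (offset, length, next_char) triples.

    A redundant substring is replaced by a back-reference (offset, length)
    into the sliding window of previously seen data, followed by the next
    literal character.
    """
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            k = 0
            # The match may extend past position i (overlapping copies allowed)
            while i + k < len(data) - 1 and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_off, best_len = i - j, k
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decode(triples):
    """Expand (offset, length, next_char) triples back into a string."""
    s = []
    for off, length, ch in triples:
        for _ in range(length):
            s.append(s[-off])  # copy from the window, one character at a time
        s.append(ch)
    return ''.join(s)
```

Real implementations bound the match length and entropy-code the triples; the analogy made in the text is that a sparse dictionary index plays the same role as the back-reference.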
We rather consider sequences of continuous values, and propose a new greedy inference algorithm for hierarchical sparse coding. In this work, we focus on the inference process, i.e. finding the optimal sets of sparse coefficients given the signal and learned dictionaries. The goal is to evaluate how the hierarchical and compression frameworks can improve object-based representations.

II. SPARSE CODING

Assume a simple linear decomposition S = XD. The variable S defines a set of M signals of dimensionality N, i.e. S = [s_1, ..., s_M] ∈ R^{M×N}. The variable D defines a dictionary of K bases of dimensionality N, i.e. D = [d_1, ..., d_K] ∈ R^{K×N}. The result is a set of coefficients X = [x_1, ..., x_M] ∈ R^{M×K} that represents the input S. Sparse coding (SC) aims at providing a set of sparse coefficients by the penalization of the L1-norm of X, as shown in Equation 1 (for a review, see [6]). The constant α controls the tradeoff between the reconstruction error and the sparsity of the coefficients.

(X*, D*) = argmin_{X,D} (1/2) ||S − XD||²_F + α ||X||₁    (1)
with constraint ||d_k|| = 1 for k ∈ {1, 2, ..., K}

Sparse coding allows to compact the energy of the representation into a few high-valued coefficients while letting the remaining coefficients sit near zero. Compression is achieved by limiting the amount of information that needs to be stored to represent those coefficients (e.g. by neglecting most low-valued coefficients). Sparse coding can be applied to sequences of variable lengths (e.g. temporal signals) by changing the

Fig. 1. Greedy level-wise inference scheme for distributed representations in hierarchical sparse coding. When the input S is convolved with the dictionary D_1 at level 1 and greedy inference (shown as white dots) is applied to select sparse activations in the feature maps, the first-level sparse representation X_1^std is created. X_1^std now becomes the input signal for the next level. The second level can choose during inference to combine several lower-level activations as modeled by the dictionary D_2 and output information in the higher-level feature maps X_2^std, or to forward particular sparse activations as-is with the passthrough feature maps X_2^pass if sparse activations in X_1^std do not correlate strongly with D_2. The same process is repeated for the next levels in the hierarchy. From the output of the last level, both passthrough and higher-level feature maps describe the input signal in a distributed way using the non-redundant sets of coefficients {X_1, X_2, X_3}. The passthrough feature maps also allow learning over multiple scales. Note that the widths of the dictionaries increase over the hierarchy, so that the considered temporal context increases at each level and thus allows to encode longer time dependencies between sparse activations.

matrix multiplication operator into a discrete convolution¹, as shown in Equation 2. This adds time-shift (translation) invariance properties to the dictionary D, and D ∈ R^{K×W} can now have temporal length W < N to represent local features in the signal. Moreover, the direct penalization of the L0-norm of X can be used instead. This is more adapted to compression, since only the information about non-zero coefficients needs to be stored. The drawback of the L0-norm is that it makes the optimization problem NP-hard [7].
(X*, D*) = argmin_{X,D} (1/2) ||S − X ∗ D||²_F + α ||X||₀    (2)
with constraint ||d_k|| = 1 for k ∈ {1, 2, ..., K}

Learning representations and encoding the input at a too restricted temporal length W does not allow to learn higher-level objects that derive from the composition of multiple local features. This degrades the ability to compress information. Thus, we propose to efficiently incorporate a hierarchical aspect into the convolutional sparse coding framework.

III. PROPOSED HIERARCHICAL SPARSE CODING

Many of the successes in machine learning that led to the sub-field of deep learning are about hierarchical processing. This means stacking in a hierarchy several levels of representations or processing layers to build an abstract representation of the input useful for a particular task (e.g. classification, control). However, many approaches to representation learning (e.g. [1], [8]) only consider the output of the last level and thus lose crucial information about the compositionality of objects. This means that fine information is mixed together with the higher-level, coarse information about objects. It also scales inefficiently with the context, as the number of possible objects typically increases at an exponential rate when the temporal or physical context is made larger. Some approaches (e.g. [9]) aggregate encodings at multiple independent scales prior to classification, but disregard the redundancy across the scales. An efficient hierarchical processing means encoding objects at the level at which they naturally appear, rather than re-encoding them to propagate the information up in the hierarchy. This is similar to the multiresolution decomposition scheme of the discrete wavelet transform [10], except that hierarchical sparse coding is not constrained to the time-frequency domain analysis. This avoids the tradeoff between time and frequency resolutions.

¹ The definition of convolution from machine learning is used, where the filter is not time-reversed. This convolution is equal to a cross-correlation.
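To make the L0-penalized convolutional objective of Equation 2 concrete, here is a minimal sketch that reconstructs a 1-D signal from convolutional codes and evaluates the objective; the single-channel setting and shapes are simplifying assumptions:

```python
import numpy as np

def conv_reconstruct(X, D):
    """Reconstruct a 1-D signal from convolutional codes.

    X: (K, N) sparse coefficient maps, one per dictionary filter.
    D: (K, W) dictionary of unit-norm filters.
    Uses the machine-learning convention (cross-correlation, no filter
    reversal), matching the paper's footnote on convolution.
    """
    K, N = X.shape
    W = D.shape[1]
    S_hat = np.zeros(N)
    for k in range(K):
        for t in np.flatnonzero(X[k]):
            end = min(t + W, N)  # truncate filters that run off the signal
            S_hat[t:end] += X[k, t] * D[k, :end - t]
    return S_hat

def l0_objective(S, X, D, alpha):
    """Evaluate 0.5 * ||S - X * D||^2 + alpha * ||X||_0 from Equation 2."""
    R = S - conv_reconstruct(X, D)
    return 0.5 * np.sum(R ** 2) + alpha * np.count_nonzero(X)
```

Because the penalty only counts non-zero entries, the objective is what a greedy method like matching pursuit implicitly trades off: each added atom reduces the residual term but costs a fixed alpha.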
Hierarchical sparse coding also contrasts with other methods such as multistage vector quantization (e.g. [11]), which encode information maximally at each level and only pass the residual information (i.e. the reconstruction error) to the next level in the hierarchy. These methods do not have the ability to learn high-level objects, since the objects are encoded into their most basic constituents at the very first levels, and the residual energy decreases rapidly towards zero. This is because low-level constituents are typically easier to learn and encode. The proposed method for hierarchical processing in a sparse coding framework is described in Equation 3. In this hierarchical decomposition scheme, the reconstruction is based on the sets of coefficients {X_1, X_2, ..., X_L} from L levels, with linear composition across levels based on their respective dictionaries {D_1, D_2, ..., D_L}. The sets of coefficients at all levels do not share redundant information, which makes the representation compact in terms of data compression. Note that the dictionaries are now multidimensional arrays (tensors) of rank 3, i.e. D_ℓ ∈ R^{K_ℓ × W_ℓ × K_{ℓ−1}}. The dictionary at level ℓ defines a set of K_ℓ bases (or convolution filters) of dimensionality W_ℓ × K_{ℓ−1}, where W_ℓ is the temporal length of the bases and K_{ℓ−1} is the number of features from the previous level. The variable S now defines a (temporal) sequence of length N_t, with feature vectors of dimensionality N_s, i.e. S = [s_1, ..., s_{N_t}] ∈ R^{N_t × N_s}. The set of coefficients X_ℓ at level ℓ is a multidimensional array of rank 3, i.e. X_ℓ ∈ R^{N_t × K_{ℓ−1} × K_ℓ}; these are also called feature maps.

Θ = {X_1, X_2, ..., X_L, D_1, D_2, ..., D_L}

Θ* = argmin_Θ (1/2) ||S − Σ_{ℓ=1}^{L} X_ℓ ∗ (⊛_{k=1}^{ℓ} D_k)||²_F + α Σ_{ℓ=1}^{L} ||X_ℓ||₀    (3)
with constraint ||d_{k,ℓ}|| = 1 for k ∈ {1, 2, ..., K_ℓ}, ℓ ∈ {1, 2, ..., L}
and operator ⊛ defined as a composition of convolutions: ⊛_{ℓ=1}^{L} D_ℓ = D_1 ∗ D_2 ∗ ... ∗ D_L

A. Greedy Inference for Hierarchical Sparse Coding

The hierarchical inference scheme is illustrated in Figure 1. We extended convolutional matching pursuit (e.g. [14]) and applied it to the hierarchy in a greedy level-wise manner. At higher levels of the hierarchy (i.e. ℓ > 1), we introduce two types of feature maps defined by X_ℓ^pass and X_ℓ^std: passthrough feature maps from the previous levels, and standard feature maps obtained directly at level ℓ from the decomposition with dictionary D_ℓ. The passthrough feature maps allow each level to forward to the next level the sparse activations that do not correlate well with the bases of dictionary D_ℓ. This can be implemented by considering during matching pursuit a second dictionary D^pass ∈ R^{K_ℓ^pass × W_ℓ × K_ℓ^pass} containing a single Kronecker delta function per dictionary basis, so that S ∗ D^pass = S. This corresponds to copying specific sparse activations from the input to the output feature maps X_ℓ^pass. K_ℓ^pass = Σ_{i=1}^{ℓ−1} K_i is the number of aggregated feature maps (both passthrough and standard) from the previous level. To allow learning features over multiple scales, the dictionary D_ℓ at each level can also cover the passthrough feature maps of the previous level, thus increasing its dimensionality to D_ℓ ∈ R^{K_ℓ × W_ℓ × (K_{ℓ−1}^pass + K_{ℓ−1})}. The hierarchical sparse coding inference algorithm SOLVE-HSC is described below. Note that more sophisticated matching pursuit methods can be integrated in SOLVE-MP, such as low complexity orthogonal matching pursuit (LoCOMP) [13]. The constant β allows to control the allocation of coefficients in the passthrough feature maps.

function SOLVE-HSC(S, {D_1, D_2, ..., D_L}, β)
    Solve for coefficients at the first level (no passthrough);
    note that X_1^pass will be an empty coefficient set
        X_1^pass, X_1^std ← SOLVE-MP(S, D_1, 0)
    Iterate for all higher levels (with passthrough)
        for ℓ = 2 to L do
            Concatenate passthrough and standard feature maps,
            then solve with the matching pursuit algorithm
                X_ℓ^pass, X_ℓ^std ← SOLVE-MP(X_{ℓ−1}^pass ⊕ X_{ℓ−1}^std, D_ℓ, β)
        end for
    Extract output coefficient sets from level L
        X_1, X_2, ..., X_{L−1} ← SPLIT(X_L^pass)
        X_L ← X_L^std
    Calculate the overall residual on the input
        R ← S − Σ_{ℓ=1}^{L} X_ℓ ∗ (⊛_{k=1}^{ℓ} D_k)
    return {X_1, X_2, ..., X_L}, R
end function

function SOLVE-MP(S, D, β)
    Create a passthrough dictionary such that S ∗ D^pass = S
        D^pass ← Kronecker delta functions
    Initialize residual and empty coefficient sets
        R_1 ← S
        X^pass, X^std ← ∅
        n ← 1
    Iterate over the residual until convergence at a given SNR
        while 10 log10(||S||²_F / ||R_n||²_F) < SNR_thres do
            Calculate convolutions with the dictionaries
                C^pass, C^std ← R_n ∗ D^pass, R_n ∗ D
            Select the best activated basis for both dictionaries, at
            respective dictionary index k and center position t
                d^pass, d^std ← argmax |C^pass|, argmax |C^std|
            Compare coefficients from both selected bases
                if β |C^pass| > |C^std| then
                    Add coefficient to the passthrough set
                        X^pass ← X^pass ∪ {C^pass}
                    Remove contribution of the selected basis from the residual
                        R_{n+1} ← R_n − C^pass · d_k^pass
                else
                    Add coefficient to the standard set
                        X^std ← X^std ∪ {C^std}
                    Remove contribution of the selected basis from the residual
                        R_{n+1} ← R_n − C^std · d_k^std
                end if
            n ← n + 1
        end while
    return {X^pass, X^std}
end function

IV. EXPERIMENT

A. Dataset Generation

Multiple synthetic datasets were created to evaluate the hierarchical encoding performance in terms of compression. The first dataset, named dataset-simple, has no overlap between objects in the decomposition and the generated signal when sparse activations are composed in time. The decomposition depends only on the previous level, and thus the dictionaries do not use the passthrough feature maps. The second dataset, named dataset-complex, has no such constraints on overlap, and is useful to assess the encoding of compositionality at multiple scales with complexity closer to real-world signals.
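A minimal 1-D, single-channel sketch of SOLVE-MP in Python (the paper's implementation language) might look as follows. Multi-channel feature maps, the center-position convention and boundary handling are simplified, so this illustrates the greedy passthrough-versus-standard selection rather than the full algorithm:

```python
import numpy as np

def solve_mp(S, D, beta, snr_thres=40.0, max_iter=1000):
    """Greedy convolutional matching pursuit with a passthrough option.

    S : (N,) 1-D input signal (one channel).
    D : (K, W) dictionary of unit-norm filters.
    The passthrough dictionary is the Kronecker delta, so selecting a
    passthrough atom simply copies one residual sample (S * D_pass = S).
    Returns the selected codes as (kind, filter_index, position, coeff)
    tuples, plus the final residual.
    """
    R = S.astype(float).copy()
    energy_S = np.sum(S ** 2)
    codes = []
    for _ in range(max_iter):
        # Stop once the reconstruction SNR reaches the threshold
        snr = 10.0 * np.log10(energy_S / max(np.sum(R ** 2), 1e-12))
        if snr >= snr_thres:
            break
        # Correlate residual with each filter (ML convention: no reversal)
        C = np.array([np.correlate(R, d, mode='valid') for d in D])
        k, t = np.unravel_index(np.argmax(np.abs(C)), C.shape)
        c_std = C[k, t]
        # The passthrough "correlation" is the residual itself
        t_pass = int(np.argmax(np.abs(R)))
        c_pass = R[t_pass]
        if beta * abs(c_pass) > abs(c_std):
            # Copy one residual sample to the passthrough set
            codes.append(('pass', 0, t_pass, float(c_pass)))
            R[t_pass] -= c_pass
        else:
            # Subtract the contribution of the selected filter
            codes.append(('std', int(k), int(t), float(c_std)))
            R[t:t + D.shape[1]] -= c_std * D[k]
    return codes, R
```

A large β biases the selection toward copying samples through (deferring them to the next level), while a small β forces explanation with the current level's dictionary, mirroring the role of β in SOLVE-MP above.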
The first-level dictionary bases were generated using 1-D Perlin noise [12]. Hanning windowing was used to make the bases localized in time and to avoid sharp discontinuities when overlaps occur. Higher-level dictionary bases were generated by uniformly sampling a fixed number N_d of activated sub-bases from the previous level. N_d corresponds to the decomposition size at each level. The relative random positions of the selected sub-bases were uniformly sampled in the time domain within the defined temporal length. The relative random coefficients of the selected sub-bases were uniformly sampled from a fixed interval. Table I gives the specifications of the dictionaries for the two datasets, where the input-level scales, sizes of the dictionaries and decomposition sizes vary across levels. Figure 2 shows some bases of the datasets.

TABLE I
SPECIFICATIONS OF THE GENERATED DATASETS: input-level scales, dictionary sizes (K) and decomposition sizes (N_d) per level, for dataset-simple and dataset-complex.

Fig. 2. Example of dictionary bases at each of the 4 levels in the hierarchy that was used to generate the datasets. The bold bars on the right indicate the temporal span of each basis (input-level scale), in samples. As the level increases, the bases become more complex and abstract. (a) Dictionaries for dataset-simple. (b) Dictionaries for dataset-complex.

For dataset-simple, the temporal signal was generated by concatenating the input-level representations of randomly chosen bases across all levels (with equal probabilities), since there is no overlapping. For dataset-complex, the temporal sequence was generated from the dictionaries using independent homogeneous Poisson processes to sample sparse activations (events) in the sets of coefficients {X_1, X_2, X_3, X_4}. Each dictionary basis was assigned a Poisson process with a fixed rate λ (in units of events/sample), chosen so that the generated signal was not temporally too compact or sparse. In both datasets, random coefficients for the sparse activations were uniformly sampled in an interval with upper bound 4.0. The length of the generated signals was 100,000 samples, with roughly 650 activations distributed across the various levels. Figure 3 shows segments of the datasets. We implemented the hierarchical sparse coding algorithm and dataset generation in Python using the Numpy and Scipy libraries.
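The generation procedure above can be sketched as follows. Smoothed Gaussian noise stands in for 1-D Perlin noise, and the rate and amplitude bounds passed in are illustrative placeholders rather than the datasets' exact values:

```python
import numpy as np

def make_first_level_bases(K, W, smooth=4, rng=None):
    """Generate K smooth, time-localized, unit-norm bases of length W.

    Low-pass filtered Gaussian noise stands in for 1-D Perlin noise; a
    Hanning window localizes each basis in time and avoids sharp
    discontinuities when overlaps occur.
    """
    rng = rng or np.random.default_rng(0)
    window = np.hanning(W)
    bases = []
    for _ in range(K):
        noise = rng.standard_normal(W + smooth)
        # Simple moving-average low-pass filter, then windowing
        b = np.convolve(noise, np.ones(smooth) / smooth, mode='valid')[:W]
        b *= window
        bases.append(b / np.linalg.norm(b))  # unit-norm constraint
    return np.stack(bases)  # shape (K, W)

def sample_activations(n_samples, n_bases, rate, amp_low, amp_high, rng=None):
    """Independent homogeneous Poisson process per dictionary basis.

    rate is in events/sample; event amplitudes are uniform in
    [amp_low, amp_high). Returns (basis_index, time, amplitude) events.
    """
    rng = rng or np.random.default_rng(0)
    events = []
    for k in range(n_bases):
        n_events = rng.poisson(rate * n_samples)  # expected count over the signal
        times = rng.integers(0, n_samples, size=n_events)
        amps = rng.uniform(amp_low, amp_high, size=n_events)
        events.extend(zip([k] * n_events, times.tolist(), amps.tolist()))
    return events
```

Sampling a Poisson count over the whole signal and placing events uniformly in time is equivalent to simulating the homogeneous process on a discrete grid.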
B. Evaluation Method and Results

In the experiments, the optimal (reference) dictionaries that generated the time series were given to the matching pursuit algorithm. This was to test the optimality of inference only, and not dictionary learning. To quantify the performance of the proposed inference algorithm, we evaluated the information rate at the output of the encoder when it is limited to a given maximum level ℓ_max, and compared it with the lower bound I_opt for that level. Equation 4 defines how the optimal information rate I_opt can be calculated for both datasets, based on the known coefficient sets that were used to generate the signals, and the known decomposition sizes. Equation 5 describes how the actual information rate I_act can be calculated from the output coefficients of an encoder at evaluation time. Note that there must be a tolerance on the reconstruction signal-to-noise ratio (SNR) during inference. We assumed 40 dB SNR to be sufficient for many types of signals (e.g. images, audio). We used 32-bit floating point precision for the amplitudes of the coefficients (i.e. I_fp = 32).

Fig. 3. Example of segments from the datasets: (a) segment for dataset-simple; (b) segment for dataset-complex. When averaged over a longer time scale, the information rate needed to encode each signal remains constant.

I_opt = Σ_{ℓ=1}^{ℓ_max} (||X_ℓ||₀ / N_t) (I_ℓ + I_fp) + Σ_{h=ℓ_max+1}^{L} ((||X_h||₀ Π_{k=ℓ_max+1}^{h} N_d^k) / N_t) (I_{ℓ_max} + I_fp)    (4)

where the first sum is the activation rate at each level ℓ ≤ ℓ_max, the second sum is the activation rate transferred from each higher level h down to level ℓ_max based on the decomposition sizes N_d, I_ℓ = log₂ L + log₂ K_ℓ + log₂ N_t are the indexing bits for level, basis and time, and I_fp are the amplitude bits.

I_act = Σ_{ℓ=1}^{ℓ_max} (||X_ℓ||₀ / N_t) (I_ℓ + I_fp)    (5)

Figure 4 shows that the weighting factor β can indeed control the distribution of non-zero coefficients amongst levels. Figure 5 shows that for a hierarchical inference task on dataset-simple, the framework achieves performance close to the lower bound of the information rate.
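Following Equation 5, the actual information rate can be computed from per-level activation counts. This sketch assumes the indexing-bits definition given in the text (log2 bits for level, basis and time, plus I_fp amplitude bits) and represents the coefficient sets simply as counts:

```python
import math

def actual_info_rate(counts, K, N_t, I_fp=32):
    """Actual information rate (bits/sample) at the encoder output (Eq. 5).

    counts[l] : number of non-zero coefficients ||X_l||_0 at level l
    K[l]      : dictionary size K_l at level l
    N_t       : signal length in samples
    Each activation costs indexing bits I_l = log2(L) + log2(K_l)
    + log2(N_t) for level, basis and time, plus I_fp amplitude bits.
    """
    L = len(counts)
    rate = 0.0
    for c, k in zip(counts, K):
        I_l = math.log2(L) + math.log2(k) + math.log2(N_t)  # indexing bits
        rate += (c / N_t) * (I_l + I_fp)
    return rate
```

This makes the tradeoff explicit: every extra activation adds a fixed per-event cost, so an encoder that allocates too many coefficients (e.g. because of interference) directly inflates the information rate.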
It means that multiple levels of the hierarchy are used effectively, assuming the proper choice of β. If β is too low, all sparse activations will be reconstructed from the higher-level dictionaries, which may not be adapted and thus significantly degrade compression

Fig. 4. Distribution of non-zero coefficients inferred across levels (Level 1 to Level 4) for dataset-simple, as a function of the weighting factor β. Lower β values (e.g. β = 0.75) increased the percentage of higher-level activations. When β was too high, all was eventually encoded by the first level only.

Fig. 5. Optimality of the hierarchical inference on the datasets, in information rate [bit/sample], as a function of the maximum level ℓ_max considered. On dataset-simple (shown on the left), an intermediate value of the weighting factor β achieved the best I_act performance, near the lower bound defined by I_opt. Too low a weighting (β = 0.75) leads to less efficient encoding with levels. Too high a weighting (β = 2.0) leads to no decrease in the information rate I_act with levels. On dataset-complex (shown on the right), the inference failed to find an efficient encoding for all values of β because of the limitations of convolutional matching pursuit.

performance. If β is too high, the input will be encoded by the first level only and propagated up in the hierarchy only by the passthrough feature maps. The same analysis on dataset-complex led to very different results when we tested the limits of convolutional matching pursuit, indicating that it can struggle on complex signals where overlapping between objects occurs frequently. For all values of β tested, we observed an increase in the information rate as more levels are considered at the output of the encoder. It was observed that using L0-norm regularization during inference would add noise to the residual that needed to be removed later, and thus required a significant number of bits to encode. We also observed similar results (not shown) when using the LoCOMP [13] variant of matching pursuit.
Allowing dictionary learning in the optimization process (see Equation 3) could solve this problem, as the higher-level dictionaries can be adapted to the noise introduced at inference due to matching pursuit.

V. CONCLUSION

In this work, we proposed a hierarchical sparse coding algorithm to efficiently encode an input signal into compact and non-redundant codes distributed across levels of the hierarchy. The main idea is to represent features at the abstraction level at which they naturally appear, rather than re-encode them through the higher-level features of the hierarchy. With the proposed method, it is possible to empirically achieve near-optimal representations in terms of signal compression, as indicated by the information rate needed to transmit the output sparse activations. The experiments highlighted the important role of the greedy inference algorithm at each level. Poor inference, such as allocating too many coefficients because of interference effects, directly impacts the information rate, as seen with convolutional matching pursuit. This effect was observed on complex signals, where the compression ability vanished because of the interference-related variability. Future work should aim at solving this problem, which is primarily due to the usage of L0-norm regularization.

ACKNOWLEDGMENT

The authors would like to thank the ERA-NET (CHIST-ERA) and FRQNT organizations for funding this research as part of the European IGLU project.

REFERENCES

[1] Y. Bengio, A. Courville, and P. Vincent, "Representation learning: A review and new perspectives," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798-1828, 2013.
[2] R. Cilibrasi and P. Vitányi, "Clustering by compression," IEEE Transactions on Information Theory, vol. 51, no. 4, pp. 1523-1545, 2005.
[3] A. Barron, J. Rissanen, and B. Yu, "The minimum description length principle in coding and modeling," IEEE Transactions on Information Theory, vol. 44, no. 6, pp. 2743-2760, 1998.
[4] I. Ramirez and G. Sapiro, "An MDL framework for sparse coding and dictionary learning," IEEE Transactions on Signal Processing, vol. 60, no. 6, 2012.
[5] J. Ziv and A. Lempel, "A universal algorithm for sequential data compression," IEEE Transactions on Information Theory, vol. 23, no. 3, pp. 337-343, 1977.
[6] R. Rubinstein, A. M. Bruckstein, and M. Elad, "Dictionaries for sparse representation modeling," Proceedings of the IEEE, vol. 98, no. 6, pp. 1045-1057, 2010.
[7] B. K. Natarajan, "Sparse approximate solutions to linear systems," SIAM Journal on Computing, vol. 24, no. 2, pp. 227-234, 1995.
[8] L. Bo, X. Ren, and D. Fox, "Multipath sparse coding using hierarchical matching pursuit," in IEEE Conference on Computer Vision and Pattern Recognition, 2013.
[9] K. Yu, Y. Lin, and J. Lafferty, "Learning image representations from the pixel level via hierarchical sparse coding," in IEEE Conference on Computer Vision and Pattern Recognition, 2011.
[10] S. Mallat, "Multiresolution representations and wavelets," doctoral dissertation, University of Pennsylvania, 1988.
[11] B.-H. Juang and A. Gray, "Multiple stage vector quantization for speech coding," in IEEE International Conference on Acoustics, Speech, and Signal Processing, 1982.
[12] K. Perlin, "Improving noise," ACM Transactions on Graphics, vol. 21, no. 3, pp. 681-682, 2002.
[13] B. Mailhé, R. Gribonval, F. Bimbot, and P. Vandergheynst, "A low complexity orthogonal matching pursuit for sparse signal approximation with shift-invariant dictionaries," in IEEE International Conference on Acoustics, Speech and Signal Processing, 2009.
[14] M. S. Lewicki and T. J. Sejnowski, "Coding time-varying signals using sparse, shift-invariant representations," in Advances in Neural Information Processing Systems, 1999.


More information

Scalable Spectrum Allocation for Large Networks Based on Sparse Optimization

Scalable Spectrum Allocation for Large Networks Based on Sparse Optimization Scaabe Spectrum ocation for Large Networks ased on Sparse Optimization innan Zhuang Modem R&D Lab Samsung Semiconductor, Inc. San Diego, C Dongning Guo, Ermin Wei, and Michae L. Honig Department of Eectrica

More information

FRST Multivariate Statistics. Multivariate Discriminant Analysis (MDA)

FRST Multivariate Statistics. Multivariate Discriminant Analysis (MDA) 1 FRST 531 -- Mutivariate Statistics Mutivariate Discriminant Anaysis (MDA) Purpose: 1. To predict which group (Y) an observation beongs to based on the characteristics of p predictor (X) variabes, using

More information

Melodic contour estimation with B-spline models using a MDL criterion

Melodic contour estimation with B-spline models using a MDL criterion Meodic contour estimation with B-spine modes using a MDL criterion Damien Loive, Ney Barbot, Oivier Boeffard IRISA / University of Rennes 1 - ENSSAT 6 rue de Kerampont, B.P. 80518, F-305 Lannion Cedex

More information

Sequential Decoding of Polar Codes with Arbitrary Binary Kernel

Sequential Decoding of Polar Codes with Arbitrary Binary Kernel Sequentia Decoding of Poar Codes with Arbitrary Binary Kerne Vera Miosavskaya, Peter Trifonov Saint-Petersburg State Poytechnic University Emai: veram,petert}@dcn.icc.spbstu.ru Abstract The probem of efficient

More information

8 Digifl'.11 Cth:uits and devices

8 Digifl'.11 Cth:uits and devices 8 Digif'. Cth:uits and devices 8. Introduction In anaog eectronics, votage is a continuous variabe. This is usefu because most physica quantities we encounter are continuous: sound eves, ight intensity,

More information

Cryptanalysis of PKP: A New Approach

Cryptanalysis of PKP: A New Approach Cryptanaysis of PKP: A New Approach Éiane Jaumes and Antoine Joux DCSSI 18, rue du Dr. Zamenhoff F-92131 Issy-es-Mx Cedex France eiane.jaumes@wanadoo.fr Antoine.Joux@ens.fr Abstract. Quite recenty, in

More information

A Brief Introduction to Markov Chains and Hidden Markov Models

A Brief Introduction to Markov Chains and Hidden Markov Models A Brief Introduction to Markov Chains and Hidden Markov Modes Aen B MacKenzie Notes for December 1, 3, &8, 2015 Discrete-Time Markov Chains You may reca that when we first introduced random processes,

More information

Moreau-Yosida Regularization for Grouped Tree Structure Learning

Moreau-Yosida Regularization for Grouped Tree Structure Learning Moreau-Yosida Reguarization for Grouped Tree Structure Learning Jun Liu Computer Science and Engineering Arizona State University J.Liu@asu.edu Jieping Ye Computer Science and Engineering Arizona State

More information

Statistical Learning Theory: a Primer

Statistical Learning Theory: a Primer ??,??, 1 6 (??) c?? Kuwer Academic Pubishers, Boston. Manufactured in The Netherands. Statistica Learning Theory: a Primer THEODOROS EVGENIOU AND MASSIMILIANO PONTIL Center for Bioogica and Computationa

More information

hole h vs. e configurations: l l for N > 2 l + 1 J = H as example of localization, delocalization, tunneling ikx k

hole h vs. e configurations: l l for N > 2 l + 1 J = H as example of localization, delocalization, tunneling ikx k Infinite 1-D Lattice CTDL, pages 1156-1168 37-1 LAST TIME: ( ) ( ) + N + 1 N hoe h vs. e configurations: for N > + 1 e rij unchanged ζ( NLS) ζ( NLS) [ ζn unchanged ] Hund s 3rd Rue (Lowest L - S term of

More information

Computing Spherical Transform and Convolution on the 2-Sphere

Computing Spherical Transform and Convolution on the 2-Sphere Computing Spherica Transform and Convoution on the 2-Sphere Boon Thye Thomas Yeo ythomas@mit.edu May, 25 Abstract We propose a simpe extension to the Least-Squares method of projecting sampes of an unknown

More information

A Fundamental Storage-Communication Tradeoff in Distributed Computing with Straggling Nodes

A Fundamental Storage-Communication Tradeoff in Distributed Computing with Straggling Nodes A Fundamenta Storage-Communication Tradeoff in Distributed Computing with Stragging odes ifa Yan, Michèe Wigger LTCI, Téécom ParisTech 75013 Paris, France Emai: {qifa.yan, michee.wigger} @teecom-paristech.fr

More information

Structured sparsity for automatic music transcription

Structured sparsity for automatic music transcription Structured sparsity for automatic music transcription O'Hanon, K; Nagano, H; Pumbey, MD; Internationa Conference on Acoustics, Speech and Signa Processing (ICASSP 2012) For additiona information about

More information

ESTIMATION OF SAMPLING TIME MISALIGNMENTS IN IFDMA UPLINK

ESTIMATION OF SAMPLING TIME MISALIGNMENTS IN IFDMA UPLINK ESTIMATION OF SAMPLING TIME MISALIGNMENTS IN IFDMA UPLINK Aexander Arkhipov, Michae Schne German Aerospace Center DLR) Institute of Communications and Navigation Oberpfaffenhofen, 8224 Wessing, Germany

More information

Inductive Bias: How to generalize on novel data. CS Inductive Bias 1

Inductive Bias: How to generalize on novel data. CS Inductive Bias 1 Inductive Bias: How to generaize on nove data CS 478 - Inductive Bias 1 Overfitting Noise vs. Exceptions CS 478 - Inductive Bias 2 Non-Linear Tasks Linear Regression wi not generaize we to the task beow

More information

MARKOV CHAINS AND MARKOV DECISION THEORY. Contents

MARKOV CHAINS AND MARKOV DECISION THEORY. Contents MARKOV CHAINS AND MARKOV DECISION THEORY ARINDRIMA DATTA Abstract. In this paper, we begin with a forma introduction to probabiity and expain the concept of random variabes and stochastic processes. After

More information

Centralized Coded Caching of Correlated Contents

Centralized Coded Caching of Correlated Contents Centraized Coded Caching of Correated Contents Qianqian Yang and Deniz Gündüz Information Processing and Communications Lab Department of Eectrica and Eectronic Engineering Imperia Coege London arxiv:1711.03798v1

More information

FORECASTING TELECOMMUNICATIONS DATA WITH AUTOREGRESSIVE INTEGRATED MOVING AVERAGE MODELS

FORECASTING TELECOMMUNICATIONS DATA WITH AUTOREGRESSIVE INTEGRATED MOVING AVERAGE MODELS FORECASTING TEECOMMUNICATIONS DATA WITH AUTOREGRESSIVE INTEGRATED MOVING AVERAGE MODES Niesh Subhash naawade a, Mrs. Meenakshi Pawar b a SVERI's Coege of Engineering, Pandharpur. nieshsubhash15@gmai.com

More information

Target Location Estimation in Wireless Sensor Networks Using Binary Data

Target Location Estimation in Wireless Sensor Networks Using Binary Data Target Location stimation in Wireess Sensor Networks Using Binary Data Ruixin Niu and Pramod K. Varshney Department of ectrica ngineering and Computer Science Link Ha Syracuse University Syracuse, NY 344

More information

NEW DEVELOPMENT OF OPTIMAL COMPUTING BUDGET ALLOCATION FOR DISCRETE EVENT SIMULATION

NEW DEVELOPMENT OF OPTIMAL COMPUTING BUDGET ALLOCATION FOR DISCRETE EVENT SIMULATION NEW DEVELOPMENT OF OPTIMAL COMPUTING BUDGET ALLOCATION FOR DISCRETE EVENT SIMULATION Hsiao-Chang Chen Dept. of Systems Engineering University of Pennsyvania Phiadephia, PA 904-635, U.S.A. Chun-Hung Chen

More information

Haar Decomposition and Reconstruction Algorithms

Haar Decomposition and Reconstruction Algorithms Jim Lambers MAT 773 Fa Semester 018-19 Lecture 15 and 16 Notes These notes correspond to Sections 4.3 and 4.4 in the text. Haar Decomposition and Reconstruction Agorithms Decomposition Suppose we approximate

More information

C. Fourier Sine Series Overview

C. Fourier Sine Series Overview 12 PHILIP D. LOEWEN C. Fourier Sine Series Overview Let some constant > be given. The symboic form of the FSS Eigenvaue probem combines an ordinary differentia equation (ODE) on the interva (, ) with a

More information

Radar/ESM Tracking of Constant Velocity Target : Comparison of Batch (MLE) and EKF Performance

Radar/ESM Tracking of Constant Velocity Target : Comparison of Batch (MLE) and EKF Performance adar/ racing of Constant Veocity arget : Comparison of Batch (LE) and EKF Performance I. Leibowicz homson-csf Deteis/IISA La cef de Saint-Pierre 1 Bd Jean ouin 7885 Eancourt Cede France Isabee.Leibowicz

More information

STA 216 Project: Spline Approach to Discrete Survival Analysis

STA 216 Project: Spline Approach to Discrete Survival Analysis : Spine Approach to Discrete Surviva Anaysis November 4, 005 1 Introduction Athough continuous surviva anaysis differs much from the discrete surviva anaysis, there is certain ink between the two modeing

More information

Expectation-Maximization for Estimating Parameters for a Mixture of Poissons

Expectation-Maximization for Estimating Parameters for a Mixture of Poissons Expectation-Maximization for Estimating Parameters for a Mixture of Poissons Brandon Maone Department of Computer Science University of Hesini February 18, 2014 Abstract This document derives, in excrutiating

More information

Appendix A: MATLAB commands for neural networks

Appendix A: MATLAB commands for neural networks Appendix A: MATLAB commands for neura networks 132 Appendix A: MATLAB commands for neura networks p=importdata('pn.xs'); t=importdata('tn.xs'); [pn,meanp,stdp,tn,meant,stdt]=prestd(p,t); for m=1:10 net=newff(minmax(pn),[m,1],{'tansig','purein'},'trainm');

More information

Emmanuel Abbe Colin Sandon

Emmanuel Abbe Colin Sandon Detection in the stochastic bock mode with mutipe custers: proof of the achievabiity conjectures, acycic BP, and the information-computation gap Emmanue Abbe Coin Sandon Abstract In a paper that initiated

More information

Automobile Prices in Market Equilibrium. Berry, Pakes and Levinsohn

Automobile Prices in Market Equilibrium. Berry, Pakes and Levinsohn Automobie Prices in Market Equiibrium Berry, Pakes and Levinsohn Empirica Anaysis of demand and suppy in a differentiated products market: equiibrium in the U.S. automobie market. Oigopoistic Differentiated

More information

A proposed nonparametric mixture density estimation using B-spline functions

A proposed nonparametric mixture density estimation using B-spline functions A proposed nonparametric mixture density estimation using B-spine functions Atizez Hadrich a,b, Mourad Zribi a, Afif Masmoudi b a Laboratoire d Informatique Signa et Image de a Côte d Opae (LISIC-EA 4491),

More information

Technical Appendix for Voting, Speechmaking, and the Dimensions of Conflict in the US Senate

Technical Appendix for Voting, Speechmaking, and the Dimensions of Conflict in the US Senate Technica Appendix for Voting, Speechmaking, and the Dimensions of Confict in the US Senate In Song Kim John Londregan Marc Ratkovic Juy 6, 205 Abstract We incude here severa technica appendices. First,

More information

Schedulability Analysis of Deferrable Scheduling Algorithms for Maintaining Real-Time Data Freshness

Schedulability Analysis of Deferrable Scheduling Algorithms for Maintaining Real-Time Data Freshness 1 Scheduabiity Anaysis of Deferrabe Scheduing Agorithms for Maintaining Rea-Time Data Freshness Song Han, Deji Chen, Ming Xiong, Kam-yiu Lam, Aoysius K. Mok, Krithi Ramamritham UT Austin, Emerson Process

More information

SVM: Terminology 1(6) SVM: Terminology 2(6)

SVM: Terminology 1(6) SVM: Terminology 2(6) Andrew Kusiak Inteigent Systems Laboratory 39 Seamans Center he University of Iowa Iowa City, IA 54-57 SVM he maxima margin cassifier is simiar to the perceptron: It aso assumes that the data points are

More information

A Robust Voice Activity Detection based on Noise Eigenspace Projection

A Robust Voice Activity Detection based on Noise Eigenspace Projection A Robust Voice Activity Detection based on Noise Eigenspace Projection Dongwen Ying 1, Yu Shi 2, Frank Soong 2, Jianwu Dang 1, and Xugang Lu 1 1 Japan Advanced Institute of Science and Technoogy, Nomi

More information

Supervised i-vector Modeling - Theory and Applications

Supervised i-vector Modeling - Theory and Applications Supervised i-vector Modeing - Theory and Appications Shreyas Ramoji, Sriram Ganapathy Learning and Extraction of Acoustic Patterns LEAP) Lab, Eectrica Engineering, Indian Institute of Science, Bengauru,

More information

Section 6: Magnetostatics

Section 6: Magnetostatics agnetic fieds in matter Section 6: agnetostatics In the previous sections we assumed that the current density J is a known function of coordinates. In the presence of matter this is not aways true. The

More information

First-Order Corrections to Gutzwiller s Trace Formula for Systems with Discrete Symmetries

First-Order Corrections to Gutzwiller s Trace Formula for Systems with Discrete Symmetries c 26 Noninear Phenomena in Compex Systems First-Order Corrections to Gutzwier s Trace Formua for Systems with Discrete Symmetries Hoger Cartarius, Jörg Main, and Günter Wunner Institut für Theoretische

More information

Study on Fusion Algorithm of Multi-source Image Based on Sensor and Computer Image Processing Technology

Study on Fusion Algorithm of Multi-source Image Based on Sensor and Computer Image Processing Technology Sensors & Transducers 3 by IFSA http://www.sensorsporta.com Study on Fusion Agorithm of Muti-source Image Based on Sensor and Computer Image Processing Technoogy Yao NAN, Wang KAISHENG, 3 Yu JIN The Information

More information

A unified framework for Regularization Networks and Support Vector Machines. Theodoros Evgeniou, Massimiliano Pontil, Tomaso Poggio

A unified framework for Regularization Networks and Support Vector Machines. Theodoros Evgeniou, Massimiliano Pontil, Tomaso Poggio MASSACHUSETTS INSTITUTE OF TECHNOLOGY ARTIFICIAL INTELLIGENCE LABORATORY and CENTER FOR BIOLOGICAL AND COMPUTATIONAL LEARNING DEPARTMENT OF BRAIN AND COGNITIVE SCIENCES A.I. Memo No. 1654 March23, 1999

More information

Optimal Control of Assembly Systems with Multiple Stages and Multiple Demand Classes 1

Optimal Control of Assembly Systems with Multiple Stages and Multiple Demand Classes 1 Optima Contro of Assemby Systems with Mutipe Stages and Mutipe Demand Casses Saif Benjaafar Mohsen EHafsi 2 Chung-Yee Lee 3 Weihua Zhou 3 Industria & Systems Engineering, Department of Mechanica Engineering,

More information

Uniprocessor Feasibility of Sporadic Tasks with Constrained Deadlines is Strongly conp-complete

Uniprocessor Feasibility of Sporadic Tasks with Constrained Deadlines is Strongly conp-complete Uniprocessor Feasibiity of Sporadic Tasks with Constrained Deadines is Strongy conp-compete Pontus Ekberg and Wang Yi Uppsaa University, Sweden Emai: {pontus.ekberg yi}@it.uu.se Abstract Deciding the feasibiity

More information

Explicit overall risk minimization transductive bound

Explicit overall risk minimization transductive bound 1 Expicit overa risk minimization transductive bound Sergio Decherchi, Paoo Gastado, Sandro Ridea, Rodofo Zunino Dept. of Biophysica and Eectronic Engineering (DIBE), Genoa University Via Opera Pia 11a,

More information

LINK BETWEEN THE JOINT DIAGONALISATION OF SYMMETRICAL CUBES AND PARAFAC: AN APPLICATION TO SECONDARY SURVEILLANCE RADAR

LINK BETWEEN THE JOINT DIAGONALISATION OF SYMMETRICAL CUBES AND PARAFAC: AN APPLICATION TO SECONDARY SURVEILLANCE RADAR LINK BETWEEN THE JOINT DIAGONALISATION OF SYMMETRICAL CUBES AND PARAFAC: AN APPLICATION TO SECONDARY SURVEILLANCE RADAR Nicoas PETROCHILOS CReSTIC, University of Reims Mouins de a Housse, BP 1039 51687

More information

Active Learning & Experimental Design

Active Learning & Experimental Design Active Learning & Experimenta Design Danie Ting Heaviy modified, of course, by Lye Ungar Origina Sides by Barbara Engehardt and Aex Shyr Lye Ungar, University of Pennsyvania Motivation u Data coection

More information

BDD-Based Analysis of Gapped q-gram Filters

BDD-Based Analysis of Gapped q-gram Filters BDD-Based Anaysis of Gapped q-gram Fiters Marc Fontaine, Stefan Burkhardt 2 and Juha Kärkkäinen 2 Max-Panck-Institut für Informatik Stuhsatzenhausweg 85, 6623 Saarbrücken, Germany e-mai: stburk@mpi-sb.mpg.de

More information

PREDICTION OF DEFORMED AND ANNEALED MICROSTRUCTURES USING BAYESIAN NEURAL NETWORKS AND GAUSSIAN PROCESSES

PREDICTION OF DEFORMED AND ANNEALED MICROSTRUCTURES USING BAYESIAN NEURAL NETWORKS AND GAUSSIAN PROCESSES PREDICTION OF DEFORMED AND ANNEALED MICROSTRUCTURES USING BAYESIAN NEURAL NETWORKS AND GAUSSIAN PROCESSES C.A.L. Baier-Jones, T.J. Sabin, D.J.C. MacKay, P.J. Withers Department of Materias Science and

More information

Efficiently Generating Random Bits from Finite State Markov Chains

Efficiently Generating Random Bits from Finite State Markov Chains 1 Efficienty Generating Random Bits from Finite State Markov Chains Hongchao Zhou and Jehoshua Bruck, Feow, IEEE Abstract The probem of random number generation from an uncorreated random source (of unknown

More information

BALANCING REGULAR MATRIX PENCILS

BALANCING REGULAR MATRIX PENCILS BALANCING REGULAR MATRIX PENCILS DAMIEN LEMONNIER AND PAUL VAN DOOREN Abstract. In this paper we present a new diagona baancing technique for reguar matrix pencis λb A, which aims at reducing the sensitivity

More information

Asymptotic Properties of a Generalized Cross Entropy Optimization Algorithm

Asymptotic Properties of a Generalized Cross Entropy Optimization Algorithm 1 Asymptotic Properties of a Generaized Cross Entropy Optimization Agorithm Zijun Wu, Michae Koonko, Institute for Appied Stochastics and Operations Research, Caustha Technica University Abstract The discrete

More information

SVM-based Supervised and Unsupervised Classification Schemes

SVM-based Supervised and Unsupervised Classification Schemes SVM-based Supervised and Unsupervised Cassification Schemes LUMINITA STATE University of Pitesti Facuty of Mathematics and Computer Science 1 Targu din Vae St., Pitesti 110040 ROMANIA state@cicknet.ro

More information

Efficient Visual-Inertial Navigation using a Rolling-Shutter Camera with Inaccurate Timestamps

Efficient Visual-Inertial Navigation using a Rolling-Shutter Camera with Inaccurate Timestamps Efficient Visua-Inertia Navigation using a Roing-Shutter Camera with Inaccurate Timestamps Chao X. Guo, Dimitrios G. Kottas, Ryan C. DuToit Ahmed Ahmed, Ruipeng Li and Stergios I. Roumeiotis Mutipe Autonomous

More information

arxiv: v1 [quant-ph] 4 Nov 2008

arxiv: v1 [quant-ph] 4 Nov 2008 Training a Binary Cassifier with the Quantum Adiabatic Agorithm Hartmut Neven Googe, neven@googe.com arxiv:811.416v1 [quant-ph] 4 Nov 28 Vasi S. Denchev Purdue University, vdenchev@purdue.edu Geordie Rose

More information

Discrete Techniques. Chapter Introduction

Discrete Techniques. Chapter Introduction Chapter 3 Discrete Techniques 3. Introduction In the previous two chapters we introduced Fourier transforms of continuous functions of the periodic and non-periodic (finite energy) type, we as various

More information

Multilayer Kerceptron

Multilayer Kerceptron Mutiayer Kerceptron Zotán Szabó, András Lőrincz Department of Information Systems, Facuty of Informatics Eötvös Loránd University Pázmány Péter sétány 1/C H-1117, Budapest, Hungary e-mai: szzoi@csetehu,

More information

Effective Appearance Model and Similarity Measure for Particle Filtering and Visual Tracking

Effective Appearance Model and Similarity Measure for Particle Filtering and Visual Tracking Effective Appearance Mode and Simiarity Measure for Partice Fitering and Visua Tracking Hanzi Wang David Suter and Konrad Schinder Institute for Vision Systems Engineering Department of Eectrica and Computer

More information

Fitting Algorithms for MMPP ATM Traffic Models

Fitting Algorithms for MMPP ATM Traffic Models Fitting Agorithms for PP AT Traffic odes A. Nogueira, P. Savador, R. Vaadas University of Aveiro / Institute of Teecommunications, 38-93 Aveiro, Portuga; e-mai: (nogueira, savador, rv)@av.it.pt ABSTRACT

More information

A Better Way to Pretrain Deep Boltzmann Machines

A Better Way to Pretrain Deep Boltzmann Machines A Better Way to Pretrain Deep Botzmann Machines Rusan Saakhutdino Department of Statistics and Computer Science Uniersity of Toronto rsaakhu@cs.toronto.edu Geoffrey Hinton Department of Computer Science

More information

Discrete Techniques. Chapter Introduction

Discrete Techniques. Chapter Introduction Chapter 3 Discrete Techniques 3. Introduction In the previous two chapters we introduced Fourier transforms of continuous functions of the periodic and non-periodic (finite energy) type, as we as various

More information

Converting Z-number to Fuzzy Number using. Fuzzy Expected Value

Converting Z-number to Fuzzy Number using. Fuzzy Expected Value ISSN 1746-7659, Engand, UK Journa of Information and Computing Science Vo. 1, No. 4, 017, pp.91-303 Converting Z-number to Fuzzy Number using Fuzzy Expected Vaue Mahdieh Akhbari * Department of Industria

More information

Unconditional security of differential phase shift quantum key distribution

Unconditional security of differential phase shift quantum key distribution Unconditiona security of differentia phase shift quantum key distribution Kai Wen, Yoshihisa Yamamoto Ginzton Lab and Dept of Eectrica Engineering Stanford University Basic idea of DPS-QKD Protoco. Aice

More information

Research of Data Fusion Method of Multi-Sensor Based on Correlation Coefficient of Confidence Distance

Research of Data Fusion Method of Multi-Sensor Based on Correlation Coefficient of Confidence Distance Send Orders for Reprints to reprints@benthamscience.ae 340 The Open Cybernetics & Systemics Journa, 015, 9, 340-344 Open Access Research of Data Fusion Method of Muti-Sensor Based on Correation Coefficient

More information

arxiv: v1 [math.fa] 23 Aug 2018

arxiv: v1 [math.fa] 23 Aug 2018 An Exact Upper Bound on the L p Lebesgue Constant and The -Rényi Entropy Power Inequaity for Integer Vaued Random Variabes arxiv:808.0773v [math.fa] 3 Aug 08 Peng Xu, Mokshay Madiman, James Mebourne Abstract

More information

Seasonal Time Series Data Forecasting by Using Neural Networks Multiscale Autoregressive Model

Seasonal Time Series Data Forecasting by Using Neural Networks Multiscale Autoregressive Model American ourna of Appied Sciences 7 (): 372-378, 2 ISSN 546-9239 2 Science Pubications Seasona Time Series Data Forecasting by Using Neura Networks Mutiscae Autoregressive Mode Suhartono, B.S.S. Uama and

More information

Bayesian Unscented Kalman Filter for State Estimation of Nonlinear and Non-Gaussian Systems

Bayesian Unscented Kalman Filter for State Estimation of Nonlinear and Non-Gaussian Systems Bayesian Unscented Kaman Fiter for State Estimation of Noninear and Non-aussian Systems Zhong Liu, Shing-Chow Chan, Ho-Chun Wu and iafei Wu Department of Eectrica and Eectronic Engineering, he University

More information

Schedulability Analysis of Deferrable Scheduling Algorithms for Maintaining Real-Time Data Freshness

Schedulability Analysis of Deferrable Scheduling Algorithms for Maintaining Real-Time Data Freshness 1 Scheduabiity Anaysis of Deferrabe Scheduing Agorithms for Maintaining Rea- Data Freshness Song Han, Deji Chen, Ming Xiong, Kam-yiu Lam, Aoysius K. Mok, Krithi Ramamritham UT Austin, Emerson Process Management,

More information

(This is a sample cover image for this issue. The actual cover is not yet available at this time.)

(This is a sample cover image for this issue. The actual cover is not yet available at this time.) (This is a sampe cover image for this issue The actua cover is not yet avaiabe at this time) This artice appeared in a journa pubished by Esevier The attached copy is furnished to the author for interna

More information

From Margins to Probabilities in Multiclass Learning Problems

From Margins to Probabilities in Multiclass Learning Problems From Margins to Probabiities in Muticass Learning Probems Andrea Passerini and Massimiiano Ponti 2 and Paoo Frasconi 3 Abstract. We study the probem of muticass cassification within the framework of error

More information

Asynchronous Control for Coupled Markov Decision Systems

Asynchronous Control for Coupled Markov Decision Systems INFORMATION THEORY WORKSHOP (ITW) 22 Asynchronous Contro for Couped Marov Decision Systems Michae J. Neey University of Southern Caifornia Abstract This paper considers optima contro for a coection of

More information

arxiv: v2 [cond-mat.stat-mech] 14 Nov 2008

arxiv: v2 [cond-mat.stat-mech] 14 Nov 2008 Random Booean Networks Barbara Drosse Institute of Condensed Matter Physics, Darmstadt University of Technoogy, Hochschustraße 6, 64289 Darmstadt, Germany (Dated: June 27) arxiv:76.335v2 [cond-mat.stat-mech]

More information

Probabilistic Graphical Models

Probabilistic Graphical Models Schoo of Computer Science Probabiistic Graphica Modes Gaussian graphica modes and Ising modes: modeing networks Eric Xing Lecture 0, February 0, 07 Reading: See cass website Eric Xing @ CMU, 005-07 Network

More information

<C 2 2. λ 2 l. λ 1 l 1 < C 1

<C 2 2. λ 2 l. λ 1 l 1 < C 1 Teecommunication Network Contro and Management (EE E694) Prof. A. A. Lazar Notes for the ecture of 7/Feb/95 by Huayan Wang (this document was ast LaT E X-ed on May 9,995) Queueing Primer for Muticass Optima

More information

arxiv: v1 [math.ca] 6 Mar 2017

arxiv: v1 [math.ca] 6 Mar 2017 Indefinite Integras of Spherica Besse Functions MIT-CTP/487 arxiv:703.0648v [math.ca] 6 Mar 07 Joyon K. Boomfied,, Stephen H. P. Face,, and Zander Moss, Center for Theoretica Physics, Laboratory for Nucear

More information

HILBERT? What is HILBERT? Matlab Implementation of Adaptive 2D BEM. Dirk Praetorius. Features of HILBERT

HILBERT? What is HILBERT? Matlab Implementation of Adaptive 2D BEM. Dirk Praetorius. Features of HILBERT Söerhaus-Workshop 2009 October 16, 2009 What is HILBERT? HILBERT Matab Impementation of Adaptive 2D BEM joint work with M. Aurada, M. Ebner, S. Ferraz-Leite, P. Godenits, M. Karkuik, M. Mayr Hibert Is

More information

Reliability: Theory & Applications No.3, September 2006

Reliability: Theory & Applications No.3, September 2006 REDUNDANCY AND RENEWAL OF SERVERS IN OPENED QUEUING NETWORKS G. Sh. Tsitsiashvii M.A. Osipova Vadivosto, Russia 1 An opened queuing networ with a redundancy and a renewa of servers is considered. To cacuate

More information

Progressive Correction for Deterministic Dirac Mixture Approximations

Progressive Correction for Deterministic Dirac Mixture Approximations 4th Internationa Conference on Information Fusion Chicago, Iinois, USA, Juy 5-8, Progressive Correction for Deterministic Dirac Miture Approimations Patrick Ruoff, Peter Krauthausen, and Uwe D. Hanebeck

More information

Chemical Kinetics Part 2

Chemical Kinetics Part 2 Integrated Rate Laws Chemica Kinetics Part 2 The rate aw we have discussed thus far is the differentia rate aw. Let us consider the very simpe reaction: a A à products The differentia rate reates the rate

More information