Compression and Predictive Distributions for Large Alphabet i.i.d and Markov models


2014 IEEE International Symposium on Information Theory

Xiao Yang and Andrew R. Barron, Department of Statistics, Yale University, New Haven, CT

Abstract: This paper considers coding and predicting sequences of random variables generated from a large alphabet. We start from the i.i.d model and propose a simple coding distribution, formulated as a product of tilted Poisson distributions, which achieves close to optimal performance. We then extend to Markov models and, in particular, tree sources. A context tree based algorithm is designed according to the frequency of the various contexts in the data. It is a greedy algorithm which seeks the greatest savings in codelength when constructing the tree. Compression and prediction of the individual counts associated with the contexts again uses a product of tilted Poisson distributions. Implementing this method on a Chinese novel, about 20.56% savings in codelength is achieved compared to the i.i.d model.

I. INTRODUCTION

Large alphabet data compression does not have the luxury of diminishing per-symbol redundancy. Luckily, symbols in a large alphabet are usually not equally probable in practice. For example, a subset of 964 Chinese characters covers 90% of usage in Chinese [1], though the vocabulary is no smaller than 106,230 characters in total [2]. Coding and prediction of strings of random variables generated from an i.i.d model have been considered in the large alphabet setting under the restriction that the ordered probability or count list is rapidly decreasing [3], or satisfies an envelope class property [4]. Although the i.i.d model is not the best for compression or prediction when there is dependence between successive characters, it serves as an analytical tool on which more complicated models can be based, and it helps in understanding the behavior of coding and predictive distributions. In this paper, we propose a simple coding method particularly applicable to large alphabet compression and prediction problems, which also performs almost optimally.

Suppose a string of random variables $X = (X_1, \ldots, X_N)$ is generated independently from a discrete alphabet $A$ of size $m$. We allow the string length $N$ to be variable: it may be given as a fixed number, or it may be random. In either case, $X$ is a member of the set $\mathcal{X}$ of all finite length strings,
$$\mathcal{X} = \bigcup_{n=0}^{\infty} \mathcal{X}^n = \bigcup_{n=0}^{\infty} \{x^n = (x_1,\ldots,x_n) : x_i \in A,\ i = 1,\ldots,n\}.$$
Our goal is to code/predict the string $X$. Now suppose that, given $N$, each random variable $X_i$ is generated independently according to a probability mass function in a parametric family $\mathcal{P} = \{P_\theta(x) : \theta \in \Theta \subset \mathbb{R}^m\}$ on $A$. Thus
$$P_\theta(X_1,\ldots,X_N \mid N = n) = \prod_{i=1}^{n} P_\theta(X_i) \quad \text{for } n = 1, 2, \ldots$$
We are interested in the class of all distributions with $P_\theta(j) = \theta_j$, parameterized by the simplex $\Theta = \{\theta = (\theta_1,\ldots,\theta_m) : \theta_j \ge 0,\ \sum_{j=1}^m \theta_j = 1\}$. Let $N = (N_1,\ldots,N_m)$ denote the vector of counts for symbols $1,\ldots,m$. The observed sample size $N$ is the sum of the counts, $N = \sum_{j=1}^m N_j$. Both $P_\theta(X)$ and $P_\theta(X \mid N = n)$ have factorizations based on the distribution of the counts:
$$P_\theta(X \mid N = n) = P(X \mid N)\, P_\theta(N \mid N = n), \qquad P_\theta(X) = P(X \mid N)\, P_\theta(N).$$
The first factor in each equation is the uniform distribution on the set of strings with the given counts, which does not depend on $\theta$. The vector of counts $N$ forms a sufficient statistic for $\theta$, so modeling the distribution of the counts is essential for forming codes and predictions.
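As a quick illustration of this factorization (a minimal sketch of our own, not from the paper): given the counts, every arrangement of the string is equally likely, so the uniform factor is one over the multinomial coefficient.

```python
# Sketch of P_theta(X) = P_unif(X | N) * P_theta(N): given the counts N,
# all orderings of the string are equally likely, so
# P_unif(x | N) = prod_j N_j! / n!  (one over the multinomial coefficient).
from math import factorial
from collections import Counter

x = "abacab"                       # hypothetical string over {a, b, c}
counts = Counter(x)                # the sufficient statistic N
n = len(x)
arrangements = factorial(n)
for k in counts.values():
    arrangements //= factorial(k)  # n! / prod_j N_j!
print(dict(counts), 1 / arrangements)
```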
In the particular case of all i.i.d. distributions parameterized by the simplex, the distribution $P_\theta(N \mid N = n)$ is the multinomial$(n, \theta)$ distribution.

In the above there is a need for a distribution of the total count $N$. Of particular interest is the case in which the total count is taken to be Poisson, because then the resulting distribution of the individual counts makes them independent. Poisson sampling is a standard technique to simplify analysis [5][6]. Here we give particular attention to the target family $\mathcal{P} = \{P_\lambda(N) : \lambda_j \ge 0,\ j = 1,\ldots,m\}$, in which $P_\lambda(N)$ is the product of Poisson$(\lambda_j)$ distributions for $N_j$, $j = 1,\ldots,m$. It makes the total count $N$ Poisson$(\lambda_{\mathrm{sum}})$ with $\lambda_{\mathrm{sum}} = \sum_{j=1}^m \lambda_j$, and it yields the multinomial$(n, \theta)$ distribution by conditioning on $N = n$, where $\theta_j = \lambda_j / \lambda_{\mathrm{sum}}$. The induced distribution on $X$ is $P_\lambda(X) = P(X \mid N)\, P_\lambda(N)$.
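The Poissonization identity can be checked numerically; the following sketch (ours, with hypothetical rates) verifies that independent Poisson counts, conditioned on their sum, match the multinomial probabilities.

```python
# Illustrative check (not the paper's code): with independent
# N_j ~ Poisson(lam_j), the conditional law of (N_1,...,N_m) given
# sum_j N_j = n is multinomial(n, theta) with theta_j = lam_j / lam_sum.
from scipy.stats import poisson, multinomial

lam = [2.0, 5.0, 3.0]              # hypothetical rates lambda_j
lam_sum = sum(lam)
theta = [l / lam_sum for l in lam]
counts = [1, 4, 2]                 # a count vector with total n = 7
n = sum(counts)

prod_poisson = 1.0
for l, k in zip(lam, counts):
    prod_poisson *= poisson.pmf(k, l)

conditional = prod_poisson / poisson.pmf(n, lam_sum)
print(conditional, multinomial.pmf(counts, n, theta))  # the two agree
```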

Ideally the true probability distribution $P_\theta(X)$ could be used to code the sequence, if $\theta$ were known. The regret induced by using a coding distribution $Q$ instead of $P_\theta$ is
$$R(Q, P_\theta, X) = \log \frac{1}{Q(X)} - \log \frac{1}{P_\theta(X)},$$
where $\log$ is logarithm base 2 (we do not worry about the integer codelength constraint). We can construct $Q$ by choosing a probability distribution for the counts and then using the uniform distribution, written $P_{\mathrm{unif}}$, for the strings given the counts. That is,
$$Q(X) = P_{\mathrm{unif}}(X \mid N)\, Q(N). \tag{1}$$
Then the regret becomes $R(Q, P_\theta, X) = \log P_\theta(N)/Q(N) = R(Q, P_\theta, N)$.

However, in reality $\theta$ is usually unknown. Given the family $\mathcal{P}$, we could instead use the best candidate with hindsight, $P_{\hat\theta}(X) = \max_{\theta \in \Theta} P_\theta(X)$ (corresponding to $\min_{\theta} \log(1/P_\theta(X))$), and compare it to $Q(X)$. The maximization is equivalent to maximizing the count probability, as the uniform distribution does not depend on $\theta$, i.e.
$$\max_{\theta \in \Theta} P_\theta(X) = P_{\mathrm{unif}}(X \mid N) \max_{\theta \in \Theta} P_\theta(N) = P_{\mathrm{unif}}(X \mid N)\, P_{\hat\theta}(N).$$
Then the problem becomes: given the family $\mathcal{P}$, how do we choose $Q$ to minimize the maximized regret in
$$\min_Q \max_X R(Q, P_{\hat\theta}, X) = \min_Q \max_N \log \frac{P_{\hat\theta}(N)}{Q(N)}.$$
The maximum can be restricted to a set of counts instead of the whole space. A traditional choice is $S_{m,n} = \{(N_1,\ldots,N_m) : \sum_{j=1}^m N_j = n,\ N_j \ge 0,\ j = 1,\ldots,m\}$, associated with a given sample size $n$, in which case the minimax regret is $\min_Q \max_{N \in S_{m,n}} \log P_{\hat\theta}(N)/Q(N)$. As is familiar in universal coding [7][8], the normalized maximum likelihood (NML) distribution $Q_{\mathrm{nml}}(N) = P_{\hat\theta}(N)/C(S_{m,n})$ is the unique pointwise minimax strategy when $C(S_{m,n}) = \sum_{N \in S_{m,n}} P_{\hat\theta}(N)$ is finite, and $\log C(S_{m,n})$ is the minimax value. When $m$ is large, the NML distribution can be unwieldy to compute for compression or prediction. Instead we introduce a slightly suboptimal coding distribution that makes the counts independent, and we show that it is nearly optimal for every $S_{m,n'}$ with $n'$ not too different from a target $n$. Indeed, we advocate that our simple coding distribution is preferable computationally when $m$ is large, even if the sample size $n$ were known in advance.
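For intuition about why the exact NML normalizer is computed only for toy sizes, the following brute-force sketch (ours, not from the paper) sums the maximized multinomial likelihoods over $S_{m,n}$; the enumeration has roughly $(n+1)^m$ terms, which is exactly what becomes unmanageable when $m$ is large.

```python
# Brute-force Shtarkov sum C(S_{m,n}) = sum over S_{m,n} of Phat(N) for the
# multinomial count family, feasible only for tiny m and n.
# (Illustrative sketch, not the authors' code.)
from itertools import product
from math import factorial, log2

def shtarkov_sum(m, n):
    total = 0.0
    for counts in product(range(n + 1), repeat=m):
        if sum(counts) != n:
            continue
        coef = factorial(n)
        for k in counts:
            coef //= factorial(k)      # multinomial coefficient
        phat = float(coef)
        for k in counts:
            phat *= (k / n) ** k       # maximized likelihood (0^0 = 1)
        total += phat
    return total

C = shtarkov_sum(3, 10)
print("minimax regret log2 C(S_{3,10}) =", log2(C))
```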
To produce our desired coding distribution we make use of two basic principles. One is that the multinomial family of distributions on counts matches the conditional distribution of $N_1,\ldots,N_m$ given the sum $N$ when, unconditionally, the counts are independent Poisson. The other is the information theory principle [9][10][11] that the conditional distribution given a sum (or average) of a large number of independent random variables is approximately a product of distributions, each of which is the one closest in relative entropy to the unconditional distribution subject to an expectation constraint. This minimum relative entropy distribution is an exponential tilting of the unconditional distribution. In the Poisson family with distribution $\lambda_j^{N_j} e^{-\lambda_j}/N_j!$, exponential tilting (multiplying by the factor $e^{-a N_j}$) preserves the Poisson family (with the parameter scaled to $\lambda_j e^{-a}$). Those distributions continue to correspond to the multinomial distribution (with parameters $\theta_j = \lambda_j/\lambda_{\mathrm{sum}}$) when conditioning on the sum of counts $N$. The particular choice $a = \ln(\lambda_{\mathrm{sum}}/N)$ provides the product of Poisson distributions closest to the multinomial in regret. Here, for universal coding, we find the tilting of the individual maximized likelihoods that makes their product closest to Shtarkov's NML distribution. This greatly simplifies the task of approximately optimal universal compression and the analysis of its regret. Indeed, applying the maximum likelihood step to a Poisson count $k$ produces a maximized likelihood value of $M(k) = k^k e^{-k}/k!$.

We call this maximized likelihood the Stirling ratio, as it is the quantity that Stirling's approximation shows is near $(2\pi k)^{-1/2}$ for $k$ not too small. We find that this $M(k)$ plays a distinguished role in universal large alphabet compression, even for sequences with small counts $k$. The measure $M$ has a product extension to counts $N_1,\ldots,N_m$: $M(N) = M(N_1)\cdots M(N_m)$. Although $M$ has an infinite sum by itself, it is normalizable when tilted, for every positive $a$. The tilted Stirling ratio distribution is
$$P_a(N_j) = \frac{N_j^{N_j} e^{-N_j}}{N_j!} \cdot \frac{e^{-a N_j}}{C_a}, \tag{2}$$
with normalizer $C_a = \sum_{k=0}^{\infty} k^k e^{-(1+a)k}/k!$. The coding distribution we propose and analyze is simply the product of these tilted one-dimensional maximized Poisson likelihood distributions, for a value of $a$ we specify later:
$$Q_a(N) = P_a(N) = P_a(N_1) \cdots P_a(N_m).$$
If it is known that the total count is $n$, then the regret is a simple function of $n$ and the normalizer $C_a$. The choice of the tilting parameter $a$ given by the moment condition $E_a \sum_{j=1}^m N_j = n$ minimizes the regret over all positive $a$. Moreover, the value of $a$ depends only on the ratio $m/n$ between the size of the alphabet and the total count. Fig. 1 displays $a^*$ as a function of $m/n$, solved numerically. Given an alphabet of $m$ symbols and a string of length $n$ generated from it, one can read the desired $a^*$ off the plot according to the given $m/n$, and then use that $a^*$ to code the data.
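In place of the plot, one can solve the moment condition directly. Here is a minimal numerical sketch (ours, with a hypothetical series truncation $K$) that computes $C_a$ and finds $a^*$ by bisection.

```python
# Choosing the tilting parameter a*: compute C_a = sum_k k^k e^{-(1+a)k}/k!
# by a truncated series, then solve m * E_{P_a}[N_1] = n by bisection.
# Reproduces the a* vs. m/n relationship of Fig. 1 numerically.
import math

def stirling_weights(a, K=2000):
    # w_k = k^k e^{-(1+a)k} / k!, with 0^0 = 1; computed in logs for stability
    ws = [1.0]  # k = 0 term
    for k in range(1, K):
        ws.append(math.exp(k * math.log(k) - (1 + a) * k - math.lgamma(k + 1)))
    return ws

def mean_count(a):
    ws = stirling_weights(a)
    Ca = sum(ws)
    return sum(k * w for k, w in enumerate(ws)) / Ca

def solve_a_star(m, n, lo=1e-8, hi=50.0, iters=100):
    # mean_count is decreasing in a; find a with m * mean_count(a) = n
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if m * mean_count(mid) > n:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(solve_a_star(m=1000, n=500))   # larger m/n gives a larger tilt a*
```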

Fig. 1. Relationship between $a^*$ and $m/n$.

As compared to the i.i.d class, Markov sources are much richer and more realistic. Suppose that, given $N$, each random variable $X_i$ is generated according to a probability mass function depending on its context (the string of symbols preceding it). Following Willems' notation in [12], a tree source can be determined by a context set $S$. Elements of $S$ are strings of symbols from $A$, or concatenations of "others" and prefixes of the contexts; "others" represents the complement of the contexts in $S$ sharing a common prefix. The prefix of a context is the part except the last symbol. For example, the prefix of $s = ab$ is $a$, and for $s = a$ it is the empty string. The collection of distributions is $\mathcal{P}_S = \{P_{\theta_s}(x) : \theta_s \in \Theta_S \subset \mathbb{R}^m,\ s \in S\}$, where $\Theta_S$ is the parameter set defined below. For simplicity, we require the order of the model to be no larger than $T \in \{0, 1, 2, \ldots\}$, so $S \in \mathcal{C}_T$, where $\mathcal{C}_T$ is the class of tree sources with order $T$ or less.

For each context $s \in S$ with a given $S$, let $\theta_{sx}$ denote the probability of symbol $x \in A$ appearing after $s$. Then $\theta_s = (\theta_{s1},\ldots,\theta_{sm})$ lies in the set
$$\Theta_S = \Bigl\{\theta_s = (\theta_{s1},\ldots,\theta_{sm}) : \theta_{sx} \ge 0 \text{ for } x \in A,\ \sum_{x \in A} \theta_{sx} = 1\Bigr\}.$$
Again we can take advantage of factorizations based on the distribution of the counts $N_S = (N_s)_{s \in S}$, where $N_s = (N_{s1},\ldots,N_{sm})$ is the count vector for all symbols given context $s \in S$:
$$P_\theta(X \mid N = n) = P(X \mid N_S)\, P_\theta(N_S \mid N = n), \qquad P_\theta(X) = P(X \mid N_S)\, P_\theta(N_S).$$
Picking the distribution of the total count to be Poisson again leads to the target family $\mathcal{P}_S = \{P_\lambda(N_S) : \lambda_{sj} \ge 0,\ j = 1,\ldots,m,\ s \in S\}$, in which $P_\lambda(N_S)$ is the product of Poisson$(\lambda_{sj})$ distributions for $N_{sj}$, $j = 1,\ldots,m$ and $s \in S$. The induced distribution on $X$ is $P_\lambda(X) = P(X \mid N_S)\, P_\lambda(N_S)$.

Fig. 2. An example context tree.

There are two sources of cost involved in using a tree model. One is the coding cost $C(X)$ for the variables given their contexts. The other is the description cost $D(S)$ for describing the tree. Overall, we want a coding distribution that uses short codelength for sequences generated from an unknown tree source $S \in \mathcal{C}_T$; that is, we want to minimize
$$\max_{S \in \mathcal{C}_T} \max_{X} C(S, X) = \max_{S \in \mathcal{C}_T} \max_{X} \bigl(C(X) + D(S)\bigr)$$
for sequences from $\mathcal{P}_S$. One choice of the count set could be $S_{m,n,S} = \{N_S : \sum_{s \in S} \sum_{j=1}^m N_{sj} = n,\ N_{sj} \ge 0,\ j = 1,\ldots,m,\ s \in S\}$, associated with a given sample size $n$.

We use the same coding distribution as in equation (2) for the count variables conditional on each given context $s$. The coding distribution for the counts given $s$ is simply the product
$$Q_{a_s}(N_s) = P_{a_s}(N_s) = P_{a_s}(N_{s1}) \cdots P_{a_s}(N_{sm}), \tag{3}$$
with a properly chosen $a_s$ for each context $s \in S$. The tilting parameter $a_s$ can be chosen according to the ratio of the alphabet size $m$ and the context count $N_s = \sum_{j=1}^m N_{sj}$, for all $s \in S$ [3]. Using the tilted distribution $P_{a_s}$ as a coding distribution, the regret is simply a sum of the individual regrets.

It is clear that when the context set $S$ is provided, the coding distribution and the regret it induces are easy to obtain. The question is how to construct the context set in the first place. We adopt a method similar to Rissanen's approach in [13]. Unlike the i.i.d model, for which regret alone can be relied on to evaluate performance, for Markov models the different targets associated with different models need to be taken into account. Hence we focus on the total codelength $C(S, X)$ and use it as the standard to evaluate the performance of different models and coding distributions. We use a greedy algorithm to build the context tree, with details discussed in Section III-C.
An illustrative example tree with a simple alphabet $A = \{a, b, c, d\}$ is given in Fig. 2.

II. I.I.D CLASS

Let $S$ be any set of counts. The maximized regret of using $Q$ as a coding strategy, given a class $\mathcal{P}$ of distributions, when the vector of counts is restricted to $S$, is
$$R(Q, \mathcal{P}, S) = \max_{N \in S} \max_{P \in \mathcal{P}} \log \frac{P(N)}{Q(N)}.$$

Theorem 1: Let $P_a$ be the distribution specified in equation (2) (Poisson maximized likelihood, tilted and normalized). The regret of using the product of tilted distributions $Q_a = \prod_{j=1}^m P_a$ for a given vector of counts $N = (N_1,\ldots,N_m)$ is
$$R(Q_a, \mathcal{P}, N) = a N \log e + m \log C_a.$$
Let $S_{m,n}$ be the set of count vectors with total count $n$, as defined before. Then
$$R(Q_a, \mathcal{P}, S_{m,n}) = a n \log e + m \log C_a. \tag{4}$$
Let $a^*$ be the choice of $a$ satisfying the moment condition
$$E_{P_a} \sum_{j=1}^m N_j = m\, E_{P_a} N_1 = n. \tag{5}$$
Then $a^*$ is the minimizer of the regret in expression (4). Write $R_{m,n} = \min_a R(Q_a, \mathcal{P}, S_{m,n})$.

When $m = o(n)$, $R_{m,n}$ is near $\frac{m}{2}\log\frac{ne}{m}$ in the following sense:
$$(1 - d_1)\,\frac{m}{2}\log\frac{ne}{m} \;\le\; R_{m,n} \;\le\; \frac{m}{2}\log\frac{ne}{m}\left(1 + \sqrt{\frac{m}{n}}\right), \tag{6}$$
where $d_1 = O\bigl((m/n)^{1/3}\bigr)$.

When $n = o(m)$, $R_{m,n}$ is near $n\log\frac{m}{ne}$ in the following sense:
$$1 + (1 - d_2)\,\frac{n}{m} \;\le\; \frac{R_{m,n}}{n\log\frac{m}{ne}} \;\le\; 1 + \frac{n}{m} + d_3, \tag{7}$$
where $d_2 = O(n/m)$ and $d_3 = O(1/\sqrt{n})$.

When $n = bm$ for a constant $b$, $R_{m,n} = c\,m$, where the constant $c = a^* b \log e + \log C_{a^*}$ and $a^*$ is such that $E_{P_{a^*}} N_1 = b$.

Proof: The expression for the regret follows from the definition. That $a^*$ is the minimizer can be seen by taking the partial derivative of expression (4) with respect to $a$. Details of the proof can be found in [3].

Remark 1: The regret depends only on the number of parameters $m$, the total count $n$ and the tilting parameter $a$. The optimal tilting parameter is given by the simple moment condition in equation (5).

Remark 2: The regret $R_{m,n}$ is close to the minimax level in all three cases listed in Theorem 1. The main terms in the $m = o(n)$ and $n = o(m)$ cases are the same as the minimax regret given in [14], except that the multiplier of $\log(ne/m)$ here is $m/2$ instead of $(m-1)/2$ for the small-$m$ scenario. For the $n = bm$ case, $R_{m,n}$ is numerically close to the minimax regret in [14].

Corollary 1: Let $\mathcal{P}'$ be a family of multinomial distributions with total count $n$. Then the maximized regret $R(Q_a, \mathcal{P}', S_{m,n})$ has an upper bound within $\frac{1}{2}\log(2\pi n)$ of the upper bound in Theorem 1.

Proof: This can be seen from the following:
$$\log \frac{P_{\hat\theta}(X_1,\ldots,X_N \mid N = n)}{P_{\mathrm{unif}}(X_1,\ldots,X_n \mid N_1,\ldots,N_m)\, Q_a(N_1,\ldots,N_m)} = \log \frac{\prod_{j=1}^m M(N_j)}{P_{\hat\lambda_{\mathrm{sum}}}(N = n)\, Q_a(N_1,\ldots,N_m)} \simeq \frac{1}{2}\log(2\pi n) + \log \frac{\prod_{j=1}^m M(N_j)}{Q_a(N_1,\ldots,N_m)}. \tag{8}$$
Here $\hat\lambda_{\mathrm{sum}} = n$, so the term $\frac{1}{2}\log(2\pi n)$ is Stirling's approximation of $\log\bigl(1/P_{\hat\lambda_{\mathrm{sum}}}(N = n)\bigr)$.

Remark 3: The $\frac{1}{2}\log(2\pi n)$ arises because $Q_a$ here includes a description of the total $N$, while the more restrictive target regards it as given.
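A numerical sanity check of Theorem 1 (our own sketch, with a hypothetical truncation $K$ of the series for $C_a$): solve the moment condition (5) by bisection and compare the resulting regret (4) with the $(m/2)\log(ne/m)$ approximation when $m/n$ is small.

```python
import math

def ca_and_mean(a, K=5000):
    # C_a and E_{P_a}[N_1] for the tilted Stirling ratio distribution (2),
    # with the series truncated at a hypothetical cutoff K
    ws = [1.0] + [math.exp(k * math.log(k) - (1 + a) * k - math.lgamma(k + 1))
                  for k in range(1, K)]
    Ca = sum(ws)
    return Ca, sum(k * w for k, w in enumerate(ws)) / Ca

m, n = 100, 10000
lo, hi = 1e-6, 50.0
for _ in range(80):                 # bisection on the moment condition (5)
    a = 0.5 * (lo + hi)
    if m * ca_and_mean(a)[1] > n:
        lo = a
    else:
        hi = a
Ca = ca_and_mean(a)[0]
regret = a * n * math.log2(math.e) + m * math.log2(Ca)   # equation (4)
print(regret, 0.5 * m * math.log2(n * math.e / m))       # vs (m/2)log(ne/m)
```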
III. TREE SOURCE

A. Coding cost

For each context $s \in S$, we use a product of tilted Stirling ratio distributions with parameter $a_s$, as in equation (3), to code the vector of counts $N_s$. The overall coding distribution is the product of all the $Q_{a_s}(N_s)$, i.e.
$$Q_a^S(N_S) = \prod_{s \in S} Q_{a_s}(N_s).$$
Corollary 2: Using the independent tilted Stirling ratio distribution $Q_a^S$ to code the counts in $S_{m,n,S}$, the regret equals
$$R(\mathcal{P}_S, Q_a^S, S_{m,n,S}) = \sum_{s \in S} \bigl(a_s N_s \log e + m \log C_{a_s}\bigr).$$
This follows directly from the definition.

B. Description cost

To describe a given context set $S$, we use the following rule:
$$D(S) = 1 + N_{\mathrm{branches}}\,(1 + \log |A|),$$
where $N_{\mathrm{branches}}$ is the number of labeled branches in the tree; here "labeled" means carrying a specified symbol of the alphabet. For instance, $N_{\mathrm{branches}} = 5$ in the example tree. The first bit describes whether the model is nondegenerate (i.i.d or Markov). For each branch other than "others", we use $\log |A|$ bits to convey which symbol it is, and then another bit to say whether it is nondegenerate. Our example tree uses $1 + 5(1 + \log 4) = 16$ bits.

C. Using codelength to construct the tree

Here we use the example in Fig. 2 to describe how we construct the tree. Starting from the root, which conditions on nothing (the i.i.d model), we choose the first symbol ($c$) that produces the most savings (if any) in codelength when conditioned on. Next we consider two possible ways to expand the tree: one is another symbol ($a$) that yields the most savings; the other is to further condition on one more symbol ($b$) preceding the first one found ($c$). After calculating and comparing the savings produced by these two candidates, we again pick the one with the greater savings. These steps continue until conditioning provides no more savings, or until the maximum number of symbols to condition on ($T$) is reached; the context tree is then built. At each layer, the label "others" represents all symbols not picked up in that layer; for example, "others" includes $b$ and $d$ in the first layer of the example. Expanding one branch of the tree means conditioning on one more symbol, which leads to a different target model. We calculate the coding cost by adding the minimized codelength for the target model to the minimax regret from Szpankowski's approximation [14], since the minimized regret of the tilted Stirling ratio distribution is very close to the minimax level. The overall codelength is then used for comparison in the tree construction, as sketched below.
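The following is a much simplified sketch of the greedy construction (our own reading of Section III-C, not the authors' implementation). Here `codelength` is a hypothetical callable returning the total cost $C(S, X)$, i.e. the tilted Stirling ratio coding cost of the per-context counts plus the description cost $D(S)$, and `max_order` plays the role of $T$.

```python
def greedy_context_tree(data, alphabet, codelength, max_order=5):
    """Grow a context set greedily by total codelength savings.

    codelength(contexts, data) is assumed to return the coding cost of the
    per-context counts plus the description cost D(S); it is a placeholder,
    not part of this sketch.
    """
    contexts = [""]                          # root context: the i.i.d model
    best = codelength(contexts, data)
    while True:
        # Candidates: extend any current context by one preceding symbol
        candidates = []
        for s in contexts:
            if len(s) < max_order:
                candidates += [c + s for c in alphabet if c + s not in contexts]
        winner = None
        for cand in candidates:              # keep the largest savings
            cost = codelength(contexts + [cand], data)
            if cost < best:
                best, winner = cost, cand
        if winner is None:                   # no expansion saves codelength
            return contexts
        contexts.append(winner)
```

In the paper only two candidates are compared at each step (the best new symbol at the current layer versus conditioning on one more preceding symbol); scanning all one-symbol extensions, as above, keeps the same greedy criterion at somewhat higher computational cost.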

Fig. 3. Part of the context tree for Fortress Besieged.

IV. A REAL EXAMPLE

We apply the proposed method to a contemporary Chinese novel, translated as Fortress Besieged [15]. The book contains 216,601 characters in total and is encoded in GB18030, the largest official Chinese character set, which contains 70,244 characters [16]. We start from the i.i.d model, which uses 1,954,777 bits. The first single character chosen to condition on, which produces 12,631 bits of savings, is the first character of the protagonist's given name in the novel. This is not surprising: seeing the first character strongly indicates that the next symbol is the second character of a two-character name. We restrict the order of the Markov model to be no larger than 5, but in fact no context exceeding two characters is chosen. There are 342 branches in the tree, among which 95 are in the first layer, and 5 of them extend to the second layer. In fact, second-layer branches are picked up only after most first-layer branches have already been chosen. A small part of the tree is displayed in Fig. 3; the dots stand for the rest of the model that cannot be shown, and the blank cell in the middle of the first layer is the space symbol. The total savings amount to 401,922 bits (about 20.56%) compared to the i.i.d model. Note that existing models for tree sources are mostly designed for small alphabet compression, so a direct comparison with them would not be quite fair.

V. CONCLUSION

We consider a compression and prediction problem under both the large alphabet i.i.d model and bounded tree models, and we design a method to construct the context tree. Combining this method with tilted Stirling ratio distributions as coding distributions for symbols with given contexts yields a convenient and efficient way to compress and predict. Implementing the proposed method on a Chinese novel produces considerable savings in codelength compared to the i.i.d model.

ACKNOWLEDGMENT

The authors would like to thank Prof. Jun'ichi Takeuchi, Prof. Wojciech Szpankowski and Prof. Narayana Santhanam. Furthermore, we are grateful for the invitations to present part of this work at the Sixth Workshop on Information Theoretic Methods in Science and Engineering (Tokyo, Aug. 2013) and at the Center for Science of Information at Purdue University (West Lafayette, Oct. 2013). Some ideas of the subsequent work are based on feedback from these presentations.

REFERENCES
[1] Institute of Applied Linguistics, Ministry of Education. (2008, Nov.) 2007 report on language use in China. [Online].
[2] Wikipedia, "Chinese characters." [Online].
[3] X. Yang and A. R. Barron, "Large alphabet compression and predictive distributions through poissonization and tilting," arXiv preprint [stat.ME], 2014.
[4] S. Boucheron, A. Garivier, and E. Gassiat, "Coding on countably infinite alphabets," IEEE Transactions on Information Theory, vol. 55, no. 1, Jan. 2009.
[5] W. Feller, An Introduction to Probability Theory and Its Applications, vol. 1. Wiley, 1950.
[6] J. Acharya, H. Das, A. Jafarpour, A. Orlitsky, and S. Pan, "Tight bounds for universal compression of large alphabets," in Proc. IEEE International Symposium on Information Theory, 2013.
[7] Y. M. Shtarkov, "Universal sequential coding of single messages," Problems of Information Transmission, vol. 23, no. 3, pp. 3-17, 1987.
[8] Q. Xie and A. R. Barron, "Minimax redundancy for the class of memoryless sources," IEEE Transactions on Information Theory, vol. 43, pp. 646-657, May 1997.
[9] I. Csiszar, "I-divergence geometry of probability distributions and minimization problems," The Annals of Probability, vol. 3, no. 1, pp. 146-158, Feb. 1975.
[10] I. Csiszar, "Sanov property, generalized I-projection and a conditional limit theorem," The Annals of Probability, vol. 12, no. 3, pp. 768-793, 1984.
[11] J. M. Van Campenhout and T. M. Cover, "Maximum entropy and conditional probability," IEEE Transactions on Information Theory, vol. 27, no. 4, July 1981.
[12] F. M. J. Willems, Y. M. Shtarkov, and T. J. Tjalkens, "The context-tree weighting method: basic properties," IEEE Transactions on Information Theory, vol. 41, no. 3, pp. 653-664, 1995.
[13] J. Rissanen, "A universal data compression system," IEEE Transactions on Information Theory, vol. 29, no. 5, pp. 656-664, 1983.
[14] W. Szpankowski and M. J. Weinberger, "Minimax redundancy for large alphabets," in Proc. IEEE International Symposium on Information Theory, June 2010.
[15] Wikipedia, "Fortress Besieged." [Online].
[16] Wikipedia, "GB18030 standard." [Online].
