Determining The Degree of Generalization Using An Incremental Learning Algorithm


Pablo Zegers
Facultad de Ingeniería, Universidad de los Andes
San Carlos de Apoquindo 2200, Las Condes, Santiago, Chile

Malur K. Sundareshan
Electrical and Computer Engineering Department, The University of Arizona
1230 East Speedway, Tucson, AZ, USA

Abstract. Any Learning Machine (LM) trained with examples poses the same problem: how to determine whether the LM has achieved an acceptable level of generalization or not. This work presents a training method that uses the data set in an incremental manner, such that it is possible to determine when the behavior displayed by the LM during the learning stage truthfully represents its future behavior when confronted with unseen data samples. The method uses the set of samples in an efficient way, which allows discarding all those samples not really needed for the training process. The new training procedure, which will be called the Incremental Training Algorithm, is based on a theoretical result that is proven using recent developments in statistical learning theory. A key aspect of this analysis involves the identification of three distinct stages through which the learning process normally proceeds, which in turn can be translated into a systematic procedure for determining the generalization level achieved during training. It must be emphasized that the presented algorithm is general and independent of both the architecture of the LM and the specific training algorithm used. Hence it is applicable to a broad class of supervised learning problems and not restricted to the example presented in this work.

1 Introduction

This paper focuses on an incremental learning algorithm devised to determine whether a trained Learning Machine (LM) has reached an acceptable level of generalization or not, using the set of samples in an efficient manner, just from an examination of the learning behavior of the LM. The issue of generalization has been a main concern since learning problems started to be studied. Thus, it is not a surprise that the problem was addressed when the learning automaton concept [12, 17, 6, 11] was born in the early sixties. Because the question of generalization bears direct impact on the adaptation process, it constitutes a core question of the learning automata problem. Nevertheless, due to the stochastic nature of these machines it has proven difficult to understand the generalization process in the context of the learning automaton, and, therefore, difficult to optimally tailor the training process of these machines in order to improve their generalization capability. Thus, in terms of the analysis of its generalization capacity, the learning automaton remains largely a heuristic method. During the last

20 years, the research community has studied different aspects of the generalization problem in a broader class of LMs [1, 18, 19, 2, 4, 3]. While all these works have attempted to tackle different aspects of the problem, the development of systematic procedures for determining when a generic LM has attained an acceptable level of generalization has proven very difficult. It is important to keep in mind that the goal in any training procedure is to obtain a reasonable level of generalization using a minimum of samples and computational resources. An important exception to this is the recent understanding gained on the support vector machine [14, 15, 16, 8]. The usefulness of the support vector machine method has been proven by a cadre of practical algorithms and theoretical results that allow one to ensure when a desired level of generalization has been reached and to determine which samples are essential for the training process, i.e., to find the support vectors. Even though this architecture works as a universal function approximator, and therefore is the basis for a very important and useful class of LMs, these results only apply to this specific architecture and have not been generalized to arbitrary architectures. In summary, there is still no theory or organized method that provides practical answers to how to determine the degree of generalization, with an efficient usage of the set of samples, in the case of a generic LM. This work presents an incremental learning method that, under certain sufficient conditions, permits answering the question of when a good degree of generalization has been attained while keeping the usage of samples to a minimum. The main advantage of this method is that it only requires studying some key aspects underlying the learning behavior of the LM. The paper starts with an introduction to statistical learning theory [14, 15, 16], continues by presenting the new learning scheme, analytically proves why the presented algorithm should work, tests everything with an experiment, and ends with a discussion of the results.

2 Statistical Learning Theory

The LM problem, as defined in this work, consists in finding θ* = arg min_θ E_I(θ), where

    E_I(θ) = ∫ dx f_X(x) (y − ŷ)²    (1)

is called the intrinsic error, f_X(x) is a density function, y = g(x) is a reference function, and ŷ = ĝ(x; θ) is an estimation of the reference function defined by the parameter vector θ. However, what makes the problem complicated is that f_X(x) is unknown and, therefore, it is not possible to find θ*. Instead, a set of ℓ independent and identically distributed data samples y_i = g(x_i) is available, such that it is only possible to find θ_ℓ = arg min_θ E_E(θ; ℓ), where

    E_E(θ; ℓ) = (1/ℓ) ∑_{i=1}^{ℓ} (y_i − ŷ_i)²    (2)

is the so-called empirical error. The reader is referred to the recent texts [14, 15, 16], which provide an excellent treatise on the basic concepts of statistical learning theory. Given the previous formalism, it is clear that in order to measure the degree of generalization, i.e. measure the intrinsic error, it is necessary to ensure that the learning process produces a sequence of θ_ℓ, as ℓ is increased, that converges to θ*, i.e. it is necessary to ensure the consistency of the procedure. If this is not achieved, there is no guarantee that the learning procedure will eventually produce θ*. Given that the consistency stage is reached, the learning procedure associated with the LM will necessarily produce a corresponding
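To make the two error measures concrete, the following minimal sketch (illustrative, not from the paper: the polynomial model class, the reference function g, and all sizes are assumptions) fits a model by minimizing the empirical error of Eq. 2 and contrasts the result with a Monte Carlo estimate of the intrinsic error of Eq. 1.

```python
# Empirical vs. intrinsic error: a minimal sketch with assumed model and target.
import numpy as np

rng = np.random.default_rng(0)

def g(x):                          # reference function y = g(x)
    return np.sin(3 * x)

def g_hat(x, theta):               # estimator: a polynomial parameterized by theta
    return np.polyval(theta, x)

l = 20                             # number of i.i.d. training samples
x_train = rng.uniform(-1, 1, l)    # samples drawn from f_X (here uniform)
y_train = g(x_train)

# theta_l = arg min E_E(theta; l), via least squares on the sample set
theta_l = np.polyfit(x_train, y_train, deg=5)
E_E = np.mean((y_train - g_hat(x_train, theta_l)) ** 2)

# E_I(theta_l) approximated with a large independent sample
x_test = rng.uniform(-1, 1, 100_000)
E_I = np.mean((g(x_test) - g_hat(x_test, theta_l)) ** 2)

print(f"empirical error E_E = {E_E:.3e}")
print(f"intrinsic error E_I = {E_I:.3e}  (E_E < E_I; the gap shrinks as l grows)")
```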

empirical error that tends to the intrinsic error, and thus it becomes possible to measure the intrinsic error.

[Figure 1: Intrinsic and empirical error vs. number of samples ℓ. The upper curve represents E_I(θ_ℓ) and the lower curve E_E(θ_ℓ; ℓ), both with θ_ℓ = arg min_θ E_E(θ; ℓ).]

Given a LM, Fig. 1 depicts a typical evolution of these two error measures, i.e. the intrinsic and empirical errors, evaluated at θ = θ_ℓ = arg min_θ E_E(θ; ℓ), as the set of samples grows [14, 15, 16]. It can be seen that E_E(θ_ℓ; ℓ) is always less than E_I(θ_ℓ). The reason is that while θ_ℓ minimizes E_E(θ; ℓ), which takes into account a set of ℓ data samples (thus providing ℓ constraints for establishing a match), it does not minimize E_I(θ), which considers the same ℓ data samples as well as all other possible ones. Only as ℓ → ∞ do both error measures converge to the same value when evaluated at θ = θ_ℓ. Fig. 1 shows that the behavior of E_E(θ_ℓ; ℓ) can be divided into three different stages: an initial one that corresponds to a low number of data points in the set of samples, during which the LM memorizes each of the events such that E_E(θ_ℓ; ℓ) can be made arbitrarily close to zero (the LM shatters the problem space); a second stage, associated with an increased number of samples, where the LM can no longer memorize the samples and E_E(θ_ℓ; ℓ) starts to grow and move towards E_I(θ_ℓ) (the data samples start to drive the LM towards the desired solution); and finally a third stage, where E_E(θ_ℓ; ℓ) is close to E_I(θ_ℓ) and the learning process is finally consistent (minimization of the intrinsic error is now possible).

One of the main results of statistical learning theory [13, 14, 15, 16] states that when A ≤ E_I(θ), E_E(θ; ℓ) ≤ B, then for any positive ε

    P{ sup_θ |E_I(θ) − E_E(θ; ℓ)| > ε } ≤ min{ 1, 4 exp( h(ln(2ℓ/h) + 1) − (ε − 1/ℓ)² ℓ / (B − A)² ) }    (3)

which bounds the probability of the worst-case (i.e., supremum over θ) divergence between E_I(θ) and E_E(θ; ℓ) [14, 15, 16]. The constant h is called the VC dimension of the LM. The exponent of the exponential function on the right side of Eq. 3 has two terms: the first one grows logarithmically in ℓ, and the second one decreases linearly in ℓ. What is important to notice is that if h < ∞, the second term takes over as ℓ grows and drives the exponential to zero. The condition expressed in Eq. 3 confirms that the convergence process is divided into the three different stages mentioned before: a first one where the probability of divergence stays close to one (memorization stage), a second one that signals the onset of the exponential effect (transition stage), and a final stage where the probability of divergence between the two errors is close to 0 and the LM is in a position where it can generalize (consistency stage). It is important to stress that this result states that a LM with a finite VC dimension will achieve its optimal configuration, in the sense that it will converge towards producing the minimum intrinsic error E_I(θ*) as the number of samples grows, but it does not guarantee that the intrinsic error will be zero. In other words, this result establishes
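The three stages can be seen directly by evaluating the right-hand side of Eq. 3 numerically. The sketch below uses assumed, illustrative values for h, ε, A, and B: the bound stays pinned at 1 while the logarithmic term dominates, then collapses exponentially once the term linear in ℓ takes over.

```python
# Evaluating the right-hand side of Eq. 3 for growing sample counts l.
import numpy as np

h, eps, A, B = 50, 0.2, 0.0, 1.0   # VC dimension, tolerance, loss bounds (assumed)

def divergence_bound(l):
    exponent = h * (np.log(2 * l / h) + 1) - ((eps - 1 / l) ** 2) * l / (B - A) ** 2
    return min(1.0, 4 * np.exp(exponent))

for l in [100, 1_000, 10_000, 100_000]:
    print(f"l = {l:>7d}   P{{sup divergence > eps}} <= {divergence_bound(l):.3e}")
```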

the conditions under which a LM will do its best, but it does not ensure that the LM will do the best, i.e., achieve an intrinsic error equal to zero. In general terms, finiteness of the VC dimension provides the necessary and sufficient conditions for distribution-independent consistency, i.e. convergence of the empirical error to the intrinsic error, which is a necessary step in order to allow the LM to find a combination of parameters that minimizes the intrinsic error. This result is the cornerstone of statistical learning theory, and its strength rests on being a distribution-independent result that guarantees exponential convergence even in a worst-case scenario.

3 The Incremental Learning Algorithm

Consider a student who has to learn a certain topic. If this is an efficient student, he certainly does not go to the library, check out all the books that contain material related to the topic, and study all of them. Instead, the student first gets a primer on the subject, reads it, and solves the problems contained in it. Then, he checks out another book, ignores all the material he has already mastered, and focuses on learning whatever new material there is in the new book. He continues doing so until he cannot find new information or problems he does not know how to handle. It is then, and only then, that the student can be sure that he has mastered the topic. Two things can be observed about the learning process used by the student:

1. The student focuses on learning only the information that brings new aspects of the problem to his knowledge, and ignores the rest.

2. A telltale sign that the student is getting closer to mastering the topic is the increasing difficulty in finding new material.

A learning method inspired by the example cited above is described in the flow diagram shown in Fig. 2; this algorithm will be termed the incremental learning algorithm in future discussion. In the incremental learning algorithm, the term training event refers to the execution of all the steps in the algorithm from the moment the empirical error E_E(θ; ℓ) > δ and the algorithm branches towards the "Train LM" step, until the empirical error complies with E_E(θ; ℓ) < δ and the algorithm branches towards the "Set θ_ℓ = θ" step. In this incremental learning algorithm, the threshold δ represents the bound on the intrinsic error that is deemed acceptable to the user. One may note some similarities between the incremental learning algorithm and the classic perceptron learning rule [5]. However, there is an important difference between them: in the perceptron learning rule, each training event only uses the sample that made the system fail. In the perceptron learning rule there is no notion of a training set at all, and training is done only with the last sample processed by the system. On the other hand, the incremental learning rule presented above uses a training set composed of all the samples previously processed by the system. The analysis that follows proves that this seemingly minor difference is very important.
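A runnable sketch of the procedure of Fig. 2 (shown below) may help fix ideas. The learner here, a one-dimensional nearest-neighbour regressor that "trains" by memorizing its training set, and the target function are placeholder assumptions; the algorithm itself is architecture-independent, so any trainable model could be substituted.

```python
# A minimal sketch of the incremental learning algorithm with an assumed learner.
import numpy as np

rng = np.random.default_rng(1)

def target(x):                            # placeholder reference function
    return np.sinc(3 * x)

class NNRegressor:
    """1-nearest-neighbour LM: training simply memorizes the sample set."""
    def fit(self, X, y):
        self.X, self.y = X.copy(), y.copy()
        return self
    def predict(self, X):
        idx = np.abs(X[:, None] - self.X[None, :]).argmin(axis=1)
        return self.y[idx]

def incremental_training(X, y, delta):
    lm = NNRegressor().fit(X[:1], y[:1])  # create an initial set with one sample
    events = 0
    for l in range(1, len(X)):            # add one new sample at a time
        E_E = np.mean((y[:l + 1] - lm.predict(X[:l + 1])) ** 2)
        if E_E > delta:                   # the new sample triggers a training event
            events += 1
            lm.fit(X[:l + 1], y[:l + 1])  # retrain on all samples seen so far
    return lm, events

m = 500
X = rng.uniform(-1, 1, m)
lm, events = incremental_training(X, target(X), delta=1e-3)
print(f"training events triggered: {events} out of {m} samples examined")
```

Run repeatedly with growing m, the event count saturates: once the learner generalizes, new samples stop triggering training events, which is exactly the stopping signal the algorithm exploits.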

[Figure 2: Flow diagram of the incremental learning algorithm. Start → read training data with m samples and threshold δ → create an initial set of ℓ < m samples → train the LM until E_E(θ; ℓ) < δ → set θ_ℓ = θ → increment the training set by adding a new sample (ℓ = ℓ + 1) → if E_E(θ; ℓ) > δ, train again; otherwise, if ℓ < m, keep adding samples, else return the LM.]

The following theorem proves that under certain conditions the incremental learning algorithm behaves like the efficient student in the motivating example discussed above.

Theorem 1. In every LM trained with the incremental learning algorithm, right after the training event that started when the ℓ-th sample was examined ends,

    P{ E_I(θ_ℓ) > δ } ≤ min{ 1, 4 exp( h(ln(2ℓ/h) + 1) − (δ − E_E(θ_ℓ; ℓ) − 1/ℓ)² ℓ / (B − A)² ) }    (4)

where θ_ℓ is the vector of parameters obtained at the end of the training event.

Proof: As explained before (see Eq. 3), when A ≤ E_I(θ), E_E(θ; ℓ) ≤ B, then for any positive ε

    P{ sup_θ |E_I(θ) − E_E(θ; ℓ)| > ε } ≤ min{ 1, 4 exp( h(ln(2ℓ/h) + 1) − (ε − 1/ℓ)² ℓ / (B − A)² ) }    (5)

If the previous expression is evaluated for an arbitrary θ, then

    P{ |E_I(θ) − E_E(θ; ℓ)| > ε } ≤ min{ 1, 4 exp( h(ln(2ℓ/h) + 1) − (ε − 1/ℓ)² ℓ / (B − A)² ) }    (6)

Then, it is also true that

    P{ E_I(θ) − E_E(θ; ℓ) > ε } ≤ min{ 1, 4 exp( h(ln(2ℓ/h) + 1) − (ε − 1/ℓ)² ℓ / (B − A)² ) }    (7)

Rearranging the terms on the left side,

    P{ E_I(θ) > ε + E_E(θ; ℓ) } ≤ min{ 1, 4 exp( h(ln(2ℓ/h) + 1) − (ε − 1/ℓ)² ℓ / (B − A)² ) }    (8)

In the incremental learning algorithm, right after a training event finishes, δ > E_E(θ_ℓ; ℓ). Therefore, it is possible to define ε = δ − E_E(θ_ℓ; ℓ). Replacing into the previous expression,

    P{ E_I(θ_ℓ) > δ } ≤ min{ 1, 4 exp( h(ln(2ℓ/h) + 1) − (δ − E_E(θ_ℓ; ℓ) − 1/ℓ)² ℓ / (B − A)² ) }    (9)

∎

This theorem proves that if a LM with h < ∞ is trained with the incremental learning algorithm, then right after a training event finishes, the probability that the intrinsic error is above the threshold δ converges to zero exponentially once a certain number of samples has been processed. Because the empirical error is bounded by a shrinking confidence interval centered on the intrinsic error (see Eq. 3), the exponential behavior ends up influencing the empirical error too. Therefore, the probability of a training event getting triggered will exhibit the three different stages mentioned before:

1. A first one that corresponds to a low number of samples, where the probability of divergence between the empirical error E_E(θ; ℓ) and the intrinsic error E_I(θ) is one, and the probability that E_E(θ; ℓ) > δ is also close to 1. Therefore, the probability that a training event gets triggered is close to 1.

2. A second one, associated with an increased number of samples, where the probability that E_E(θ; ℓ) > δ starts to decrease, lowering the probability that a training event gets triggered.

3. A last stage, where the learning process has become consistent and the empirical error behaves like the intrinsic error. It is during this stage that the probability that E_E(θ; ℓ) < δ converges to 1 and no more training events are triggered.
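The bound of Eq. 9 can also be evaluated numerically once a training event ends. In the sketch below, h, δ, the final empirical error, and the loss bounds A and B are all assumed, illustrative constants:

```python
# Evaluating the generalization certificate of Eq. 9 after a training event.
import numpy as np

def generalization_bound(l, h, delta, E_E, A=0.0, B=1.0):
    eps = delta - E_E                     # valid because E_E < delta after the event
    exponent = h * (np.log(2 * l / h) + 1) - ((eps - 1 / l) ** 2) * l / (B - A) ** 2
    return min(1.0, 4 * np.exp(exponent))

for l in [1_000, 10_000, 100_000, 1_000_000]:
    p = generalization_bound(l, h=50, delta=0.2, E_E=0.05)
    print(f"l = {l:>9d}   P{{E_I > delta}} <= {p:.3e}")
```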

The importance of the previous result rests on the fact that if no more training events are triggered, i.e. the empirical error stays below δ, it is possible to say that the learning process has become consistent and that it has effectively found a parameter combination that produces an intrinsic error less than δ. Once this can be assessed, it is possible to stop the training process while having a high certainty that the intrinsic error of the LM will stay below δ for the future samples that the LM may encounter within the data set, i.e. the LM has achieved a desired level of generalization. Once this point is reached, because the LM has found the parameters that produce an intrinsic error below δ, there is no need to continue testing and looking for examples that make the system fail, and the rest of the set of samples can be discarded.

4 Experiments

In this section the results of an experiment that employed a neural network as the architecture for the LM are described. It must be emphasized that the architecture selection was only for illustrative purposes; the present algorithm is applicable to any other architecture that may be selected for configuring the LM. A multilayer perceptron with 2 input neurons, 1 hidden layer with 50 neurons, another hidden layer with 25 neurons, and 1 output neuron was selected as the LM, and used to learn the 2D sinc z = sin(6π√(x² + y²)) / (12π√(x² + y²)). Initialization of this neural network was performed using a scheme recommended in [7], and the network was trained with the RPROP algorithm [9, 10], which is one of the algorithms currently implemented in the Matlab Neural Networks Package. Since our interest in this work is to demonstrate the ability of the present incremental learning algorithm in testing the generalization level, the specific training procedure selected for the neural network architecture is only illustrative. The set of samples used for training was obtained by randomly sampling within the interval [−0.5, 0.5] according to a uniform density function in order to obtain x and y, and generating the corresponding outputs z using the 2D sinc function. In order to obtain an estimate of the average behavior of the LM, 20 different sets of initial conditions and 20 different sets of samples were employed. The LM was trained using 20 different combinations of sets of initial conditions and sets of samples, making sure that no set of initial conditions and no set of samples was used more than once. Training was done using the incremental learning algorithm presented above, setting an intrinsic error threshold δ = 10⁻⁴ (set arbitrarily) and an initial number of samples determined experimentally. The reference 2D sinc used in the training process is shown in Fig. 3. A 2D sinc generated after one of the training runs is shown in Fig. 4. The number of samples examined by the incremental learning algorithm between individual training events is shown in Fig. 5. The estimated probability of a training event getting triggered is shown in Fig. 6. This probability is calculated by dividing the number of times the k-th training event happened by the number of different sets of initial conditions (which in this experiment is 20). Both plots clearly show that fewer than 30 training events sufficed to make all 20 training runs reach the desired consistency stage.
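For reference, the sampling procedure above can be sketched as follows (the seed and pool size are arbitrary; the constants follow the 2D sinc expression reconstructed above):

```python
# Drawing (x, y) uniformly on [-0.5, 0.5]^2 and evaluating the 2D sinc target.
import numpy as np

rng = np.random.default_rng(2)

def sinc2d(x, y):
    r = np.sqrt(x ** 2 + y ** 2)
    out = np.full_like(r, 0.5)            # limit of sin(6*pi*r)/(12*pi*r) at r = 0
    np.divide(np.sin(6 * np.pi * r), 12 * np.pi * r, out=out, where=(r > 0))
    return out

m = 2_000                                 # pool of candidate training samples
x, y = rng.uniform(-0.5, 0.5, size=(2, m))
z = sinc2d(x, y)
print(x[:3], y[:3], z[:3])
```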
5 Discussion

The experiment of learning the 2D sinc function outlined above demonstrated that the probability of a training event getting triggered exhibits the three stages, as expected: a first one where training events follow continuously, one after the other; a transition zone where the probability of triggering a training event starts to decrease; and a last one, where the probability that the empirical error exceeds the defined threshold converges to zero.

[Figure 3: Reference 2D sinc used in the training process. Figure 4: Output generated by the LM after training (2D sinc). Figure 5: Number of samples skipped as a function of the number of training events (2D sinc). Figure 6: Estimated probability of triggering a training event as a function of the training event number (2D sinc).]

While the above experiment illustrates a successful application of the present algorithm, several other experiments that we have conducted indicate that under some conditions the incremental learning method previously described may not show signs of reaching the consistency stage, i.e. the third stage in the evolution of the learning process. There are two reasons that explain this behavior:

1. The LM keeps memorizing the data, the number of elements in the set of samples increases constantly, and the probability that a training event happens stays high. This happens when there are not enough data samples or the VC dimension of the LM is not finite.

2. The minimum value of the intrinsic error cannot comply with E_I(θ*) < δ. In this case the LM starts a training event but never finishes it. This can happen if δ is too low, or if the initial conditions, learning procedure, or stopping conditions impede the LM from reaching empirical error values below those it should reach. This happens when the LM falls into a local minimum and is not able to escape from it.

While the first cause is relatively simple to remedy (it suffices to use more samples or a different architecture), the second is more complex because it is associated with problems not well resolved for arbitrary LMs. Even though higher values of δ could be used, there is no sure way of avoiding the effects of the starting conditions, the learning procedure, or the stopping rule when it comes to getting stuck at a local minimum.

6 Conclusions

The method presented in this paper provides a practical way of determining whether a training process has attained the consistency stage or not. As a consequence, it is possible to determine whether the training process, which is focused on minimizing the empirical error, is really minimizing the intrinsic error or not, and therefore to establish whether an LM has attained an acceptable degree of generalization. Once this is determined to be true, it is possible to stop the training process and safely ignore all the other data points in the set of samples. Also, thanks to the theorem proven in this work, it is possible to say that the behavior seen in the experiments should also be observed in any LM. Therefore, the present results hold for any architecture and make the incremental learning algorithm a valid approach for determining the degree of generalization in a general setting, using the set of samples in an efficient manner. A particularly desirable aspect of this method is that whether the LM has reached a good level of generalization or not can be deduced from its learning behavior, and there is no need for analytical methods or probe sets to test this. There is no need to find the VC dimension of the LM in order to check whether it has generalized, or to determine the number of samples needed to do so: if the probability that a training event gets triggered goes to zero, then the system has already reached the consistency stage and the empirical error is a good measure of its level of generalization.

7 Acknowledgements

Special thanks to Dr. Mark Neifeld for his comments on how to improve this work.

References

[1] Baum, E.B., Neural Net Algorithms That Learn in Polynomial Time from Examples and Queries, IEEE Transactions on Neural Networks 2(1) (1991).

[2] Cachin, C., Pedagogical Pattern Selection Strategies, Neural Networks 7(1) (1994).

[3] Cataltepe, Z., Abu-Mostafa, Y.S., and Magdon-Ismail, M., No Free Lunch for Early Stopping, Neural Computation 11 (1999).

[4] Franco, L. and Cannas, S.A., Generalization and Selection of Samples in Feedforward Neural Networks, Neural Computation 12 (2000).

[5] Haykin, S.S., Neural Networks: A Comprehensive Foundation, Prentice Hall (1998).

[6] Narendra, K. and Thathachar, M.A.L., Learning Automata: An Introduction, Prentice Hall (1989).

[7] Nguyen, D. and Widrow, B., Improving the Learning Speed of 2-Layer Neural Networks by Choosing Initial Values of the Adaptive Weights, Proceedings of the IJCNN (1990).

[8] Ralaivola, L. and d'Alché-Buc, F., Incremental Support Vector Machine Learning: A Local Approach, Proceedings of the ICANN, Vienna, Austria (2001).

[9] Riedmiller, M. and Braun, H., A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm, Proceedings of the IEEE International Conference on Neural Networks, San Francisco, USA (1993).

[10] Riedmiller, M., RPROP: Description and Implementation Details, University of Karlsruhe (1994).

[11] Sundareshan, M.K. and Condarcure, T.A., Recurrent Neural-Network Training by a Learning Automaton Approach for Trajectory Learning and Control System Design, IEEE Transactions on Neural Networks 9(3) (1998).

[12] Tsetlin, M., On the Behavior of Finite Automata in Random Media, Automation and Remote Control 22 (1961).

[13] Vapnik, V., Levin, E., and Le Cun, Y., Measuring the VC-Dimension of a Learning Machine, Neural Computation 6 (1994).

[14] Vapnik, V.N., The Nature of Statistical Learning Theory, Springer (1995).

[15] Vapnik, V.N., Statistical Learning Theory, John Wiley and Sons (1998).

[16] Vapnik, V.N., An Overview of Statistical Learning Theory, IEEE Transactions on Neural Networks 10(5) (1999).

[17] Varshavskii, V.I. and Vorontsova, I.P., On the Behavior of Stochastic Automata with Variable Structure, Automation and Remote Control 24 (1963).

[18] Zegers, P., Reconocimiento de Voz Utilizando Redes Neuronales (Speech Recognition Using Neural Networks), Engineer Thesis, Pontificia Universidad Católica de Chile, Chile (1992).

[19] Zegers, P. and Sundareshan, M.K., Optimal Tailoring of Trajectories, Growing Training Sets, and Recurrent Networks for Spoken Word Recognition, Proceedings of the ICNN, Anchorage, USA (1998).
