Robust Forward Algorithms via PAC-Bayes and Laplace Distributions

A Proof of Lemma 2

Proof: Since the support of Laplace distributions is $\mathbb{R}$, two such distributions are equivalent (absolutely continuous with respect to each other) and the divergence is well-defined. We start by calculating the following integral, assuming $\mu_1 \ge \mu_2$:

$$ I = \int_{\mathbb{R}} |\omega - \mu_1| \exp\Big\{-\frac{|\omega - \mu_2|}{\sigma}\Big\}\, d\omega . $$

Changing variables $y = (\omega - \mu_2)/\sigma$ and splitting the integral according to the sign of the argument of each absolute value yields, for the general case,

$$ I = 2\sigma\,|\mu_1 - \mu_2| + 2\sigma^2 \exp\Big\{-\frac{|\mu_1 - \mu_2|}{\sigma}\Big\}. \qquad (5) $$

As for the Kullback-Leibler divergence, we use the chain formula for independent random variables,

$$ D_{KL}\big(P_{\mu,\sigma}\,\|\,P_{\mu',\sigma'}\big) = \sum_{k=1}^{d} D_{KL}\big(P_{\mu_k,\sigma}\,\|\,P_{\mu'_k,\sigma'}\big) = \sum_{k=1}^{d} \int_{\mathbb{R}} \frac{1}{2\sigma}\, e^{-|\omega-\mu_k|/\sigma}\, \log\frac{P_{\mu_k,\sigma}(\omega)}{P_{\mu'_k,\sigma'}(\omega)}\, d\omega . $$

The first term of the resulting integral is given in (5), and the second term is exactly the $d$-dimensional $\sigma$-weighted $\ell_1$-norm, which completes the proof.

B Proof of Lemma 3

Proof: We prove that

$$ \Pr_{\omega \sim Q}\big(y\,\omega\cdot x < 0\big) = \Pr\big(y(\omega - \mu)\cdot x < -y\,\mu\cdot x\big). $$

The random variable${}^{4}$ appearing in $E(x, y, \mu, \sigma)$,

$$ Z \triangleq y(\omega - \mu)\cdot x = \sum_{k=1}^{d} y(\omega_k - \mu_k)\,x_k , $$

is a sum of independent zero-mean Laplace distributed random variables, $Z_k \sim \mathrm{Laplace}(0, \sigma|x_k|)$, each equal in distribution to a difference between two i.i.d. exponential random variables. Therefore,

$$ \Pr_{\omega}\big(y\,\omega\cdot x < 0\big) = \Pr\Big(\sum_{k} A_k - \sum_{k} B_k < -y\,\mu\cdot x\Big), \qquad (6) $$

where $A_k, B_k \sim \mathrm{Exp}(\lambda_k)$ and $\lambda_k = \lambda_k(x) = \frac{1}{\sigma |x_k|}$, $k = 1,\dots,d$.

Without loss of generality we assume that the coordinates of $x$ are sorted, i.e. $\lambda_1 < \lambda_2 < \dots < \lambda_d$. Calculating the convolution for $|x_1| \neq |x_2|$ and $z \ge 0$,

$$ f_{A_1 + A_2}(z) = \int_{0}^{z} \lambda_1 e^{-\lambda_1 (z-t)}\, \lambda_2 e^{-\lambda_2 t}\, dt = \frac{\lambda_1 \lambda_2}{\lambda_1 - \lambda_2}\Big(e^{-\lambda_2 z} - e^{-\lambda_1 z}\Big). $$

Exploiting the structure of the resulting convolution, we convolve it with the $l$-th density and get

$$ f_{A_1 + A_2 + A_l}(z) = \lambda_1\lambda_2\lambda_l\left[\frac{e^{-\lambda_1 z}}{(\lambda_2-\lambda_1)(\lambda_l-\lambda_1)} + \frac{e^{-\lambda_2 z}}{(\lambda_1-\lambda_2)(\lambda_l-\lambda_2)} + \frac{e^{-\lambda_l z}}{(\lambda_1-\lambda_l)(\lambda_2-\lambda_l)}\right]. $$

Performing the convolution for all densities yields

$$ f_{\sum_k A_k}(z) = \sum_{k=1}^{d} \xi_k\,\lambda_k\, e^{-\lambda_k z} \quad \text{for } z \ge 0, \qquad \text{where we define } \xi_k \triangleq \xi_k(x) = \prod_{n=1,\, n\neq k}^{d} \frac{\lambda_n}{\lambda_n - \lambda_k}. $$

Similarly, we get the same result for $f_{\sum_k B_k}(z)$, yet it is defined for $z \le 0$. From (6) we convolve the difference of the two sums and get, for $z \ge 0$,

$$ f_{\sum_k A_k - \sum_k B_k}(z) = \sum_{m,k} \xi_m\,\xi_k\,\frac{\lambda_m \lambda_k}{\lambda_m + \lambda_k}\, e^{-\lambda_k z} = \sum_{k} \psi_k\,\xi_k\,\lambda_k\, e^{-\lambda_k z}, \qquad \psi_k \triangleq \psi_k(x) = \sum_{m=1}^{d} \xi_m\,\frac{\lambda_m}{\lambda_m + \lambda_k}. $$

We integrate to get the CDF, and obtain

$$ \ell_{cdf}\big(y\,\mu\cdot x\big) = \Pr_{\omega}\big(y\,\omega\cdot x < 0\big) = \begin{cases} \sum_{k} \psi_k\,\xi_k\, e^{-\lambda_k\, y\,\mu\cdot x}, & y\,\mu\cdot x \ge 0, \\[2pt] 1 - \sum_{k} \psi_k\,\xi_k\, e^{\lambda_k\, y\,\mu\cdot x}, & y\,\mu\cdot x < 0. \end{cases} $$

Finally, we define $\alpha_k(x) \triangleq \psi_k(x)\,\xi_k(x)$, computed on the sorted coordinates of $x$, and obtain the expression stated in the lemma. In particular, from the symmetry of $f_{\sum_k A_k - \sum_k B_k}(z)$, we have for $\mu = 0$ that $\tfrac{1}{2} = \Pr_{\omega}(y\,\omega\cdot x < 0) = \sum_k \alpha_k(x)$, which concludes the proof.

${}^{4}$ Notice that if $x = 0$ the random variable $\omega\cdot x$ equals zero too, therefore we assume without loss of generality that $x \neq 0$.

[Figure 6: Illustration of the cumulative sums $\sum_{i \le k} \alpha_i(x)$ for five $d$-dimensional vectors; cumulative $\alpha$ value plotted against the feature index.]
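As a numerical sanity check of the two distributional facts used above (not part of the paper; the scale sigma and the feature vector x below are arbitrary choices), the following Python snippet verifies that a zero-mean Laplace(0, b) variable matches the difference of two i.i.d. Exp(1/b) variables, and that the density of a sum of independent exponentials with distinct rates agrees with the hypoexponential form $\sum_k \xi_k \lambda_k e^{-\lambda_k z}$.

import numpy as np

rng = np.random.default_rng(0)
sigma, x = 0.7, np.array([0.5, 1.3, 2.1])    # hypothetical scale and feature vector
lam = 1.0 / (sigma * np.abs(x))              # rates lambda_k = 1 / (sigma |x_k|)

# (i) difference of two i.i.d. exponentials vs. Laplace samples with the same scale b
b = sigma * np.abs(x[0])
diff = rng.exponential(b, 200_000) - rng.exponential(b, 200_000)
lap = rng.laplace(0.0, b, 200_000)
print("std of Exp - Exp:", diff.std(), " std of Laplace:", lap.std())   # both ~ b * sqrt(2)

# (ii) hypoexponential density of A_1 + ... + A_d vs. a histogram of simulated sums
xi = np.array([np.prod([lam[n] / (lam[n] - lam[k]) for n in range(len(lam)) if n != k])
               for k in range(len(lam))])
z = np.linspace(0.01, 10.0, 200)
density = (xi[:, None] * lam[:, None] * np.exp(-lam[:, None] * z)).sum(axis=0)
samples = rng.exponential(1.0 / lam, size=(200_000, len(lam))).sum(axis=1)
hist, edges = np.histogram(samples, bins=50, range=(0, 10), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
print("max density gap:", np.abs(np.interp(centers, z, density) - hist).max())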

C Proof of Theorem 4

Proof: From the assumption that the data is linearly separable we conclude that the set $\{\mu : y_i\,\mu\cdot x_i \ge 0,\ i = 1,\dots,m\}$ is not empty. Additionally, the set is defined via linear constraints and is thus convex. The objective (7) is convex in $\sigma$, as its second derivative with respect to $\sigma$ is $1/\sigma^2 > 0$. The regularization term of (7) is convex in $\mu$, as the second derivative of $z + \exp(-z)$ is always positive and well defined for all values of $z$ (see also Remark 1 for a discussion of this function for values $z < 0$).

As for the loss term $\ell(y_i\,\mu\cdot x_i)$, we use the following auxiliary lemma.

Lemma 1. The following set of probability density functions over the reals,

$$ S = \Big\{ f \ \text{pdf} \;:\; f \in C^1,\ \ f(z) = f(-z),\ \ \text{and}\ \ \forall z_1, z_2,\ |z_2| > |z_1| \Rightarrow f(z_2) < f(z_1) \Big\}, $$

is closed under convolution, i.e. $f, g \in S \Rightarrow f * g \in S$.

Since the random variables $\omega_1,\dots,\omega_d$ are independent, the density $f_{Z_i}(z)$ of the margin $Z_i = y_i(\omega - \mu)\cdot x_i$ is obtained by convolving independent zero-mean Laplace distributed random variables $y_i(\omega_k - \mu_k)x_{i,k}$. Since the one-dimensional Laplace pdf is in $S$, it follows from Lemma 1 by induction that so is $f_{Z_i}$. As a member of $S$, the positivity of the derivative $f'_{Z_i}(z)$ for $z < 0$ is concluded from Lemma 1. Finally, we note that the integral of the density is $\ell_{cdf}$, the cumulative density function,

$$ E(x_i, y_i, \mu, \sigma) = \int_{-\infty}^{-y_i\,\mu\cdot x_i} f_{Z_i}(z)\, dz . $$

Thus, the second derivative of $E(x_i, y_i, \mu, \sigma)$ for positive values of the margin equals $f'_{Z_i}(z)$ evaluated at $z = -y_i\,\mu\cdot x_i < 0$, and is hence positive. Changing variables according to (6) completes the proof.

D Proof of Lemma 1

Proof: Assume $f, g \in S$ and denote $h = f * g$. The derivative of a convolution between two differentiable functions always exists, and equals

$$ h'(z) = \int_{-\infty}^{\infty} f(z - t)\, g'(t)\, dt = \int_{0}^{\infty} g'(t)\,\big[f(z - t) - f(z + t)\big]\, dt , $$

where the last equality follows from the fact that $g'(t)$ is an odd function, being the derivative of an even function. Since $f, g \in S$, $h \in C^1$, i.e. continuously differentiable almost everywhere, and since $h'(z)$ is odd we have that $h(z)$ is even. Using the monotonicity property of $f, g$, i.e. $|z_2| > |z_1| \Rightarrow f(z_2) < f(z_1)$, we get for $t > 0$ that $g'(t) \le 0$ and $\operatorname{sign}\big(f(z-t) - f(z+t)\big) = \operatorname{sign}(z)$, hence

$$ \operatorname{sign}\big(h'(z)\big) = -\operatorname{sign}(z). $$

Since $f, g$ are pdfs, the integral is always defined, and thus the sign of the derivative of $h$ depends only on the sign of its argument; in particular $h$ is an increasing function for $z < 0$ and a decreasing function for $z > 0$, yielding the third property for $h$. Thus $h \in S$, as desired.
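The closure property just proved can also be eyeballed numerically; the sketch below (an illustration only, with an arbitrary grid and arbitrary scale parameters) convolves two zero-mean Laplace densities, both members of $S$, and checks that the result is again even, decreasing in $|z|$, and normalized.

import numpy as np

grid = np.linspace(-20.0, 20.0, 4001)
dz = grid[1] - grid[0]
laplace_pdf = lambda b: np.exp(-np.abs(grid) / b) / (2 * b)   # zero-mean Laplace densities

f, g = laplace_pdf(1.0), laplace_pdf(2.5)
h = np.convolve(f, g, mode="same") * dz                       # approximate density of the sum

half = h[grid >= 0]
print("even:", bool(np.allclose(h, h[::-1], atol=1e-8)))      # h(z) = h(-z)
print("decreasing for z > 0:", bool(np.all(np.diff(half) <= 1e-12)))
print("integrates to ~1:", float(h.sum() * dz))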

E Proof of Lemma 5

Proof: Setting $\mu = 0$ and $\sigma = 1$, the objective becomes $cm\eta$. Since the loss is non-negative, we get that any minimizer $(\mu, \sigma)$ must satisfy that the regularization term alone is at most $cm\eta$. Substituting the optimal value of $\sigma$ from (8) into this bound and rearranging the resulting exponential inequality, we can conclude that

$$ \sigma \ \ge\ \exp\{-cm\eta\} . $$

F Proof of Theorem 6

Proof: While the empirical loss term depends only on $\mu$, and was proved to be strictly convex for examples that satisfy $y_i\,\mu\cdot x_i \ge 0$ in Theorem 4, the regularization term is optimized over both $\mu$ and $\sigma$. Incorporating the optimal value of $\sigma$ from (8) into the objective reduces the regularization to a single exponential term in $\mu$, yielding

$$ F(\mu, \sigma) = e^{(\cdot)} + c\sum_{i} \ell\big(y_i\,\mu\cdot x_i\big), $$

where $e^{(\cdot)}$ denotes that exponential term. Differentiating the regularization term twice with respect to $\mu$ results in the following Hessian matrix,

$$ H = e^{(\cdot)}\Big(\operatorname{diag}\big(e^{-|\mu|}\big) - v\,v^{\top}\Big), $$

for the $d$-dimensional vector $v = \operatorname{sign}(\mu) \odot e^{-|\mu|}$, where $\operatorname{diag}(e^{-|\mu|})$ is a diagonal matrix whose $k$-th diagonal element equals $e^{-|\mu_k|}$. The Hessian $H$ is a difference of two positive semi-definite matrices. We upper bound the maximal eigenvalue of the second term by its trace; indeed,

$$ \lambda_{\max}\big(e^{(\cdot)}\,v\,v^{\top}\big) = e^{(\cdot)}\,\|v\|^2 = e^{(\cdot)} \sum_{k} e^{-2|\mu_k|} < 1 . $$

Thus, the minimal eigenvalue of $H$ is bounded from below by $-1$, and the Hessian of the sum of the objective and $\tfrac{1}{2}\|\mu\|^2$ has positive eigenvalues; the sum is therefore strictly convex.

For the second part, we use [7, Corollary 7.2.3], stating that a diagonally-dominant matrix with non-negative diagonal values is PSD. We next show that $\|\mu\|_{\infty} \le 1$ is indeed a sufficient condition for the Hessian to be diagonally dominant. It is straightforward to verify that both conditions follow from requiring, for every $k = 1,\dots,d$, that the diagonal entry of $H$ dominates the sum of the absolute values of the off-diagonal entries in its row; dividing by the common exponential factor reduces this to a set of inequalities (7) in the quantities $e^{-|\mu_j|}$ alone.

Fixing $\mu_k$, the left-hand side of (7) decomposes into a sum of one-variable convex functions in the remaining coordinates; we minimize it over each $\mu_j$, $j \neq k$, by taking the derivative and setting it to zero. From here we conclude that (7) is satisfied if $\|\mu\|_{\infty} \le a$ for a scalar $a$ that satisfies

$$ g(a) \triangleq 2e^{-a} + a\,e^{-a} - 1 \ \ge\ 0 . $$

The function $g(a)$ is monotonically decreasing and continuous, with $g(1) = 3/e - 1 > 0$, which completes the proof. In fact, one can compute numerically and find that $a \approx 1.146$ satisfies $g(a) = 0$, which leads to a slightly better constant than stated in the theorem.
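To make the last numerical claim easy to check, here is a tiny script (an illustration only, using the form of $g$ as reconstructed above) that evaluates $g(a) = 2e^{-a} + a e^{-a} - 1$, confirms $g(1) = 3/e - 1 > 0$, and locates the root near $a \approx 1.146$ by plain bisection.

import math

# g(a) = 2*exp(-a) + a*exp(-a) - 1: continuous, strictly decreasing, g(1) = 3/e - 1 > 0.
g = lambda a: 2 * math.exp(-a) + a * math.exp(-a) - 1

print("g(1) =", g(1.0))                # ~0.1036, i.e. 3/e - 1
lo, hi = 1.0, 2.0                      # g(lo) > 0 > g(hi), so a root lies in between
for _ in range(60):                    # bisection down to machine precision
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
print("root of g:", 0.5 * (lo + hi))   # ~1.146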

G Proof of Lemma 7

Proof: We first need to compute $\ell_{lin}$ directly, as $\alpha_k(x)$ is not defined on the standard basis, which contains several elements of the same value:

$$ \Pr\big(e_j\cdot\omega \le 0\big) = \Pr\big(\omega_j \le 0\big) = \frac{1}{2\sigma}\int_{-\infty}^{0} e^{-|\omega - \mu_j|/\sigma}\, d\omega . $$

Thus, if $\mu_j \ge 0$ we get the convex part, $\Pr(e_j\cdot\omega \le 0) = \frac{1}{2}\exp\{-\mu_j/\sigma\}$. Otherwise, we bound $\Pr(e_j\cdot\omega \le 0)$ with the linear extension and get $\frac{1}{2} + \frac{|\mu_j|}{2\sigma}$. To conclude, for each element we get that

$$ \sum_{y = \pm 1} \ell_{lin}\big(y\, e_j\cdot\mu\big) = \frac{1}{2}\exp\Big\{-\frac{|\mu_j|}{\sigma}\Big\} + \frac{1}{2} + \frac{|\mu_j|}{2\sigma} . $$

Taking the sum over $j$ and multiplying by 2 yields the above regularization term.

H RobuCoP Pseudo-code

Input: training set $S = \{(x_i, y_i)\}_{i=1}^{m}$, $c > 0$; artificial set $A = \{(e_j, y) : j = 1,\dots,d,\ y \in Y\}$.
Initialization: $\mu^0 = 0$.
Loop, for $n = 1, 2, \dots$, until a convergence criterion is met:
    Set: $\sigma^{n}$ to the optimal value of (8) given $\mu^{n-1}$ (a closed-form exponential expression).
    Solve: $\mu^{n} = \arg\min_{\mu} \sum_{(x_i, y_i) \in S \cup A} c_i\, \ell_{lin}\big(y_i\,\mu\cdot x_i\big)$, with $c_i = c$ for $(x_i, y_i) \in S$ and $c_i = 1$ for $(x_i, y_i) \in A$.
Output: $\mu$, $\sigma$.

I Proof of Lemma 8

Proof: Denote the change of the loss term of (2) between consecutive iterations by

$$ \Delta_t \triangleq \sum_{i} D_i\, e^{-y_i\,x_i\cdot\mu_t} - \sum_{i} D_i\, e^{-y_i\,x_i\cdot\mu_{t+1}} . $$

We start by bounding $\Delta_t$ from below, and then add to it the difference of the regularization term before and after the update. Bounding the improvement for a single example, written in terms of the normalized weights $q_i^t \propto D_i e^{-y_i\,x_i\cdot\mu_t}$ and the step $\delta_{t,j} = \mu_{t+1,j} - \mu_{t,j}$ taken on the updated coordinate $j$, we get

$$ \Delta_{t,i} \ \ge\ c\, q_i^t\Big(1 - e^{-y_i x_{i,j}\,\delta_{t,j}}\Big). $$

Convexity of the exponent yields, for every $j$ with $|x_{i,j}| \le 1$,

$$ e^{-y_i x_{i,j}\,\delta_{t,j}} \ \le\ 1 - |x_{i,j}| + |x_{i,j}|\, e^{-\operatorname{sign}(y_i x_{i,j})\,\delta_{t,j}} . $$

Summing over the examples and splitting them according to the sign of $y_i x_{i,j}$, we obtain

$$ \Delta_t \ \ge\ c\Big[\gamma_j^{+}\big(1 - e^{-\delta_{t,j}}\big) + \gamma_j^{-}\big(1 - e^{\delta_{t,j}}\big)\Big], \qquad \gamma_j^{\pm} \triangleq \sum_{i\,:\,\operatorname{sign}(y_i x_{i,j}) = \pm 1} q_i^t\, |x_{i,j}| . $$

Adding the difference of the regularization terms before and after the update completes the proof.
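As a quick numerical illustration (not from the paper; the grids below are arbitrary), the convexity bound invoked in the proof of Lemma 8 can be checked directly: for $|s| \le 1$ and any real $\delta$, $e^{-s\delta} \le 1 - |s| + |s|\,e^{-\operatorname{sign}(s)\delta}$, since $-s\delta$ is a convex combination of $0$ and $-\operatorname{sign}(s)\delta$ with weights $1 - |s|$ and $|s|$.

import numpy as np

# Verify e^{-s*delta} <= 1 - |s| + |s| * e^{-sign(s)*delta} on a grid of (s, delta) pairs.
s = np.linspace(-1.0, 1.0, 201)[:, None]      # plays the role of y_i * x_{i,j} in [-1, 1]
delta = np.linspace(-3.0, 3.0, 301)[None, :]  # candidate update steps

lhs = np.exp(-s * delta)
rhs = 1 - np.abs(s) + np.abs(s) * np.exp(-np.sign(s) * delta)
print("bound holds everywhere:", bool(np.all(lhs <= rhs + 1e-12)))
print("largest slack:", float((rhs - lhs).max()))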

J Proof of Lemma 9

Proof: Without loss of generality we assume that $\gamma_j^{+} e^{-\mu_{t,j}} - \gamma_j^{-} e^{\mu_{t,j}} > 0$, and in addition we assume, for the sake of contradiction, that $\mu_{t+1,j} < \mu_{t,j}$. Differentiating the objective with respect to $\mu_{t+1,j}$ and equating to zero yields a stationarity condition; arranging its terms puts on the right-hand side a quantity that is assumed to be strictly positive, while on the left-hand side the contradiction assumption, together with the monotonicity of the exponential terms $\gamma_j^{+} e^{-\mu_{t+1,j}}$ and $\gamma_j^{-} e^{\mu_{t+1,j}}$ and the assumption above, forces a strictly negative value. This is a contradiction, so we must have $\mu_{t+1,j} \ge \mu_{t,j}$. The proof for the symmetric case follows similarly.

K Experiments: Data Details

Synthetic data: We generated 4,000 vectors $x_i \in \mathbb{R}^{8}$ sampled from a zero-mean isotropic normal distribution, $x_i \sim N(0, I)$. Labels were assigned by generating, once per run, a vector $\omega \in \mathbb{R}^{8}$ at random and using $y_i = \operatorname{sign}(\omega\cdot x_i)$. Each input $x_i$ in the training data was then corrupted with probability $p$ by adding to it a random vector sampled from a zero-mean isotropic Gaussian, $\epsilon_i \sim N(0, \sigma^2 I)$, with some positive standard deviation $\sigma$. Each run was repeated 20 times, and the reported results are the average test error over the 20 runs. All boosting algorithms were run for 1,000 iterations, except for the RobuCoP algorithm, which was executed until a convergence criterion was met, often after about 20 rounds.

Vocal Joystick: For each problem, we picked three sets of size 2,000 each, for training, parameter tuning, and testing. Each example is a frame of spoken value, described with 13 MFCC coefficients transformed into 27 features. In order to examine the robustness of the different algorithms, we contaminated 10% of the data with additive zero-mean i.i.d. Gaussian noise, for different values of the standard deviation $\sigma$.
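For reference, here is a short data-generation sketch in Python for the synthetic setup above; the sample count follows the reconstruction above, while the corruption probability p, the noise level, and the seed are free parameters rather than values fixed by the paper.

import numpy as np

def make_synthetic(n=4000, d=8, p=0.2, noise_std=1.0, seed=0):
    """Generate noisy linearly-separable data following the synthetic setup (sketch)."""
    rng = np.random.default_rng(seed)
    w_true = rng.standard_normal(d)                 # drawn once per run
    X = rng.standard_normal((n, d))                 # x_i ~ N(0, I)
    y = np.sign(X @ w_true)                         # y_i = sign(w . x_i)
    corrupt = rng.random(n) < p                     # each input corrupted with probability p
    X_noisy = X + corrupt[:, None] * rng.normal(0.0, noise_std, size=(n, d))
    return X_noisy, y, w_true

X, y, w_true = make_synthetic()
print(X.shape, y.shape, float((y == 1).mean()))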
