Least-Squares Regression on Sparse Spaces
Yuri Grinberg, Mahdi Milani Fard, Joelle Pineau
School of Computer Science, McGill University, Montreal, Canada

1 Introduction

Compressed sampling has been studied in the context of regression theory from two perspectives. One is when, given a training set, we aim to compress the set into a smaller size by combining training instances using random projections (see e.g. [1]). Such a method is useful, for instance, when the training set is too large or one has to handle privacy issues. Another application is when one uses random projections to project each input vector into a lower-dimensional space, and then trains a predictor in the new compressed space (compression on the feature space). As is typical of dimensionality-reduction techniques, this will reduce the variance of most predictors at the expense of introducing some bias. Random projections on the feature space, along with least-squares predictors, are studied in [2], and the method is shown to reduce the estimation error at the price of a controlled approximation error. The analysis in [2] provides on-sample error bounds and extends them to bounds on the sampling measure, assuming an i.i.d. sampling strategy. This paper presents the bias-variance analysis of regression in compressed spaces when random projections are applied to sparse input signals. We show that the sparsity assumption lets us work with arbitrary non-i.i.d. sampling strategies, and we derive a worst-case bound on the entire space. Such a bound can be used to select the optimal size of the projection, so as to minimize the sum of expected estimation and prediction errors. It also provides the means to compare the error of linear predictors in the original and compressed spaces.

2 Notation and Sparsity Assumption

Throughout this paper, column vectors are represented by lower-case bold letters, and matrices are represented by bold capital letters. $|\cdot|$ denotes the size of a set, and $\|\cdot\|_0$ is Donoho's zero "norm", indicating the number of non-zero elements in a vector.
$\|\cdot\|$ denotes the $L_2$ norm for vectors and the operator norm for matrices: $\|M\| = \sup_v \|Mv\| / \|v\|$. We denote the Moore-Penrose pseudo-inverse of a matrix $M$ by $M^\dagger$ and the smallest singular value of $M$ by $\sigma^M_{\min}$. We will be working in sparse input spaces for our prediction task. Our input is represented by a vector $x \in \mathcal{X}$ of $D$ features, with $\|x\| \le 1$. We assume that $x$ is $k$-sparse in some known or unknown basis $\Psi$, implying that $\mathcal{X} \subseteq \{\Psi z \text{ s.t. } \|z\|_0 \le k \text{ and } \|z\| \le 1\}$.

3 Random Projections and Inner Products

It is well known that random projections of appropriate sizes preserve enough information for exact reconstruction with high probability (see e.g. [3, 4]). In this section, we show that a function that is almost linear in the original space remains almost linear in the projected space, when random projections of appropriate sizes are used.
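As a concrete illustration of this input model, the sketch below (toy sizes of our own choosing; here $\Psi$ is simply a random orthonormal basis) builds a $k$-sparse input $x = \Psi z$ with $\|z\|_0 \le k$ and $\|z\| \le 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
D, k = 100, 5                      # ambient dimension and sparsity level

# An orthonormal basis Psi in which inputs are sparse (here: random orthogonal).
Psi, _ = np.linalg.qr(rng.normal(size=(D, D)))

# A k-sparse coefficient vector z with ||z||_0 <= k and ||z|| <= 1.
z = np.zeros(D)
support = rng.choice(D, size=k, replace=False)
z[support] = rng.normal(size=k)
z /= np.linalg.norm(z)             # scale so that ||z|| = 1

# The input lives in X = {Psi z : ||z||_0 <= k, ||z|| <= 1}; since Psi is
# orthonormal, ||x|| = ||z|| = 1 as required.
x = Psi @ z
```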
There are several types of random projection matrices that can be used. In this work, we assume that each entry in a projection $\Phi \in \mathbb{R}^{D \times d}$ is an i.i.d. sample from a Gaussian:¹

$$\phi_{i,j} \sim \mathcal{N}(0, 1/d). \qquad (1)$$

We build our work on the following theorem, based on Theorem 4.1 of [3], which shows that for a finite set of points, the inner product with a fixed vector is almost preserved after a random projection.

Theorem 1. Let $\Phi \in \mathbb{R}^{D \times d}$ be a random projection according to Eqn 1, and let $S$ be a finite set of points in $\mathbb{R}^D$. Then for any fixed $w \in \mathbb{R}^D$ and $\epsilon > 0$, the event

$$\forall s \in S: \ |\langle \Phi^T w, \Phi^T s \rangle - \langle w, s \rangle| \le \epsilon \|w\| \|s\| \qquad (2)$$

fails with probability less than $(4|S| + 2)\,e^{-d\epsilon^2/48}$.

The above theorem is based on the well-known Johnson-Lindenstrauss lemma (see [3]), which considers random projections of finite sets of points. We derive the corresponding theorem for sparse feature spaces.

Theorem 2. Let $\Phi \in \mathbb{R}^{D \times d}$ be a random projection according to Eqn 1, and let $\mathcal{X}$ be a $D$-dimensional $k$-sparse space. Then for any fixed $w$ and $\epsilon > 0$, the event

$$\forall x \in \mathcal{X}: \ |\langle \Phi^T w, \Phi^T x \rangle - \langle w, x \rangle| \le \epsilon \|w\| \|x\| \qquad (3)$$

fails with probability less than:

$$(eD/k)^k \left( 4(12/\epsilon)^k + 2 \right) e^{-d\epsilon^2/192} \le e^{k \log(12eD/\epsilon k) - d\epsilon^2/192 + \log 5}.$$

Note that the above theorem does not require $w$ to be in the sparse space, and thus differs from guarantees on the preservation of inner products between vectors within the sparse space.

Proof of Theorem 2. The proof follows the steps of the proof of Theorem 5.2 in [5]. Because $\Phi$ is a linear transformation, we only need to prove the theorem for $\|w\| = \|x\| = 1$. Denote by $\Psi$ the basis with respect to which $\mathcal{X}$ is sparse. Let $T \subseteq \{1, 2, \dots, D\}$ be any set of $k$ indexes. For each set of indexes $T$, we define a $k$-dimensional hyperplane in the $D$-dimensional input space: $\mathcal{X}_T \triangleq \{\Psi z \text{ s.t. } z \text{ is zero outside } T \text{ and } \|z\| \le 1\}$. By definition we have $\mathcal{X} = \bigcup_T \mathcal{X}_T$. We first show that Eqn 3 holds for each $\mathcal{X}_T$, and then use the union bound to prove the theorem. For any given $T$, we choose a set $S \subseteq \mathcal{X}_T$ such that:

$$\forall x \in \mathcal{X}_T: \ \min_{s \in S} \|x - s\| \le \epsilon/4. \qquad (4)$$

It is easy to prove (see e.g.
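Theorem 1's inner-product preservation is easy to check numerically. The sketch below (the dimensions and the 0.5 tolerance are our own choices for illustration, not the theorem's constants) draws a single projection with $\mathcal{N}(0, 1/d)$ entries and compares an inner product before and after projection:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 1000, 400                   # original and projected dimensions

# Two fixed unit vectors, so ||w|| ||s|| = 1 on the right-hand side of Eqn 2.
w = rng.normal(size=D)
w /= np.linalg.norm(w)
s = rng.normal(size=D)
s /= np.linalg.norm(s)

# Projection with i.i.d. N(0, 1/d) entries, as in Eqn 1.
Phi = rng.normal(0.0, 1.0 / np.sqrt(d), size=(D, d))

original = w @ s
projected = (Phi.T @ w) @ (Phi.T @ s)
err = abs(projected - original)    # small with high probability for d this large
```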
Chapter 13 of [6]) that these conditions can be satisfied by choosing a grid of size $|S| \le (12/\epsilon)^k$, since $\mathcal{X}_T$ is a $k$-dimensional hyperplane in $\mathbb{R}^D$ ($S$ fills up the space within distance $\epsilon/4$). Now applying Theorem 1 with $\|w\| = 1$, we have that the event

$$\forall s \in S: \ |\langle \Phi^T w, \Phi^T s \rangle - \langle w, s \rangle| \le \frac{\epsilon}{2} \|s\| \qquad (5)$$

fails with probability less than $(4(12/\epsilon)^k + 2)\,e^{-d\epsilon^2/192}$. Let $a$ be the smallest number such that:

$$\forall x \in \mathcal{X}_T: \ |\langle \Phi^T w, \Phi^T x \rangle - \langle w, x \rangle| \le a \|x\| \qquad (6)$$

holds when Eqn 5 holds. The goal is to show that $a \le \epsilon$. For any given $x \in \mathcal{X}_T$, we choose an $s \in S$ for which $\|x - s\| \le \epsilon/4$. Therefore we have:

$$|\langle \Phi^T w, \Phi^T x \rangle - \langle w, x \rangle| \le |\langle \Phi^T w, \Phi^T x \rangle - \langle \Phi^T w, \Phi^T s \rangle - \langle w, x \rangle + \langle w, s \rangle| + |\langle \Phi^T w, \Phi^T s \rangle - \langle w, s \rangle| \qquad (7, 8)$$
$$= |\langle \Phi^T w, \Phi^T (x - s) \rangle - \langle w, x - s \rangle| + |\langle \Phi^T w, \Phi^T s \rangle - \langle w, s \rangle| \qquad (9, 10)$$
$$\le a\epsilon/4 + \epsilon/2. \qquad (11)$$

¹The elements of the projection are typically taken to be distributed as $\mathcal{N}(0, 1/D)$, but we scale them by $\sqrt{D/d}$ so as to avoid scaling the projected values (see e.g. [3]).
The last line is by the definition of $a$, and by applying Eqn 5 (which holds with high probability). Because of the definition of $a$, there is an $x \in \mathcal{X}_T$ (and by scaling, one of size 1) for which Eqn 6 is tight. Therefore we have $a \le a\epsilon/4 + \epsilon/2$, which proves $a \le \epsilon$ for any choice of $\epsilon < 1$. Note that there are $\binom{D}{k}$ possible sets $T$. Since $\binom{D}{k} \le (eD/k)^k$ and $\mathcal{X} = \bigcup_T \mathcal{X}_T$, the union bound gives us that the theorem fails with probability less than $(eD/k)^k (4(12/\epsilon)^k + 2)\,e^{-d\epsilon^2/192}$.

4 Bias-Variance Analysis of Ordinary Least-Squares

In this section, we analyze the worst-case prediction error made by the ordinary least-squares (OLS) solution. For completeness, we provide bounds on OLS in the original space, which is partly a classical result in linear prediction theory. Then, we proceed to the main result of this paper, which is the bias-variance analysis of OLS in the projected space. We seek to predict a signal $f$ that is assumed to be a near-linear function of $x \in \mathcal{X}$:

$$f(x) = x^T w + b_f(x), \text{ where } |b_f(x)| \le \epsilon_f, \qquad (12)$$

for some $\epsilon_f > 0$, and where we assume $\|w\| \le 1$. We are given a training set of $n$ input-output pairs, consisting of a full-rank input matrix $X \in \mathbb{R}^{n \times D}$, along with noisy observations of $f$:

$$y = Xw + b_f + \eta, \qquad (13)$$

where, overloading the notation for the additive bias term, $b_{f,i} = b_f(x_i)$; and we assume a homoscedastic noise term $\eta$, a vector of i.i.d. random variables distributed as $\mathcal{N}(0, \sigma_\eta^2)$. Given the above, we seek a predictor that, for any query $x \in \mathcal{X}$, predicts the target signal $f(x)$. The following lemma provides a bound on the worst-case error of the ordinary least-squares predictor.

Lemma 3. Let $w_{\text{ols}}$ be the OLS solution of Eqn 13 with additive bias bounded by $\epsilon_f$ and i.i.d. noise with variance $\sigma_\eta^2$. Then for any $0 < \delta_{\text{var}} \le \sqrt{2/(e\pi)}$, for all $x \in \mathcal{X}$, with probability no less than $1 - \delta_{\text{var}}$, the error of the OLS prediction satisfies:

$$|f(x) - x^T w_{\text{ols}}| \le \|x\| \|X^\dagger\| \left( \epsilon_f \sqrt{n} + \sigma_\eta \sqrt{\log(2/(\pi \delta_{\text{var}}^2))} \right) + \epsilon_f. \qquad (14)$$

Proof of Lemma 3. For the OLS solution of Eqn 13 we have:

$$w_{\text{ols}} = X^\dagger y = X^\dagger (Xw + b_f + \eta) = w + X^\dagger b_f + X^\dagger \eta. \qquad (15)$$
Therefore, for all $x \in \mathcal{X}$, the error satisfies:

$$|f(x) - x^T w_{\text{ols}}| \le |x^T w_{\text{ols}} - x^T w| + \epsilon_f \qquad (16)$$
$$\le |x^T X^\dagger b_f| + |x^T X^\dagger \eta| + \epsilon_f. \qquad (17)$$

For the first term on the right-hand side (part of the prediction bias), we have:

$$|x^T X^\dagger b_f| \le \|x^T X^\dagger\| \|b_f\| \le \|x\| \|X^\dagger\| \epsilon_f \sqrt{n}. \qquad (18)$$

For the second term in line 17 (the prediction variance), the expectation of $x^T X^\dagger \eta$ is 0, as $\eta$ is independent of the data and has zero expectation. It is also a weighted sum of normally distributed random variables, and thus is normal with variance:

$$\text{Var}[x^T X^\dagger \eta] = \mathbb{E}[x^T X^\dagger \eta \eta^T (X^\dagger)^T x] \qquad (19)$$
$$= \sigma_\eta^2 \, x^T X^\dagger (X^\dagger)^T x \qquad (20)$$
$$\le \sigma_\eta^2 \, \|x^T X^\dagger\| \|(X^\dagger)^T x\| \qquad (21)$$
$$\le \sigma_\eta^2 \, \|x\|^2 \|X^\dagger\|^2, \qquad (22)$$

where in line 20 we used the i.i.d. assumption on the noise. We can thereby bound $x^T X^\dagger \eta$ by the tail probability of the normal distribution as needed. Using a standard upper bound on the tail probability of normals, when $0 < \delta_{\text{var}} \le \sqrt{2/(e\pi)}$, with probability no less than $1 - \delta_{\text{var}}$:

$$|x^T X^\dagger \eta| \le \sigma_\eta \|x\| \|X^\dagger\| \sqrt{\log(2/(\pi \delta_{\text{var}}^2))}. \qquad (23)$$

Adding up the bias and the variance terms gives the bound in the lemma.
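Eqn 15 can be sanity-checked in the bias-free, noise-free case ($b_f = 0$, $\eta = 0$), where the pseudo-inverse solution recovers $w$ exactly for a full-column-rank $X$. A minimal sketch with our own toy sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
n, D = 200, 50                     # n > D, so X has full column rank w.p. 1

X = rng.normal(size=(n, D))        # full-rank input matrix
w = rng.normal(size=D)
w /= np.linalg.norm(w)             # ||w|| <= 1, as assumed in Eqn 12

y = X @ w                          # Eqn 13 with b_f = 0 and eta = 0
w_ols = np.linalg.pinv(X) @ y      # w_ols = X^dagger y = w + X^dagger b_f + X^dagger eta

err = np.linalg.norm(w_ols - w)    # zero up to numerical precision
```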
5 Compressed Ordinary Least-Squares

We are now ready to study an upper bound on the worst-case error of the OLS predictor in a compressed space. In this setting, we first project the inputs into a lower-dimensional space using random projections, and then use the OLS estimator on the compressed input signals.

Theorem 4. Let $\Phi \in \mathbb{R}^{D \times d}$ be a random projection according to Eqn 1, and let $w^\Phi_{\text{ols}}$ be the OLS solution in the compressed space induced by the projection. Assume an additive bias in the original space bounded by some $\epsilon_f > 0$ and i.i.d. noise with variance $\sigma_\eta^2$. Choose any $0 < \delta_{\text{prj}}, \delta_\Phi < 1$ and $0 < \delta_{\text{var}} \le \sqrt{2/(e\pi)}$. Then, with probability no less than $1 - (\delta_{\text{prj}} + \delta_\Phi)$, we have for all $x \in \mathcal{X}$, with probability no less than $1 - \delta_{\text{var}}$:

$$|f(x) - x^T \Phi w^\Phi_{\text{ols}}| \le \frac{(\epsilon_f + \epsilon_{\text{prj}}) \sqrt{n} + \sigma_\eta \sqrt{\log(2/(\pi \delta_{\text{var}}^2))}}{\sigma^X_{\min} \left( \sqrt{D/d} - 1 - \sqrt{(2/d) \log(2/\delta_\Phi)} \right)} + \epsilon_f + \epsilon_{\text{prj}}, \qquad (24, 25)$$

where $\epsilon_{\text{prj}} = c \sqrt{k \log(12eD/(k \delta_{\text{prj}}))/d}$ for a constant $c$.

Proof of Theorem 4. Using Theorem 2, the following holds with probability no less than $1 - \delta_{\text{prj}}$:

$$f(x) = (\Phi^T x)^T (\Phi^T w) + b_f(x) + b_{\text{prj}}(x), \qquad (26)$$

where $|b_f(x)| \le \epsilon_f$ and $|b_{\text{prj}}(x)| \le \epsilon_{\text{prj}}$. Note that $\|(X\Phi)^\dagger\| \le \|X^\dagger\| \|\Phi^\dagger\| \le 1/(\sigma^X_{\min} \sigma^\Phi_{\min})$. Using the bound discussed in [7], with probability $1 - \delta_\Phi$:

$$\sigma^\Phi_{\min} \ge \sqrt{D/d} - 1 - \sqrt{(2/d) \log(2/\delta_\Phi)}.$$

Now, using Lemma 3 with the form of the function described in Eqn 26, we have:

$$|f(x) - x^T \Phi w^\Phi_{\text{ols}}| \le \|x^T \Phi\| \|(X\Phi)^\dagger\| \left( (\epsilon_f + \epsilon_{\text{prj}}) \sqrt{n} + \sigma_\eta \sqrt{\log(2/(\pi \delta_{\text{var}}^2))} \right) + \epsilon_f + \epsilon_{\text{prj}}, \qquad (27)$$

which yields the theorem after substituting $\epsilon_{\text{prj}}$ and the matrix norms.

Assuming $\epsilon_f = 0$ for simplicity, we can rewrite the bound as:

$$|f(x) - x^T \Phi w^\Phi_{\text{ols}}| \le \tilde{O}\!\left( \frac{1}{\sigma^X_{\min}} \sqrt{\frac{n}{D}} \sqrt{\frac{k \log(D/k)}{d}} \right) + \tilde{O}\!\left( \frac{\sigma_\eta}{\sigma^X_{\min}} \sqrt{\frac{d}{D}} \right).$$

The first $\tilde{O}$ term on the right-hand side is a part of the bias due to the projection. The second $\tilde{O}$ term is the variance term. This bound is particularly useful when $n > D$. Under that assumption, in order to illustrate a clearer bias-variance trade-off, assume that $X_{i,j}$ is drawn from $\mathcal{N}(0, 1/D)$. Then we have $\sigma^X_{\min} \simeq \sqrt{n/D}$.
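The procedure analyzed in Theorem 4 — compress the inputs with a random $\Phi$, then run OLS in the $d$-dimensional space — can be sketched on synthetic data as follows (sizes and noise level are our own; this illustrates the estimator, not the bound's constants):

```python
import numpy as np

rng = np.random.default_rng(0)
n, D, d = 500, 200, 40             # samples, ambient dimension, compressed dimension

# Synthetic near-linear target in the original space (Eqn 12, with eps_f = 0).
X = rng.normal(0.0, 1.0 / np.sqrt(D), size=(n, D))
w = rng.normal(size=D)
w /= np.linalg.norm(w)
y = X @ w + rng.normal(0.0, 0.1, size=n)   # noisy observations (Eqn 13)

# Random projection with i.i.d. N(0, 1/d) entries (Eqn 1).
Phi = rng.normal(0.0, 1.0 / np.sqrt(d), size=(D, d))

# OLS in the compressed space: fit on X Phi instead of X.
w_ols_c = np.linalg.pinv(X @ Phi) @ y

# A query point is predicted through the same projection: (x^T Phi) w_ols^Phi.
x = rng.normal(size=D)
x /= np.linalg.norm(x)
pred = (x @ Phi) @ w_ols_c
```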
Fixing the values of the $\delta$'s and ignoring the log term (a slowly growing function of $d$), we get that the error is bounded by:

$$c_0 + c_1 \sqrt{\frac{k \log(D/k)}{d}} + c_2 \, \sigma_\eta \sqrt{\frac{d}{n}},$$

in which case we clearly observe the trade-off with respect to the compressed dimension $d$. Now if $d < n < D$ with $X_{i,j} \sim \mathcal{N}(0, 1/D)$, we have $\sigma^X_{\min} \simeq 1 - \sqrt{n/D}$, which gives us the following bound:

$$\frac{1}{1 - \sqrt{n/D}} \left( c_0 + c_1 \sqrt{\frac{k \log(D/k)}{d}} + c_2 \, \sigma_\eta \sqrt{\frac{d}{n}} \right).$$

This bound is, however, counter-intuitive, as the error grows when $n$ is increased. This is due to the fact that the bound $\|(X\Phi)^\dagger\| \le \|X^\dagger\| \|\Phi^\dagger\|$ gets looser as $n$ approaches $D$. When $n$ is close to $D$, we may not have a tight closed-form bound on the $\|(X\Phi)^\dagger\|$ term, but we can still compute it empirically for any given value of $d$ by sampling a specific projection of the corresponding size.
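The closing remark — estimating $\|(X\Phi)^\dagger\|$ empirically for a given $d$ by sampling a projection — amounts to one singular-value decomposition per sampled $\Phi$. A sketch (sizes are our own, chosen so that $n < D$, the regime where the closed-form bound is loose):

```python
import numpy as np

rng = np.random.default_rng(1)
n, D = 300, 400                    # n < D

X = rng.normal(0.0, 1.0 / np.sqrt(D), size=(n, D))

def pinv_norm(d):
    """Empirical ||(X Phi)^dagger|| for one sampled projection of width d."""
    Phi = rng.normal(0.0, 1.0 / np.sqrt(d), size=(D, d))
    # Operator norm of the pseudo-inverse = 1 / smallest nonzero singular value.
    return 1.0 / np.linalg.svd(X @ Phi, compute_uv=False).min()

# Evaluate the norm at a few candidate compressed dimensions.
norms = {d: pinv_norm(d) for d in (10, 50, 150)}
```

Averaging `pinv_norm(d)` over several sampled projections would give a steadier estimate for selecting $d$.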
References

[1] S. Zhou, J. Lafferty, and L. Wasserman. Compressed regression. In Advances in Neural Information Processing Systems.
[2] O.-A. Maillard and R. Munos. Compressed least-squares regression. In Advances in Neural Information Processing Systems.
[3] M.A. Davenport, M.B. Wakin, and R.G. Baraniuk. Detection and estimation with compressive measurements. Dept. of ECE, Rice University, Tech. Rep.
[4] E.J. Candès and M.B. Wakin. An introduction to compressive sampling. IEEE Signal Processing Magazine, 25(2):21-30.
[5] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin. The Johnson-Lindenstrauss lemma meets compressed sensing. Constructive Approximation.
[6] G.G. Lorentz, M. von Golitschek, and Y. Makovoz. Constructive Approximation: Advanced Problems, volume 304. Springer, Berlin.
[7] E.J. Candès and T. Tao. Near-optimal signal recovery from random projections: Universal encoding strategies. IEEE Transactions on Information Theory, 52(12), 2006.
More informationResistant Polynomials and Stronger Lower Bounds for Depth-Three Arithmetical Formulas
Resistant Polynomials an Stronger Lower Bouns for Depth-Three Arithmetical Formulas Maurice J. Jansen University at Buffalo Kenneth W.Regan University at Buffalo Abstract We erive quaratic lower bouns
More informationPerfect Matchings in Õ(n1.5 ) Time in Regular Bipartite Graphs
Perfect Matchings in Õ(n1.5 ) Time in Regular Bipartite Graphs Ashish Goel Michael Kapralov Sanjeev Khanna Abstract We consier the well-stuie problem of fining a perfect matching in -regular bipartite
More informationHarmonic Modelling of Thyristor Bridges using a Simplified Time Domain Method
1 Harmonic Moelling of Thyristor Briges using a Simplifie Time Domain Metho P. W. Lehn, Senior Member IEEE, an G. Ebner Abstract The paper presents time omain methos for harmonic analysis of a 6-pulse
More informationAcute sets in Euclidean spaces
Acute sets in Eucliean spaces Viktor Harangi April, 011 Abstract A finite set H in R is calle an acute set if any angle etermine by three points of H is acute. We examine the maximal carinality α() of
More informationOn the Surprising Behavior of Distance Metrics in High Dimensional Space
On the Surprising Behavior of Distance Metrics in High Dimensional Space Charu C. Aggarwal, Alexaner Hinneburg 2, an Daniel A. Keim 2 IBM T. J. Watson Research Center Yortown Heights, NY 0598, USA. charu@watson.ibm.com
More informationThe Subtree Size Profile of Plane-oriented Recursive Trees
The Subtree Size Profile of Plane-oriente Recursive Trees Michael FUCHS Department of Applie Mathematics National Chiao Tung University Hsinchu, 3, Taiwan Email: mfuchs@math.nctu.eu.tw Abstract In this
More informationThe Role of Models in Model-Assisted and Model- Dependent Estimation for Domains and Small Areas
The Role of Moels in Moel-Assiste an Moel- Depenent Estimation for Domains an Small Areas Risto Lehtonen University of Helsini Mio Myrsylä University of Pennsylvania Carl-Eri Särnal University of Montreal
More informationA Spectral Method for the Biharmonic Equation
A Spectral Metho for the Biharmonic Equation Kenall Atkinson, Davi Chien, an Olaf Hansen Abstract Let Ω be an open, simply connecte, an boune region in Ê,, with a smooth bounary Ω that is homeomorphic
More informationStable and compact finite difference schemes
Center for Turbulence Research Annual Research Briefs 2006 2 Stable an compact finite ifference schemes By K. Mattsson, M. Svär AND M. Shoeybi. Motivation an objectives Compact secon erivatives have long
More informationON ISENTROPIC APPROXIMATIONS FOR COMPRESSIBLE EULER EQUATIONS
ON ISENTROPIC APPROXIMATIONS FOR COMPRESSILE EULER EQUATIONS JUNXIONG JIA AND RONGHUA PAN Abstract. In this paper, we first generalize the classical results on Cauchy problem for positive symmetric quasilinear
More informationFast image compression using matrix K-L transform
Fast image compression using matrix K-L transform Daoqiang Zhang, Songcan Chen * Department of Computer Science an Engineering, Naning University of Aeronautics & Astronautics, Naning 2006, P.R. China.
More informationMath 342 Partial Differential Equations «Viktor Grigoryan
Math 342 Partial Differential Equations «Viktor Grigoryan 6 Wave equation: solution In this lecture we will solve the wave equation on the entire real line x R. This correspons to a string of infinite
More informationSurvey Sampling. 1 Design-based Inference. Kosuke Imai Department of Politics, Princeton University. February 19, 2013
Survey Sampling Kosuke Imai Department of Politics, Princeton University February 19, 2013 Survey sampling is one of the most commonly use ata collection methos for social scientists. We begin by escribing
More information19 Eigenvalues, Eigenvectors, Ordinary Differential Equations, and Control
19 Eigenvalues, Eigenvectors, Orinary Differential Equations, an Control This section introuces eigenvalues an eigenvectors of a matrix, an iscusses the role of the eigenvalues in etermining the behavior
More informationFunction Spaces. 1 Hilbert Spaces
Function Spaces A function space is a set of functions F that has some structure. Often a nonparametric regression function or classifier is chosen to lie in some function space, where the assume structure
More informationPermanent vs. Determinant
Permanent vs. Determinant Frank Ban Introuction A major problem in theoretical computer science is the Permanent vs. Determinant problem. It asks: given an n by n matrix of ineterminates A = (a i,j ) an
More informationAgmon Kolmogorov Inequalities on l 2 (Z d )
Journal of Mathematics Research; Vol. 6, No. ; 04 ISSN 96-9795 E-ISSN 96-9809 Publishe by Canaian Center of Science an Eucation Agmon Kolmogorov Inequalities on l (Z ) Arman Sahovic Mathematics Department,
More informationTechnion - Computer Science Department - M.Sc. Thesis MSC Constrained Codes for Two-Dimensional Channels.
Technion - Computer Science Department - M.Sc. Thesis MSC-2006- - 2006 Constraine Coes for Two-Dimensional Channels Keren Censor Technion - Computer Science Department - M.Sc. Thesis MSC-2006- - 2006 Technion
More informationSwitching Time Optimization in Discretized Hybrid Dynamical Systems
Switching Time Optimization in Discretize Hybri Dynamical Systems Kathrin Flaßkamp, To Murphey, an Sina Ober-Blöbaum Abstract Switching time optimization (STO) arises in systems that have a finite set
More informationNode Density and Delay in Large-Scale Wireless Networks with Unreliable Links
Noe Density an Delay in Large-Scale Wireless Networks with Unreliable Links Shizhen Zhao, Xinbing Wang Department of Electronic Engineering Shanghai Jiao Tong University, China Email: {shizhenzhao,xwang}@sjtu.eu.cn
More informationALGEBRAIC AND ANALYTIC PROPERTIES OF ARITHMETIC FUNCTIONS
ALGEBRAIC AND ANALYTIC PROPERTIES OF ARITHMETIC FUNCTIONS MARK SCHACHNER Abstract. When consiere as an algebraic space, the set of arithmetic functions equippe with the operations of pointwise aition an
More informationWEIGHTING A RESAMPLED PARTICLE IN SEQUENTIAL MONTE CARLO. L. Martino, V. Elvira, F. Louzada
WEIGHTIG A RESAMPLED PARTICLE I SEQUETIAL MOTE CARLO L. Martino, V. Elvira, F. Louzaa Dep. of Signal Theory an Communic., Universia Carlos III e Mari, Leganés (Spain). Institute of Mathematical Sciences
More information