Stat260/CS294: Randomized Algorithms for Matrices and Data    Lecture 7 - 09/25/2013

Lecture 7: Sampling/Projections for Least-squares Approximation, Cont.

Lecturer: Michael Mahoney    Scribe: Michael Mahoney

Warning: these notes are still very rough. They provide more details on what we discussed in class, but there may still be some errors, incomplete/imprecise statements, etc. in them.

7 Sampling/Projections for Least-squares Approximation, Cont.

We continue with the discussion from last time. There is no new reading, just the same as last class.

Recall that last time we provided a brief overview of LS problems and a brief overview of sketching methods for LS problems. For the latter, we provided a lemma showing that, under certain conditions, the solution of a sketched LS problem is a good approximation to the solution of the original LS problem, where "good" is with respect to the objective function value. Today, we will focus on three things.

- Establishing goodness results for the sketched LS problem, where goodness is with respect to the certificate or solution vector.
- Relating these two structural conditions, and the satisfaction of the two conditions by random sampling/projection, to exact and approximate matrix multiplication algorithms.
- Putting everything together into two basic (but still slow; we'll speed them up soon enough) RandNLA algorithms for the LS problem.

7.1 Deterministic and randomized sketches and LS problems, cont.

Last time we identified two structural conditions, and we proved that if those structural conditions are satisfied by a sketching matrix, then the subproblem defined by that sketching matrix has a solution that is a relative-error approximation to the original problem, i.e., the objective function value of the original problem is approximated well. Now, we will prove that the vector itself solving the subproblem is a good approximation of the vector solving the original problem.
After that, we will show that random sampling and random projection matrices satisfy those two structural conditions, for appropriate values of the parameter settings.

Lemma 1. Same setup as the previous lemma. Then,

    ‖x_opt − x̃_opt‖₂ ≤ (1/σ_min(A)) √ɛ Z.    (1)
Proof: If we use the same notation as in the proof of the previous lemma, then A(x_opt − x̃_opt) = U_A z_opt. If we take the norm of both sides of this expression, we have that

    ‖x_opt − x̃_opt‖₂² ≤ ‖U_A z_opt‖₂² / σ_min²(A)    (2)
                      ≤ ɛ Z² / σ_min²(A),    (3)

where (2) follows since σ_min(A) is the smallest singular value of A and since the rank of A is d; and (3) follows by a result in the proof of the previous lemma and the orthogonality of U_A. Taking the square root, the claim of the lemma follows.

If we make no assumption on b, then (1) from Lemma 1 may provide a weak bound in terms of ‖x_opt‖₂. If, on the other hand, we make the additional assumption that a constant fraction of the norm of b lies in the subspace spanned by the columns of A, then (1) can be strengthened. Such an assumption is reasonable, since most least-squares problems are practically interesting only if at least some part of b lies in the subspace spanned by the columns of A.

Lemma 2. Same setup as the previous lemma, and assume that ‖U_A U_A^T b‖₂ ≥ γ ‖b‖₂, for some fixed γ ∈ (0, 1]. Then, it follows that

    ‖x_opt − x̃_opt‖₂ ≤ √ɛ κ(A) √(γ⁻² − 1) ‖x_opt‖₂.    (4)

Proof: Since ‖U_A U_A^T b‖₂ ≥ γ ‖b‖₂, it follows that

    Z² = ‖b‖₂² − ‖U_A U_A^T b‖₂²
       ≤ (γ⁻² − 1) ‖U_A U_A^T b‖₂²
       ≤ σ_max²(A) (γ⁻² − 1) ‖x_opt‖₂².

The last inequality follows from U_A U_A^T b = A x_opt, which implies

    ‖U_A U_A^T b‖₂ = ‖A x_opt‖₂ ≤ ‖A‖₂ ‖x_opt‖₂ = σ_max(A) ‖x_opt‖₂.

By combining this with eqn. (1) of Lemma 1, the lemma follows.

7.2 Connections with exact and approximate matrix multiplication

7.2.1 An aside: approximating matrix multiplication for vector inputs

Before continuing with our discussion of LS regression, here is a simple example of applying the matrix multiplication ideas that might help shed some light on the form of the bounds, as well as on when the bounds are tight and when they are not.

Let's say that we have two vectors x, y ∈ R^n and we want to approximate their product by random sampling. In this case, we are approximating x^T y as x^T S S^T y, where S is a random sampling
matrix that, let's assume, is constructed with nearly optimal sampling probabilities. Then, our main bound says that, under appropriate assumptions on the number of random samples we draw, we get a bound of the form

    ‖x^T y − x^T S S^T y‖_F ≤ ɛ ‖x‖_F ‖y‖_F,

which, since we are dealing with the product of two vectors, simplifies to

    |x^T y − x^T S S^T y| ≤ ɛ ‖x‖₂ ‖y‖₂.

The question is: when is this bound tight, and when is this bound loose, as a function of the input data? Clearly, if x ⊥ y, i.e., if x^T y = 0, then this bound will be weak. On the other hand, if y = x, then x^T y = x^T x = ‖x‖₂², in which case this bound says that

    |x^T x − x^T S S^T x| ≤ ɛ ‖x‖₂²,

meaning that the algorithm provides a relative-error guarantee on x^T x = ‖x‖₂². (We can make similar statements more generally if we are multiplying two rectangular orthogonal matrices to form a low-dimensional identity, and this is important for providing subspace-preserving sketches.) The lesson here is that when there is cancellation the bound is weak, and that the scales set by the norms of the component matrices are in some sense real. For general matrices, the situation is more complex, since subspaces can interact in more complicated ways, but similar ideas go through.

7.2.2 Understanding and exploiting these structural conditions via approximate matrix multiplication

These lemmas say that if our sketching matrix X satisfies Condition I and Condition II, then we have relative-error approximation on both the solution vector/certificate and on the value of the objective at the optimum. There are a number of things we can do with this, and here we will focus on establishing a priori running time guarantees for any, i.e., worst-case, input. But before we get into the algorithmic details, we will outline how these structural conditions relate to our previous approximate matrix multiplication results, and how we will use the latter to prove our results.
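The vector-input aside above can be checked numerically. The following is a minimal NumPy sketch (ours, not from the notes), using nearly optimal probabilities p_k ∝ |x_k y_k| and c samples drawn with replacement; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, c = 10_000, 500  # vector dimension and number of samples

x = rng.standard_normal(n)

def sampled_inner_product(x, y, c, rng):
    # Nearly optimal probabilities p_k proportional to |x_k y_k|; the
    # estimator (1/c) * sum_t x_{k_t} y_{k_t} / p_{k_t} is unbiased for x^T y.
    p = np.abs(x * y)
    p /= p.sum()
    idx = rng.choice(len(x), size=c, p=p)
    return np.mean(x[idx] * y[idx] / p[idx])

# Aligned case y = x: every sampled term x_k^2 / p_k equals ||x||_2^2 exactly,
# so the estimate has zero variance -- a relative-error guarantee on ||x||_2^2.
est_aligned = sampled_inner_product(x, x, c, rng)

# Generic y: the error scale eps * ||x||_2 ||y||_2 is real, and it swamps
# x^T y itself when x and y are nearly orthogonal (the cancellation case).
y = rng.standard_normal(n)
est_generic = sampled_inner_product(x, y, c, rng)

print(est_aligned - x @ x)   # essentially zero, up to floating point
print(est_generic, x @ y)
```

This matches the discussion: the norms of the inputs set the real scale of the error, and the bound is tight only when there is no cancellation.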
The main point to note is that both Condition I and Condition II can be expressed as approximate matrix multiplications, and thus bounded by our approximate matrix multiplication results from a few classes ago. To see this, observe that a slightly stronger condition than Condition I is that

    |1 − σᵢ²(XU_A)| ≤ 1/√2, for all i,

and, since U_A is an n × d orthogonal matrix, with n ≫ d, we have that U_A^T U_A = I_d, and so this latter condition says that U_A^T U_A ≈ U_A^T X^T X U_A in the spectral norm, i.e., that

    ‖I − (XU_A)^T XU_A‖₂ ≤ 1/√2.

Similarly, since U_A^T b^⊥ = 0, where b^⊥ denotes the part of b lying outside the column space of A, Condition II says that 0 = U_A^T b^⊥ ≈ U_A^T X^T X b^⊥ with respect to the Frobenius norm, i.e., that

    ‖U_A^T X^T X b^⊥‖_F² ≤ ɛ Z²/2.
Of course, this is simply the Euclidean norm, since b^⊥ is simply a vector. For general matrices, for the Frobenius norm, the scale of the error on the right-hand side, i.e., the quantity that is multiplied by the ɛ, depends on the norms of the matrices entering into the product. But the norm of b^⊥ is simply the residual value Z, which sets the scale of the error and of the solution. And, for general matrices, for the spectral norm, there were quantities that depended on the spectral and Frobenius norms of the input matrices; but for orthogonal matrices like U_A, those are 1 or the low dimension d, and so they can be absorbed into the sampling complexity.

7.2.3 Bounds on approximate matrix multiplication when information about both matrices is unavailable

The situation with bounding the error incurred in the two structural conditions is actually somewhat more subtle than the previous discussion would imply. The reason is that, although we might have access to information such as the leverage scores (row norms of one matrix) that depend on U_A, we in general don't have access to any information about b (and thus its row norms). Nevertheless, an extension of our previous discussion still holds, and we will describe it now.

Observe that the nearly optimal probabilities

    p_k ≥ β ‖A^(k)‖₂ ‖B_(k)‖₂ / ( Σ_{k'=1}^{n} ‖A^(k')‖₂ ‖B_(k')‖₂ )

for approximating the product of two general matrices A and B use information from both matrices A and B in a very particular form. In some cases, such detailed information about both matrices may not be available. In particular, in some cases we will be interested in approximating the product of two different matrices, A and B, when only information about A (or, equivalently, only B) is available. Somewhat surprisingly, in this case we can still obtain partial bounds (i.e., of similar form, but with slightly weaker concentration) of the form we saw above. Here, we present results for the BasicMatrixMultiplication algorithm for two other sets of probabilities.
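Before stating the lemmas, here is a small NumPy sketch (ours, not from the notes) of the BasicMatrixMultiplication idea: sample c column/row pairs with given probabilities and rescale by 1/√(c p_k). It compares probabilities using information from A only, p_k = ‖A^(k)‖₂²/‖A‖_F², against uniform probabilities p_k = 1/n; the sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, pdim, c = 60, 4000, 50, 400

A = rng.standard_normal((m, n))
B = rng.standard_normal((n, pdim))

def basic_matrix_multiplication(A, B, probs, c, rng):
    """Sample c column/row index pairs with the given probabilities and
    rescale each pair by 1/sqrt(c * p_k), so that C R estimates A B."""
    idx = rng.choice(A.shape[1], size=c, p=probs)
    scale = 1.0 / np.sqrt(c * probs[idx])
    C = A[:, idx] * scale            # sampled, rescaled columns of A
    R = B[idx, :] * scale[:, None]   # corresponding rescaled rows of B
    return C @ R

# Probabilities using information from the matrix A only.
p_A = np.sum(A**2, axis=0) / np.sum(A**2)
err_A = np.linalg.norm(A @ B - basic_matrix_multiplication(A, B, p_A, c, rng))

# Uniform probabilities, for comparison.
p_unif = np.full(n, 1.0 / n)
err_unif = np.linalg.norm(A @ B - basic_matrix_multiplication(A, B, p_unif, c, rng))

# Expectation-bound scale for the A-only probabilities (with beta = 1).
bound = np.linalg.norm(A) * np.linalg.norm(B) / np.sqrt(c)
print(err_A, err_unif, bound)
```

For i.i.d. Gaussian inputs the column norms are nearly uniform, so the two choices behave similarly here; the extra factor of n for uniform sampling bites when the column norms are very nonuniform.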
In the first case, to estimate the product AB, one could use the probabilities (5), which use information from the matrix A only. In this case, ‖AB − CR‖_F can still be shown to be small in expectation; the proof of this lemma is similar to that of our theorem for the nearly-optimal probabilities from a few classes ago, except that the indicated probabilities are used.

Lemma 3. Suppose A ∈ R^{m×n}, B ∈ R^{n×p}, c ∈ Z⁺ such that 1 ≤ c ≤ n, and {p_i}_{i=1}^n are such that Σ_{i=1}^n p_i = 1 and such that

    p_k ≥ β ‖A^(k)‖₂² / ‖A‖_F²    (5)

for some positive constant β ≤ 1. Construct C and R with the BasicMatrixMultiplication algorithm, and let CR be an approximation to AB. Then:

    E[ ‖AB − CR‖_F ] ≤ (1/√(βc)) ‖A‖_F ‖B‖_F.    (6)

Following the analysis of our theorem for the nearly-optimal probabilities from a few classes ago, we can let M = max_α ‖B_(α)‖₂ / ‖A^(α)‖₂, let δ ∈ (0, 1), and let η = 1 + (‖A‖_F / ‖B‖_F) M √((8/β) log(1/δ)), in which case
it can be shown that, with probability at least 1 − δ:

    ‖AB − CR‖_F ≤ (η/√(βc)) ‖A‖_F ‖B‖_F.

Unfortunately, the assumption on M, which depends on the maximum ratio of two vector norms, is sufficiently awkward that this result is not useful. Nevertheless, we can still remove the expectation from Eqn. (6) with Markov's inequality, paying the factor of 1/δ, but without any awkward assumptions, assuming that we are willing to live with a result that holds with constant probability. This will be fine for several applications we will encounter, and when we use Lemma 3, this is how we will use it.

We should emphasize that for most probabilities, e.g., even simple probabilities that are proportional to (say) the Euclidean norm of the columns of A (as opposed to the norm-squared of the columns of A, or the product of the norms of the columns of A and the corresponding rows of B), we obtain much uglier and unusable expressions; e.g., we get awkward factors such as M above.

Lest the reader think that any sampling probabilities will yield interesting results, even in expectation, here are the analogous results if sampling is performed uniformly at random. Note that the scaling factor of n is much worse than anything we have seen so far, and it means that we would have to choose c to be larger than n to obtain nontrivial results, clearly defeating the point of random sampling in the first place.

Lemma 4. Suppose A ∈ R^{m×n}, B ∈ R^{n×p}, c ∈ Z⁺ such that 1 ≤ c ≤ n, and {p_i}_{i=1}^n are such that

    p_k = 1/n.    (7)

Construct C and R with the BasicMatrixMultiplication algorithm, and let CR be an approximation to AB. Then:

    E[ ‖AB − CR‖_F ] ≤ √(n/c) ( Σ_{k=1}^{n} ‖A^(k)‖₂² ‖B_(k)‖₂² )^{1/2}.    (8)

Furthermore, let δ ∈ (0, 1) and γ = (n/√c) √(8 log(1/δ)) max_α ‖A^(α)‖₂ ‖B_(α)‖₂; then, with probability at least 1 − δ:
    ‖AB − CR‖_F ≤ √(n/c) ( Σ_{k=1}^{n} ‖A^(k)‖₂² ‖B_(k)‖₂² )^{1/2} + γ.    (9)

7.3 Random sampling and random projection for LS approximation

So, to take advantage of the above two structural results and bound them with our matrix multiplication bounds, we need to perform the random sampling with respect to the so-called statistical leverage scores, which are defined as ‖U_(i)‖₂², where U_(i) is the i-th row of any orthogonal matrix for span(A). If we normalize them, then we get the leverage score probabilities:

    p_i = (1/d) ‖U_(i)‖₂².    (10)

These will be important for our subsequent discussion, and so there are several things we should note about them.
- Since U is an n × d orthogonal matrix, the normalization is just the lower dimension d, i.e., ‖U‖_F² = d.

- Although we have defined these scores in terms of a particular basis U, they don't depend on that particular basis; instead, they depend on A, or actually on span(A). To see this, let P_A = AA⁺ be the projection onto the span of A, and note that P_A = QRR⁻¹Q^T = QQ^T, where R is any square, non-singular, orthogonal transformation between orthogonal matrices for span(A). So, in particular, up to the scaling factor of 1/d, the leverage scores equal the diagonal elements of the projection matrix P_A:

      (P_A)_ii = (U_A U_A^T)_ii = ‖U_A(i)‖₂² = (Q_A Q_A^T)_ii = ‖Q_A(i)‖₂².

  Thus, they are equal to the diagonal elements of the so-called hat matrix.

- These are scores that quantify where in the high-dimensional space R^n the (singular value) information in A is being sent (independent of what that information is). They capture a notion of leverage or influence that the i-th constraint has on the LS fit.

- They can be very uniform or very nonuniform. E.g., if U_A consists of the identity stacked on zeros, i.e., U_A = [I; 0], then they are clearly very nonuniform; but if U_A consists of a small number of columns from a truncated Hadamard matrix or a dense Gaussian matrix, then they are uniform or nearly uniform.

With that in place, here we will present two algorithms that compute relative-error approximations to the LS problem. First, we start with a random sampling algorithm, given as Algorithm 1.

Algorithm 1: A slow random sampling algorithm for the LS problem.
  Input: an n × d matrix A, with n ≫ d, and an n-vector b.
  Output: a d-vector x̃_opt.
  1: Compute p_i = (1/d) ‖U_(i)‖₂², for all i ∈ [n], from the QR decomposition or the SVD.
  2: Randomly sample r = O(d log d / ɛ) rows of A and elements of b, rescaling each by 1/√(r p_{i_t}), i.e., form SA and Sb.
  3: Solve min_{x ∈ R^d} ‖SAx − Sb‖₂ with a black box to get x̃_opt.
  4: Return x̃_opt.

For this algorithm, one can prove the following theorem.
The idea of the proof is to combine the structural lemma with matrix multiplication bounds that show that, under appropriate assumptions on the size of the sample, etc., the two structural conditions are satisfied.

Theorem 1. Algorithm 1 returns a (1 ± ɛ)-approximation to the LS objective and an ɛ-approximation to the solution vector.

Next, we present a random projection algorithm, given as Algorithm 2. For this algorithm, one can prove the following theorem. As before, the idea of the proof is to combine the structural lemma with the random projection version of the matrix multiplication bounds
that are in the first homework, to show that, under appropriate assumptions on the size of the sample, etc., the two structural conditions are satisfied.

Algorithm 2: A slow random projection algorithm for the LS problem.
  Input: an n × d matrix A, with n ≫ d, and an n-vector b.
  Output: a d-vector x̃_opt.
  1: Let S be a random projection matrix consisting of scaled i.i.d. Gaussian, {±1}, etc., random variables.
  2: Randomly project onto r = O(d log d / ɛ) rows, i.e., linear combinations of the rows, of A and elements of b.
  3: Solve min_{x ∈ R^d} ‖SAx − Sb‖₂ with a black box to get x̃_opt.
  4: Return x̃_opt.

Theorem 2. Algorithm 2 returns a (1 ± ɛ)-approximation to the LS objective and an ɛ-approximation to the solution vector.

We are not going to go into the details of the proofs of these two theorems, basically since they parallel the proofs of the fast versions of these two results that we will discuss in the next few classes. But it is worth pointing out that you do get good quality-of-approximation bounds for the LS problem with these algorithms. The problem is the running time. Both of these algorithms take at least as long to run (at least in terms of worst-case FLOPS in the RAM model) as the time to solve the problem exactly with traditional deterministic algorithms, i.e., Θ(nd²) time. For Algorithm 1, the bottleneck in running time is the time to compute the leverage score importance sampling distribution exactly. For Algorithm 2, the bottleneck in running time is the time to implement the random projection, i.e., to do the matrix-matrix multiplication associated with the random projection; and since we are projecting onto roughly d log d dimensions, the running time is actually Ω(nd²). Thus, they are slow in the sense that they are slower than traditional algorithms, at least in terms of FLOPS in an idealized RAM model; but note that they may be, and in some cases are, faster on real machines, basically for communication reasons, and similarly they might be faster in parallel-distributed environments.
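A minimal NumPy sketch (ours, not from the notes) of Algorithm 2 with a scaled i.i.d. Gaussian S, also checking the stronger form of Condition I on the sketched basis; the sizes and the choice r = 500 are illustrative, not the theorem's O(d log d / ɛ) setting.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, r = 4000, 10, 500

A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.5 * rng.standard_normal(n)

x_opt = np.linalg.lstsq(A, b, rcond=None)[0]
f_opt = np.linalg.norm(b - A @ x_opt)

# Steps 1-2: a scaled i.i.d. Gaussian projection, applied to A and b.
S = rng.standard_normal((r, n)) / np.sqrt(r)
SA, Sb = S @ A, S @ b

# Step 3: solve the small r-by-d problem with a black-box solver.
x_tilde = np.linalg.lstsq(SA, Sb, rcond=None)[0]
f_tilde = np.linalg.norm(b - A @ x_tilde)   # evaluate on the ORIGINAL problem

# Condition I (stronger form): || I - (S U_A)^T (S U_A) ||_2 is small.
U = np.linalg.svd(A, full_matrices=False)[0]
dev = np.linalg.norm(np.eye(d) - (S @ U).T @ (S @ U), 2)
print(f_tilde / f_opt, dev)
```

The sketched solution's objective value is close to optimal, while forming S A by dense matrix-matrix multiplication costs Θ(nrd) FLOPS, which is the running-time bottleneck discussed above.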
In particular, the random projection is just matrix-matrix multiplication, and this can be faster than doing things like QR or the SVD, even if the FLOP count is the same. But we will focus on FLOPS, and so we want algorithms that run in o(nd²) time. We will use structured or Hadamard-based random projections, which can be implemented with Fast Fourier methods, so that the overall running time will be o(nd²). There will be two ways to do this: first, call a black box (the running-time bottleneck of which is a Hadamard-based random projection) to approximate the leverage scores, and use them as the importance sampling distribution; and second, do a Hadamard-based random projection to uniformize the leverage scores and sample uniformly. In the next few classes, we will get into these issues. Why random projections satisfy matrix multiplication bounds might be a bit of a mystery, partly since we have focused less on it, so we will get into the details of two related forms of the random projection. Also, the black box to approximate the leverage scores might be surprising, since it isn't obvious that they can be computed quickly, so we will get into that. All of the results we will describe will also hold for general random sampling with exact leverage scores and for general random projections, but we will get into the details for the fast versions, so that we can make running-time claims for the analogues of the fast sampling and projection versions of the above two algorithms.
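Finally, a minimal NumPy sketch (ours, not from the notes) of the leverage scores and of Algorithm 1: the scores are basis-independent (the SVD basis U and the QR basis Q give the same values), they sum to d, and sampling r rows with p_i = ‖U_(i)‖₂²/d gives a sketched solution whose objective value is close to optimal; sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n, d, r = 4000, 10, 500

A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.5 * rng.standard_normal(n)

# Leverage scores ||U_(i)||_2^2 from two different orthogonal bases for
# span(A); they agree, since both equal the diagonal of the hat matrix.
U = np.linalg.svd(A, full_matrices=False)[0]
Q = np.linalg.qr(A)[0]
lev_svd = np.sum(U**2, axis=1)
lev_qr = np.sum(Q**2, axis=1)

# Algorithm 1: sample r rows with p_i = lev_i / d, rescale by 1/sqrt(r p_i),
# and solve the sampled problem with a black-box LS solver.
p = lev_svd / d
idx = rng.choice(n, size=r, p=p)
scale = 1.0 / np.sqrt(r * p[idx])
x_tilde = np.linalg.lstsq(scale[:, None] * A[idx], scale * b[idx], rcond=None)[0]

x_opt = np.linalg.lstsq(A, b, rcond=None)[0]
f_opt = np.linalg.norm(b - A @ x_opt)
f_tilde = np.linalg.norm(b - A @ x_tilde)
print(np.allclose(lev_svd, lev_qr), f_tilde / f_opt)
```

Note that computing U (or Q) exactly already costs Θ(nd²), which is exactly the running-time bottleneck for Algorithm 1 that the fast approximation of the leverage scores, discussed above, is meant to remove.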
Matries and Vetors: Leture Solving a sstem of linear equations Let be a matri, X a olumn vetor, B a olumn vetor then the sstem of linear equations is denoted b XB. The augmented matri The solution to a
More informationModes are solutions, of Maxwell s equation applied to a specific device.
Mirowave Integrated Ciruits Prof. Jayanta Mukherjee Department of Eletrial Engineering Indian Institute of Tehnology, Bombay Mod 01, Le 06 Mirowave omponents Welome to another module of this NPTEL mok
More informationDetermination of the reaction order
5/7/07 A quote of the wee (or amel of the wee): Apply yourself. Get all the eduation you an, but then... do something. Don't just stand there, mae it happen. Lee Iaoa Physial Chemistry GTM/5 reation order
More informationSPLINE ESTIMATION OF SINGLE-INDEX MODELS
SPLINE ESIMAION OF SINGLE-INDEX MODELS Li Wang and Lijian Yang University of Georgia and Mihigan State University Supplementary Material his note ontains proofs for the main results he following two propositions
More informationarxiv: v2 [math.pr] 9 Dec 2016
Omnithermal Perfet Simulation for Multi-server Queues Stephen B. Connor 3th Deember 206 arxiv:60.0602v2 [math.pr] 9 De 206 Abstrat A number of perfet simulation algorithms for multi-server First Come First
More informationViewing the Rings of a Tree: Minimum Distortion Embeddings into Trees
Viewing the Rings of a Tree: Minimum Distortion Embeddings into Trees Amir Nayyeri Benjamin Raihel Abstrat We desribe a 1+ε) approximation algorithm for finding the minimum distortion embedding of an n-point
More informationMeasuring & Inducing Neural Activity Using Extracellular Fields I: Inverse systems approach
Measuring & Induing Neural Ativity Using Extraellular Fields I: Inverse systems approah Keith Dillon Department of Eletrial and Computer Engineering University of California San Diego 9500 Gilman Dr. La
More informationA variant of Coppersmith s Algorithm with Improved Complexity and Efficient Exhaustive Search
A variant of Coppersmith s Algorithm with Improved Complexity and Effiient Exhaustive Searh Jean-Sébastien Coron 1, Jean-Charles Faugère 2, Guénaël Renault 2, and Rina Zeitoun 2,3 1 University of Luxembourg
More informationChapter 2. Conditional Probability
Chapter. Conditional Probability The probabilities assigned to various events depend on what is known about the experimental situation when the assignment is made. For a partiular event A, we have used
More informationSupporting Information
Supporting Information Olsman and Goentoro 10.1073/pnas.1601791113 SI Materials Analysis of the Sensitivity and Error Funtions. We now define the sensitivity funtion Sð, «0 Þ, whih summarizes the steepness
More informationAn Adaptive Optimization Approach to Active Cancellation of Repeated Transient Vibration Disturbances
An aptive Optimization Approah to Ative Canellation of Repeated Transient Vibration Disturbanes David L. Bowen RH Lyon Corp / Aenteh, 33 Moulton St., Cambridge, MA 138, U.S.A., owen@lyonorp.om J. Gregory
More informationWave Propagation through Random Media
Chapter 3. Wave Propagation through Random Media 3. Charateristis of Wave Behavior Sound propagation through random media is the entral part of this investigation. This hapter presents a frame of referene
More informationn n=1 (air) n 1 sin 2 r =
Physis 55 Fall 7 Homework Assignment #11 Solutions Textbook problems: Ch. 7: 7.3, 7.4, 7.6, 7.8 7.3 Two plane semi-infinite slabs of the same uniform, isotropi, nonpermeable, lossless dieletri with index
More informationScalable system level synthesis for virtually localizable systems
Salable system level synthesis for virtually loalizable systems Nikolai Matni, Yuh-Shyang Wang and James Anderson Abstrat In previous work, we developed the system level approah to ontroller synthesis,
More informationFINITE WORD LENGTH EFFECTS IN DSP
FINITE WORD LENGTH EFFECTS IN DSP PREPARED BY GUIDED BY Snehal Gor Dr. Srianth T. ABSTRACT We now that omputers store numbers not with infinite preision but rather in some approximation that an be paed
More informationELECTROMAGNETIC WAVES
ELECTROMAGNETIC WAVES Now we will study eletromagneti waves in vauum or inside a medium, a dieletri. (A metalli system an also be represented as a dieletri but is more ompliated due to damping or attenuation
More informationThe Laws of Acceleration
The Laws of Aeleration The Relationships between Time, Veloity, and Rate of Aeleration Copyright 2001 Joseph A. Rybzyk Abstrat Presented is a theory in fundamental theoretial physis that establishes the
More informationAn I-Vector Backend for Speaker Verification
An I-Vetor Bakend for Speaker Verifiation Patrik Kenny, 1 Themos Stafylakis, 1 Jahangir Alam, 1 and Marel Kokmann 2 1 CRIM, Canada, {patrik.kenny, themos.stafylakis, jahangir.alam}@rim.a 2 VoieTrust, Canada,
More informationc-perfect Hashing Schemes for Binary Trees, with Applications to Parallel Memories
-Perfet Hashing Shemes for Binary Trees, with Appliations to Parallel Memories (Extended Abstrat Gennaro Cordaso 1, Alberto Negro 1, Vittorio Sarano 1, and Arnold L.Rosenberg 2 1 Dipartimento di Informatia
More informationMAC Calculus II Summer All you need to know on partial fractions and more
MC -75-Calulus II Summer 00 ll you need to know on partial frations and more What are partial frations? following forms:.... where, α are onstants. Partial frations are frations of one of the + α, ( +
More informationEE 321 Project Spring 2018
EE 21 Projet Spring 2018 This ourse projet is intended to be an individual effort projet. The student is required to omplete the work individually, without help from anyone else. (The student may, however,
More informationF = F x x + F y. y + F z
ECTION 6: etor Calulus MATH20411 You met vetors in the first year. etor alulus is essentially alulus on vetors. We will need to differentiate vetors and perform integrals involving vetors. In partiular,
More informationBilinear Formulated Multiple Kernel Learning for Multi-class Classification Problem
Bilinear Formulated Multiple Kernel Learning for Multi-lass Classifiation Problem Takumi Kobayashi and Nobuyuki Otsu National Institute of Advaned Industrial Siene and Tehnology, -- Umezono, Tsukuba, Japan
More informationProperties of Quarks
PHY04 Partile Physis 9 Dr C N Booth Properties of Quarks In the earlier part of this ourse, we have disussed three families of leptons but prinipally onentrated on one doublet of quarks, the u and d. We
More informationCSC2515 Winter 2015 Introduc3on to Machine Learning. Lecture 5: Clustering, mixture models, and EM
CSC2515 Winter 2015 Introdu3on to Mahine Learning Leture 5: Clustering, mixture models, and EM All leture slides will be available as.pdf on the ourse website: http://www.s.toronto.edu/~urtasun/ourses/csc2515/
More informationMOLECULAR ORBITAL THEORY- PART I
5.6 Physial Chemistry Leture #24-25 MOLECULAR ORBITAL THEORY- PART I At this point, we have nearly ompleted our rash-ourse introdution to quantum mehanis and we re finally ready to deal with moleules.
More informationSearching All Approximate Covers and Their Distance using Finite Automata
Searhing All Approximate Covers and Their Distane using Finite Automata Ondřej Guth, Bořivoj Melihar, and Miroslav Balík České vysoké učení tehniké v Praze, Praha, CZ, {gutho1,melihar,alikm}@fel.vut.z
More informationAverage Rate Speed Scaling
Average Rate Speed Saling Nikhil Bansal David P. Bunde Ho-Leung Chan Kirk Pruhs May 2, 2008 Abstrat Speed saling is a power management tehnique that involves dynamially hanging the speed of a proessor.
More informationOrdered fields and the ultrafilter theorem
F U N D A M E N T A MATHEMATICAE 59 (999) Ordered fields and the ultrafilter theorem by R. B e r r (Dortmund), F. D e l o n (Paris) and J. S h m i d (Dortmund) Abstrat. We prove that on the basis of ZF
More informationOn the Bit Error Probability of Noisy Channel Networks With Intermediate Node Encoding I. INTRODUCTION
5188 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 11, NOVEMBER 2008 [8] A. P. Dempster, N. M. Laird, and D. B. Rubin, Maximum likelihood estimation from inomplete data via the EM algorithm, J.
More informationQUANTUM MECHANICS II PHYS 517. Solutions to Problem Set # 1
QUANTUM MECHANICS II PHYS 57 Solutions to Problem Set #. The hamiltonian for a lassial harmoni osillator an be written in many different forms, suh as use ω = k/m H = p m + kx H = P + Q hω a. Find a anonial
More informationNormative and descriptive approaches to multiattribute decision making
De. 009, Volume 8, No. (Serial No.78) China-USA Business Review, ISSN 57-54, USA Normative and desriptive approahes to multiattribute deision making Milan Terek (Department of Statistis, University of
More informationHOW TO FACTOR. Next you reason that if it factors, then the factorization will look something like,
HOW TO FACTOR ax bx I now want to talk a bit about how to fator ax bx where all the oeffiients a, b, and are integers. The method that most people are taught these days in high shool (assuming you go to
More information1 sin 2 r = 1 n 2 sin 2 i
Physis 505 Fall 005 Homework Assignment #11 Solutions Textbook problems: Ch. 7: 7.3, 7.5, 7.8, 7.16 7.3 Two plane semi-infinite slabs of the same uniform, isotropi, nonpermeable, lossless dieletri with
More informationSubject: Introduction to Component Matching and Off-Design Operation % % ( (1) R T % (
16.50 Leture 0 Subjet: Introdution to Component Mathing and Off-Design Operation At this point it is well to reflet on whih of the many parameters we have introdued (like M, τ, τ t, ϑ t, f, et.) are free
More informationArithmetic Circuits. Comp 120, Spring 05 2/10 Lecture. Today s BIG Picture Reading: Study Chapter 3. (Chapter 4 in old book)
omp 2, pring 5 2/ Leture page Arithmeti iruits Didn t I learn how to do addition in the seond grade? UN ourses aren t what they used to be... + Finally; time to build some serious funtional bloks We ll
More informationSolutions for Math 225 Assignment #2 1
Solutions for Math 225 Assignment #2 () Determine whether W is a subspae of V and justify your answer: (a) V = R 3, W = {(a,, a) : a R} Proof Yes For a =, (a,, a) = (,, ) W For all (a,, a ), (a 2,, a 2
More informationVelocity Addition in Space/Time David Barwacz 4/23/
Veloity Addition in Spae/Time 003 David arwaz 4/3/003 daveb@triton.net http://members.triton.net/daveb Abstrat Using the spae/time geometry developed in the previous paper ( Non-orthogonal Spae- Time geometry,
More informationConformal Mapping among Orthogonal, Symmetric, and Skew-Symmetric Matrices
AAS 03-190 Conformal Mapping among Orthogonal, Symmetri, and Skew-Symmetri Matries Daniele Mortari Department of Aerospae Engineering, Texas A&M University, College Station, TX 77843-3141 Abstrat This
More informationWeighted K-Nearest Neighbor Revisited
Weighted -Nearest Neighbor Revisited M. Biego University of Verona Verona, Italy Email: manuele.biego@univr.it M. Loog Delft University of Tehnology Delft, The Netherlands Email: m.loog@tudelft.nl Abstrat
More informationFrequency hopping does not increase anti-jamming resilience of wireless channels
Frequeny hopping does not inrease anti-jamming resiliene of wireless hannels Moritz Wiese and Panos Papadimitratos Networed Systems Seurity Group KTH Royal Institute of Tehnology, Stoholm, Sweden {moritzw,
More informationProbabilistic and nondeterministic aspects of Anonymity 1
MFPS XX1 Preliminary Version Probabilisti and nondeterministi aspets of Anonymity 1 Catusia Palamidessi 2 INRIA and LIX Éole Polytehnique, Rue de Salay, 91128 Palaiseau Cedex, FRANCE Abstrat Anonymity
More information