CE 108 HOMEWORK 4

EXERCISE 1. Suppose you are sampling the output of a sensor at 10 kHz and quantize it with a uniform quantizer at 10 bits per sample. Assume that the marginal pdf of the signal is Gaussian with mean 0 Volts and variance 4 Volts².

What is the bit rate of the quantized signal? It is $10\,\mathrm{kHz} \times 10\ \mathrm{bits} = 100$ kbits/s.

What would be a reasonable choice of the quantization step Δ? For example, we could choose $X_m = 4\sigma_x = 8$ Volts. Then, $\Delta = 2X_m/2^b = 16/1024 \approx 0.0156$ Volts.

What is the power of the quantization error? (Assume that the high rate hypothesis holds.) $\sigma_e^2 = \Delta^2/12 \approx 2.0\times 10^{-5}$ Volts².

What is the resulting quantization SNR? $\mathrm{SNR} = 10\log_{10}(\sigma_x^2/\sigma_e^2) \approx 52.9$ dB.

With your choice of Δ, what is the probability that a sample is in the overload zone? This is $P(|x| > 4\sigma_x)$. For a Gaussian random variable this can be computed using the error function and is equal to $\mathrm{erfc}(4/\sqrt{2}) \approx 6.3\times 10^{-5}$.
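As a quick numerical check of the figures above, here is a minimal Python sketch (the variable names are mine); it recomputes the bit rate, step size, error power, SNR, and overload probability for the assumed Gaussian signal.

```python
import math

fs = 10e3            # sampling rate [Hz]
b = 10               # bits per sample
sigma_x = 2.0        # signal standard deviation [V] (variance 4 V^2)
X_m = 4 * sigma_x    # chosen overload point [V]

bit_rate = fs * b                            # 100 kbits/s
delta = 2 * X_m / 2 ** b                     # quantization step
err_power = delta ** 2 / 12                  # high-rate granular error power
snr_db = 10 * math.log10(sigma_x ** 2 / err_power)
p_overload = math.erfc(X_m / (sigma_x * math.sqrt(2)))   # P(|x| > X_m)

print(bit_rate, delta, err_power, snr_db, p_overload)
# -> bit rate 100000 bit/s, delta ~ 0.0156 V, error power ~ 2.0e-5 V^2,
#    SNR ~ 52.9 dB, overload probability ~ 6.3e-5
```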
EXERCISE 2. Consider a random variable x with pdf uniform in [-1, 1]. Suppose you perform quantization with 3 bits using a midrise quantizer. Compute the theoretical variance of the quantization noise, divided into granular and overload zone, choosing $X_m = 0.5$, $X_m = 1$, and $X_m = 2$. Additionally, compute the resulting quantization SNR.

Consider first the case $X_m = 1$. In this case, the probability of being in the overload zone is 0, hence the error is only due to the granular zone. In this case,
$$\sigma_e^2 = 2\cdot 4\cdot\frac{1}{2}\int_{-\Delta/2}^{\Delta/2} x^2\,dx = \frac{4}{3}\left[\left(\frac{\Delta}{2}\right)^3 + \left(\frac{\Delta}{2}\right)^3\right] = \frac{\Delta^3}{3}.$$
Since $\Delta = 2X_m/2^3 = 1/4$, we obtain $\sigma_e^2 = 1/192 \approx 0.0052$. Since the variance of the signal is $\sigma_x^2 = 2^2/12 = 1/3$, we have
$$\mathrm{SNR} = 10\log_{10}\frac{\sigma_x^2}{\sigma_e^2} = 10\log_{10} 64 \approx 18.1\ \mathrm{dB}.$$

[Figure: pdf $f_x(x)$ and quantization error $e(x)$ for $X_m = 1$.]

For the case $X_m = 0.5$ we will need to also consider the overload zone. Since $\Delta = 2X_m/2^3 = 1/8$, the granular zone gives an error of $\sigma_{e,gr}^2 = \Delta^3/3 = 1/1536 \approx 0.00065$. The error in the overload zone can be computed as
$$\sigma_{e,ol}^2 = 2\int_{1/2}^{1}\left(x - \left(X_m - \tfrac{\Delta}{2}\right)\right)^2\frac{1}{2}\,dx = \int_{1/2}^{1}\left(x - \tfrac{7}{16}\right)^2 dx = \frac{1}{3}\left[\left(\tfrac{9}{16}\right)^3 - \left(\tfrac{1}{16}\right)^3\right] \approx 0.0592.$$
Overall, the error variance is almost 0.06, which is much larger (more than 10 times) than before. This is due to the overwhelming overload zone. The SNR is equal to 7.4 dB.

[Figure: pdf $f_x(x)$ and quantization error $e(x)$ for $X_m = 0.5$.]

In the case $X_m = 2$, we have $\Delta = 2X_m/2^3 = 1/2$. Hence, for $x > 0$, only 2 quantization levels are in the area where the variable has non-null probability. There is no overload error, but we expect a larger granular error. We need to modify the equation for the error variance as follows:
$$\sigma_e^2 = 2\cdot 2\cdot\frac{1}{2}\int_{-\Delta/2}^{\Delta/2} x^2\,dx = \frac{\Delta^3}{6} = \frac{1}{48} \approx 0.021,$$
which is about 4 times larger than in the case $X_m = 1$. Now, $\mathrm{SNR} = 10\log_{10} 16 \approx 12$ dB.
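The three error variances can also be checked numerically. The sketch below (the midrise helper and the dense grid standing in for the uniform pdf are mine) reproduces the granular-plus-overload figures for the three choices of $X_m$.

```python
import numpy as np

# Numerical check of Exercise 2: 3-bit midrise quantizer applied to a
# variable uniform in [-1, 1], for X_m = 0.5, 1, 2.
def midrise(x, X_m, bits=3):
    delta = 2 * X_m / 2 ** bits
    q = (np.floor(x / delta) + 0.5) * delta                  # midrise reconstruction levels
    return np.clip(q, -(X_m - delta / 2), X_m - delta / 2)   # saturate in the overload zone

x = np.linspace(-1, 1, 2_000_001)   # dense grid over the support of the uniform pdf
for X_m in (0.5, 1.0, 2.0):
    e2 = (x - midrise(x, X_m)) ** 2
    print(X_m, e2.mean())           # E[e^2]: the pdf is uniform, so a plain average works
# -> error variances of about 0.060, 0.0052, 0.021 respectively
```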
[Figure: pdf $f_x(x)$ and quantization error $e(x)$ for $X_m = 2$.]

EXERCISE 3. Consider a signal with non-uniform marginal distribution, which we need to quantize with 8 bits. Suppose that the optimal quantization thresholds are (for $x > 0$) $b_i = 0.01\cdot 2^i$ (assume that the pdf of the signal is symmetric). Find a companding function g(x) such that the companded signal can be quantized using a uniform quantizer.

This would be any monotone function such that (for $x > 0$) $g(b_i) = g(b_1) + (i-1)\Delta$ for any choice of Δ. For example, $g(x) = \log_2(|x|/0.01)\,\mathrm{sgn}(x)$ (with $g(0) = 0$) would do the trick with $\Delta = 1$.
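As an illustration of the companding idea, the following sketch (assuming the thresholds $b_i = 0.01\cdot 2^i$ stated above) shows the first few thresholds being mapped by g to uniformly spaced values, so a uniform quantizer with Δ = 1 then suffices.

```python
import math

def g(x):
    """Log companding law from Exercise 3; g(0) is defined as 0."""
    if x == 0:
        return 0.0
    return math.copysign(math.log2(abs(x) / 0.01), x)

thresholds = [0.01 * 2 ** i for i in range(1, 9)]   # assumed non-uniform thresholds b_i
companded = [round(g(b), 6) for b in thresholds]
print(companded)   # -> [1.0, 2.0, ..., 8.0] (up to rounding): uniformly spaced with step 1
```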
EXERCISE 4. (GRADS) Prove that, under the hypothesis of high rate, the optimal choice of the value $y_i$ for the interval $[b_{i-1}, b_i]$ is the midpoint $(b_{i-1} + b_i)/2$.

The optimal value of $y_i$ is
$$y_i = \frac{\int_{b_{i-1}}^{b_i} x f_x(x)\,dx}{\int_{b_{i-1}}^{b_i} f_x(x)\,dx}.$$
In the high rate case we assume that $f_x(x)$ is constant within $[b_{i-1}, b_i]$. Let $\bar f$ be such constant value. Then,
$$y_i = \frac{\int_{b_{i-1}}^{b_i} x \bar f\,dx}{\int_{b_{i-1}}^{b_i} \bar f\,dx} = \frac{\int_{b_{i-1}}^{b_i} x\,dx}{\int_{b_{i-1}}^{b_i} dx} = \frac{(b_i^2 - b_{i-1}^2)/2}{b_i - b_{i-1}} = \frac{b_{i-1} + b_i}{2}.$$

Prove that for an optimal quantizer, the quantization error has mean equal to 0.

Since each $y_i$ is optimal, it satisfies $y_i \int_{b_{i-1}}^{b_i} f_x(x)\,dx = \int_{b_{i-1}}^{b_i} x f_x(x)\,dx$. Then
$$E[e] = E[x - Q(x)] = \sum_i \int_{b_{i-1}}^{b_i} (x - y_i) f_x(x)\,dx = \sum_i \left[\int_{b_{i-1}}^{b_i} x f_x(x)\,dx - y_i \int_{b_{i-1}}^{b_i} f_x(x)\,dx\right] = 0.$$

EXERCISE 5. (GRADS) Consider a variable x with the following triangular pdf:
$$f_x(x) = \begin{cases} 1 - |x|, & |x| \le 1 \\ 0, & |x| > 1. \end{cases}$$
Find a companding function g(x) that transforms x into a uniform random variable.

Let $z = g(x)$. Then $f_z(g(x)) = f_x(x)/|g'(x)|$. We want $f_z(z)$ to be uniform (constant) for all points z such that $|g^{-1}(z)| \le 1$. In other words, within this interval it must be $g'(x) = f_x(x)/C$, where C is a constant. By integration, and forcing $g(0) = 0$ and $C = 1$, we obtain for $-1 < x < 1$: $g(x) = \left(|x| - \frac{x^2}{2}\right)\mathrm{sgn}(x)$, which gives a variable $z = g(x)$ uniform in $[-0.5, 0.5]$.
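A quick simulation can confirm that g indeed uniformizes the triangular variable; the sketch below (sample size and names are mine) compares the histogram of z = g(x) with a flat density.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.triangular(-1, 0, 1, size=200_000)     # triangular pdf f_x(x) = 1 - |x| on [-1, 1]
z = (np.abs(x) - x ** 2 / 2) * np.sign(x)      # companded variable z = g(x)

hist, _ = np.histogram(z, bins=10, range=(-0.5, 0.5), density=True)
print(hist.round(2))   # each bin density should be close to 1, i.e. uniform on [-0.5, 0.5]
```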
Suppose you quantize the transformed variable with a uniform quantizer with 3 bits with no overload zone. What are the corresponding (non-uniform) quantization thresholds $b_{i,x}$ for the original variable?

We need to find the inverse of g(x): for $-1 < x < 1$, $g^{-1}(z) = \left(1 - \sqrt{1 - 2|z|}\right)\mathrm{sgn}(z)$. For $z > 0$, the quantization thresholds are $b_{i,z} = i\cdot 2^{-3}$, and the corresponding thresholds for x are thus: $b_{1,x} \approx 0.134$; $b_{2,x} \approx 0.29$; $b_{3,x} = 0.5$; $b_{4,x} = 1$.

EXERCISE 6. Consider a signal x(n), sampled at F = 100 Hz, and suppose you quantize it using (1) scalar quantization (4 bits per sample) and (2) vector quantization (quantizing vectors of 2 samples and assigning 8 bits per vector).

1. Compute the bit rate in the two cases. It is the same (400 bits/s).

2. Prove that the expected quadratic quantization error using vector quantization cannot be higher than in the scalar quantization case (assuming that the scalar and the vector quantizer are optimal).

It is because, given an optimal scalar quantizer with interval set $B = \{b_i\}$, you can always construct an equivalent separable vector quantizer, defined by $B \times B$ (i.e., with assignment regions of the type $[b_{i-1}, b_i] \times [b_{j-1}, b_j]$). Hence, the error of the optimal vector quantizer is at most as large as the error of this separable vector quantizer, which is identical to the error of the scalar quantizer.

EXERCISE 7. Consider a 2-D vector quantizer with $y_1 = (1,2)$, $y_2 = (1,4)$, $y_3 = (-1,2)$, $y_4 = (0,-2)$.

1. Show with a graph the optimal assignment regions $\{V_i\}$.

[Figure: Voronoi assignment regions of the codevectors $y_1, \dots, y_4$.]

2. Quantize and compute the empirical quadratic error for the following signal, assuming that you quantize groups of two samples at a time: x = {-4, -3, -2, -1, 0, 1, 2, 3, 4, 5}.

[-4, -3] → $y_4 = (0,-2)$ ($e^2 = 17$)
[-2, -1] → $y_4 = (0,-2)$ ($e^2 = 5$)
[0, 1] → $y_1 = (1,2)$ ($e^2 = 2$) (same error is obtained with $y_3$)
[2, 3] → $y_1 = (1,2)$ ($e^2 = 2$) (same error is obtained with $y_2$)
[4, 5] → $y_2 = (1,4)$ ($e^2 = 10$)
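The same assignments and squared errors can be reproduced with a small nearest-neighbour routine; the sketch below (names are mine) follows the table above.

```python
import numpy as np

codebook = np.array([[1, 2], [1, 4], [-1, 2], [0, -2]])           # y_1 ... y_4
x = np.array([-4, -3, -2, -1, 0, 1, 2, 3, 4, 5]).reshape(-1, 2)   # groups of two samples

total = 0
for v in x:
    d2 = np.sum((codebook - v) ** 2, axis=1)   # squared distance to each codevector
    i = int(np.argmin(d2))                     # nearest-neighbour assignment (ties -> lowest index)
    total += d2[i]
    print(v, "->", codebook[i], "e^2 =", int(d2[i]))
print("total squared error:", int(total))      # 17 + 5 + 2 + 2 + 10 = 36
```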
EXERCISE 8. (GRADS)

1. Prove that at each step of the LBG algorithm to design a vector quantizer the expected quadratic norm of the error
$$E[\|e\|^2] = \sum_i \int_{V_i} \|x - y_i\|^2 f_x(x)\,dx$$
can never increase. (Remember that the LBG algorithm can be used when the joint pdf $f_x(x)$ of the signal is known.)

At each step of the LBG algorithm, we minimize the expected quadratic error, either over the set of $\{V_i\}$ (keeping the $\{y_i\}$ constant) or over the set of $\{y_i\}$ (keeping the $\{V_i\}$ constant). Obviously, the error can never increase. E.g., suppose that at a certain point we have chosen a certain set $\{V_i\}$ and a certain set $\{y_i\}$, which gives an expected quadratic error of $\varepsilon$. Now we find the $\{y_i\}$ that minimize the expected quadratic error for fixed $\{V_i\}$. The error cannot be larger than $\varepsilon$: otherwise, we may just keep the previous $\{y_i\}$!

2. Prove that at each step of the k-means algorithm to design a vector quantizer, the sample mean of the quadratic norm of the error, $\sum_i \sum_{x_k \in V_i} \|x_k - y_i\|^2$, can never increase. (In this case, we start from a training sample $\{x_k\}$.)

Same as before, only that now, at each step of the algorithm, we minimize $\sum_i \sum_{x_k \in V_i} \|x_k - y_i\|^2$, either over $\{V_i\}$ or over $\{y_i\}$.
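The argument can be watched in action with a toy k-means run; in the sketch below (the synthetic training data and the names are mine) the distortion is printed after each half-step and never increases, since each half-step minimizes it with the other set of variables held fixed.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(500, 2))                             # training sample {x_k}
y = data[rng.choice(len(data), 4, replace=False)].copy()     # initial codevectors {y_i}

def distortion(data, y, labels):
    """Total squared error of the training sample for the current assignment."""
    return np.sum((data - y[labels]) ** 2)

for step in range(10):
    d2 = ((data[:, None, :] - y[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)                 # optimal regions {V_i} for fixed {y_i}
    print(step, "after assignment:", distortion(data, y, labels))
    for i in range(len(y)):                    # optimal {y_i} (centroids) for fixed {V_i}
        if np.any(labels == i):
            y[i] = data[labels == i].mean(axis=0)
    print(step, "after centroid update:", distortion(data, y, labels))
```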
EXERCISE 9. (GRADS - OPTIONAL) We want to design a quantizer with 2 bits for an exponential random variable with mean 2 ($\lambda = 1/2$), such that $b_0 = 0$ and $b_4 = \infty$. Given the following choice of the $b_i$'s: $[0,\ 0.2,\ 0.6,\ 0.8,\ \infty]$, find the optimal choice of the $y_i$'s.

$$y_i = \frac{\int_{b_{i-1}}^{b_i} x\,\lambda e^{-\lambda x}\,dx}{\int_{b_{i-1}}^{b_i} \lambda e^{-\lambda x}\,dx} \overset{\text{(integration by parts)}}{=} b_{i-1} + \frac{1}{\lambda} - \frac{(b_i - b_{i-1})\,e^{-\lambda(b_i - b_{i-1})}}{1 - e^{-\lambda(b_i - b_{i-1})}}.$$

Hence, $y = [0.0983,\ 0.3933,\ 0.6983,\ 2.8000]$ (for the last, unbounded interval the centroid is simply $b_3 + 1/\lambda$).

Given the set of $y_i$'s given by your answer, find the optimal set of $b_i$'s: $b_i = (y_i + y_{i+1})/2$ (except for $b_0$ and $b_4$, which are fixed). Hence, $b = [0,\ 0.2458,\ 0.5458,\ 1.7492,\ \infty]$.

Now iterate, alternating between the design of the $y_i$'s and of the $b_i$'s, till convergence. This is the generalized Lloyd method for optimal quantizer design. Iterating, I obtained the following optimal values:

b: $[0,\ 1.508,\ 3.543,\ 6.731,\ \infty]$
y: $[0.660,\ 2.356,\ 4.731,\ 8.731]$
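For completeness, here is a sketch of the iteration (assuming, as read from the numbers above, an exponential variable with mean 2 and an unbounded last interval); it reproduces the initial $y_i$'s on the first pass and the converged values after many passes.

```python
import math

lam = 0.5                                   # assumed rate (mean 2)
b = [0.0, 0.2, 0.6, 0.8, math.inf]          # initial thresholds; b_0 and b_4 stay fixed

def centroid(a, c):
    """Conditional mean of the exponential variable on [a, c)."""
    if math.isinf(c):
        return a + 1 / lam
    w = c - a
    return a + 1 / lam - w * math.exp(-lam * w) / (1 - math.exp(-lam * w))

for it in range(200):                       # alternate y-update and b-update until convergence
    y = [centroid(b[i], b[i + 1]) for i in range(4)]
    b = [0.0] + [(y[i] + y[i + 1]) / 2 for i in range(3)] + [math.inf]

print([round(v, 3) for v in y])   # ~ [0.660, 2.356, 4.731, 8.731]
print([round(v, 3) for v in b])   # ~ [0, 1.508, 3.543, 6.731, inf]
```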