Monte Carlo Simulation and Generation of Random Numbers

S-72.333 Postgraduate Course in Radiocommunications, Spring 2000
Monte Carlo Simulation and Generation of Random Numbers
Dmitri Foursov, Dmitri.Foursov@nokia.com

Contents

1. Introduction
2. Principle of Monte Carlo Simulation
3. Random Number Generation
3.1. Generation of Uniform Random Numbers
3.2. Methods of Generating Random Numbers from an Arbitrary pdf
3.3. Generating Gaussian Random Variables
3.3.1. Sum-of-12 Method
3.3.2. Box-Muller Method
3.4. Generating Independent Random Sequences
3.4.1. White Gaussian Noise
3.4.2. Random Binary Sequences and Random Binary Waveforms
3.4.3. Pseudorandom Binary Sequences
3.4.4. M-ary Pseudonoise Sequences
4. Generation of Correlated Random Sequences
4.1. Correlated Gaussian Sequences
4.1.1. Autoregressive and Moving Average (ARMA) Models
4.1.2. Spectral Factorization Method
5. Testing of Random Number Generators
5.1. Stationarity and Uncorrelatedness
6. Summary

1. Introduction

Simulation of a communication system requires the generation of sampled values of all the input waveforms, processing these samples through models of the functional blocks in the system, and estimating system performance from the output samples at various points in the model. The focus of this presentation is on how to generate sampled values of these input waveforms.

2. Principle of Monte Carlo Simulation

Mechanical random number generators were first used to simulate games of chance, and simulation techniques using random number generators bear the name of the city of Monte Carlo, the home of the world-famous casino. While there are many variations of the Monte Carlo technique, it basically involves the simulation of a random experiment by artificial means, i.e., without fully repeating the underlying physical experiment. In the context of estimating the bit error rate in a communication system, we can define Monte Carlo simulation as follows. With respect to Fig. 1, we are given a model of a communication system and a description of the input signals U(t), V(t), and W(t), which are assumed to be random processes. Our objective is to find the statistical properties of Y(t) or the expected value of some function g(Y(t)) of Y(t). If we solve this problem by emulating the system, including the time evolution of all the waveforms in the system, then we have a pure Monte Carlo simulation. This

implies generating sampled values of all the input processes, letting the models of the functional blocks in the communication system operate on them, and observing the output waveforms. Ideally, the Monte Carlo simulation is in one-to-one correspondence with the real system, within the limits of the modeling assumptions and approximations.

Fig. 1. Definition of Monte Carlo simulation: a model of a communication system with random input processes U(t), V(t), W(t) and output Y(t).

The expected value E{g(Y(t))} is estimated from the MC simulation according to

E\{g(\hat{Y}(t))\} = \frac{1}{N} \sum_{i=1}^{N} g(Y(i))

where the caret indicates the estimated value and N is the number of samples simulated. In the context of the specific example shown in the figure, Monte Carlo simulation for estimating the bit error rate in a digital communication system involves the following steps (a small code sketch follows the list):

1. Generate sampled values of the input bit sequence {A(k)}, k = 1, 2, ..., N, and noise samples {N(j)}, j = 1, 2, ..., mN (the sampling rate is m samples per bit).
2. Process these samples through the models of the functional blocks and generate the output sequence Y(k).
3. Estimate E(g(Y(k))) as

\hat{P}_e = \frac{1}{N} \sum_{k=1}^{N} g(Y(k))

where g(Y(k)) = 1 if Y(k) ≠ A(k) and g(Y(k)) = 0 if Y(k) = A(k).
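As a minimal sketch of these three steps, the following Python fragment estimates the bit error rate of a toy antipodal system in additive Gaussian noise. The channel model, the bit count, and the noise level are illustrative assumptions and are not part of the original text.

import numpy as np

def monte_carlo_ber(n_bits=100_000, noise_std=0.5, seed=1):
    """Pure Monte Carlo estimate of the bit error rate for a toy system model."""
    rng = np.random.default_rng(seed)
    # Step 1: generate the input bit sequence {A(k)} and the noise samples
    a = rng.integers(0, 2, n_bits)
    noise = rng.normal(0.0, noise_std, n_bits)
    # Step 2: pass the samples through the (trivial) system model:
    # antipodal mapping, an AWGN channel, and a threshold detector
    y = (2.0 * a - 1.0) + noise
    decisions = (y > 0).astype(int)
    # Step 3: estimate P_e as the fraction of samples with g(Y(k)) = 1
    return np.mean(decisions != a)

print(monte_carlo_ber())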

Fig. 2. Simulation model of a communication system: binary PAM modulator, filter, nonlinear amplifier, channel, receive filter, sample-and-decide block, and a compare-and-count-errors block that produces the bit error rate estimate.

The accuracy of estimates obtained via MC simulation will depend on the estimation procedure used, the sample size, the ability to reproduce sampled values of the input processes accurately, and the modeling assumptions and approximations.

Not all simulations are Monte Carlo or pure Monte Carlo simulations. Monte Carlo implies that at least one random process is emulated, and pure Monte Carlo implies that all the input processes are emulated. In many simulation applications it is not necessary to simulate all the random processes present in the system. In the example shown in Fig. 2, the error rate performance of the system depends on the noise and on the cumulative distortion introduced by the nonlinearity and the filters. Since the noise is Gaussian and additive at the input, its net effect at the output of the filter, and hence on the decision metric, will also be additive and Gaussian. In the absence of distortion, the effect of AWGN can be handled analytically without simulation. Even with distortion, the effect of AWGN can still be characterized analytically for a given value of distortion. MC simulations in which only some of the input processes are simulated explicitly, while the effects of the other processes are handled using analytical techniques, are called partial MC or quasianalytical (QA) simulations. The main advantage of a QA simulation is that it requires fewer simulated samples than a pure MC simulation to produce estimates with the same accuracy.

3. Random Number Generation

The key to implementing a Monte Carlo simulation is the generation of sequences of random numbers which represent the sampled values of the input random processes. In the modeling and simulation of random processes a fundamental assumption is often made: that the underlying random processes are ergodic. The ergodicity assumption is the key to applying simulation results obtained from one sample function of a process to the entire ensemble of the process.

Random number generation on computers starts with a formula for generating random numbers that are uniformly distributed in the interval [0, 1]. By applying appropriate transformations to the sequence of uniformly distributed random numbers we can generate random numbers with arbitrary distributions and temporal properties.

3.1. Generation of Uniform Random Numbers

The preferred approach for generating random numbers involves the use of recursive formulas that are computationally efficient. One popular method, the congruential method, uses the recursion

X(k) = [\alpha X(k-1)] \bmod M, \quad k = 1, 2, \ldots

where \alpha is a carefully chosen integer between 1 and M, and M is a large prime number or an integer power of a prime, p^m. The random number generation is started using an initial seed value X(0). This generator will produce integers that are uniformly distributed between 1 and M - 1. If M is very large and \alpha is chosen properly, the numbers in the sequence will appear random and the sequence will have a maximum period of M - 1. For generating random numbers on computers with a word size of 32 bits for integer arithmetic, the following formula is recommended:

X(k) = [16807 \, X(k-1)] \bmod (2^{31} - 1)

The Wichmann-Hill algorithm is used to generate sequences with a longer period. It involves linearly combining the outputs of different random number generators with shorter periods. If we sum two periodic waveforms with periods N_1 and N_2, the resulting waveform will have a period N = lcm(N_1, N_2). If N_1 and N_2 are relatively prime with respect to each other, then N = N_1 N_2. The Wichmann-Hill algorithm is based on this principle, and it combines the outputs of three random number generators according to

X(n) = 171 \, X(n-1) \bmod 30269
Y(n) = 172 \, Y(n-1) \bmod 30307
Z(n) = 170 \, Z(n-1) \bmod 30323

and

U(n) = \left( \frac{X(n)}{30269} + \frac{Y(n)}{30307} + \frac{Z(n)}{30323} \right) \bmod 1

The period of the sequence produced by this algorithm is the least common multiple of the three component periods, roughly 7 × 10^12. The main advantages are that the algorithm is portable (machine independent) and that it guarantees the maximum period by construction, so there is no need to verify the period empirically, which is very difficult to do when the period is very long.
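The following Python sketch illustrates the two uniform generators described above, the 16807 multiplicative congruential generator and the combined Wichmann-Hill generator; the seed values are arbitrary assumptions made for the example.

def lcg_16807(seed, n):
    """Multiplicative congruential generator X(k) = 16807*X(k-1) mod (2**31 - 1)."""
    m = 2**31 - 1
    x, out = seed, []
    for _ in range(n):
        x = (16807 * x) % m
        out.append(x / m)          # scale the integers to (0, 1)
    return out

def wichmann_hill(seed_x, seed_y, seed_z, n):
    """Combine three short-period congruential generators into a long-period one."""
    x, y, z, out = seed_x, seed_y, seed_z, []
    for _ in range(n):
        x = (171 * x) % 30269
        y = (172 * y) % 30307
        z = (170 * z) % 30323
        out.append((x / 30269 + y / 30307 + z / 30323) % 1.0)
    return out

print(lcg_16807(seed=12345, n=3))
print(wichmann_hill(1, 2, 3, n=3))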

The algorithm introduced by Marsaglia and Zaman is a linear recursive algorithm that employs a longer memory together with a carry or borrow operation in the recursion. There are two similar versions of this algorithm, the subtract-with-borrow and the add-with-carry algorithm. The subtract-with-borrow algorithm has the form:

Z_i = X_{i-r} - X_{i-s} - C_{i-1}

X_i = Z_i \quad \text{if } Z_i \ge 0, \qquad X_i = Z_i + b \quad \text{if } Z_i < 0

C_i = 0 \quad \text{if } Z_i \ge 0, \qquad C_i = 1 \quad \text{if } Z_i < 0

The borrow bit C is initially set to zero and is toggled between 0 and 1 depending on whether the recursion produces a positive or negative integer Z_i. The parameters that define this generator are the base b and the lags r and s. It can be shown that, in order to guarantee the maximum period, the constants b, r and s must be chosen such that M = b^r - b^s + 1 is a prime with b a primitive root mod M. For 32-bit machines, b = 2^{32} - 5, r = 43, and s = 22 produce a period of about 1.65 × 10^{414}.

3.2. Methods of Generating Random Numbers from an Arbitrary pdf

Transform method (analytical). By applying a simple transformation to a uniform random variable U, we can generate a random variable Z with an arbitrary pdf f_Z(z) as follows. Consider the transformation Y = F_Z(Z), where F_Z(z) is the cumulative distribution function (CDF) of the random variable Z. Then Y has a uniform pdf, and we can use this fact to generate Z via Z = F_Z^{-1}(Y):

1. Generate U uniformly distributed in [0, 1].
2. Output Z = F_Z^{-1}(U); Z has the pdf f_Z(z).

If the CDF of Z and its inverse can be expressed in closed form, then step 2 of the transform method is easy to implement. Otherwise both F_Z(.) and the inverse F_Z^{-1}(.) have to be computed using numerical methods.

Transform method (empirical). When the inverse transform cannot be expressed in closed form, the following empirical search algorithm can be used to implement the transform method. If Z is a continuous random variable, the distribution is first quantized. If p_1, p_2, ..., p_N are the probabilities of the cells of the uniformly quantized distribution, then the following algorithm can be used to generate samples of the random variable Z:

1. Generate U, uniform in [0, 1].
2. Let F_i = \sum_{j=1}^{i} p_j, i = 0, 1, 2, ..., N, with F_0 = 0.
3. Find the smallest value of i that satisfies F_{i-1} < U \le F_i, i = 1, 2, ..., N.
4. Output Z = z_{i-1} + (U - F_{i-1}) / C_i and return to step 1.

Besides these two basic methods, other methods can be used, among which are the transform method for discrete random variables, the acceptance/rejection method of generating random numbers, and the modified acceptance/rejection method.

3.3. Generating Gaussian Random Variables

3.3.1. Sum-of-12 Method

The simplest method of generating random numbers with a standard Gaussian pdf is via the use of the central limit theorem, according to the formula

Y = \sum_{k=1}^{12} U(k) - 6.0

where U(k), k = 1, 2, ..., 12, is a set of independent, identically distributed random variables with a uniform distribution in the interval [0, 1]. The number 12 is traditional and represents some compromise between speed and accuracy, but there is no reason to limit k to 12.

3.3.2. Box-Muller Method

It is well known that if X and Y are independent zero-mean Gaussian variables, then R = (X^2 + Y^2)^{1/2} and \theta = \tan^{-1}(Y/X) have Rayleigh and uniform pdfs, respectively. This fact can be used to generate two samples of a Gaussian variable. If U_1 and U_2 are two independent variables uniformly distributed in the interval [0, 1], then

X = [-2 \ln(U_1)]^{1/2} \cos(2\pi U_2), \qquad Y = [-2 \ln(U_1)]^{1/2} \sin(2\pi U_2)

are independent Gaussian random variables with mean zero and variance 1.

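As a sketch of the transform method of Section 3.2 and the two Gaussian generators of Section 3.3, the following Python fragment draws exponential samples through a closed-form inverse CDF and Gaussian samples by the Sum-of-12 and Box-Muller methods; the exponential example and the parameter values are illustrative choices, not part of the original text.

import math, random

def exponential_inverse_cdf(lam, rng):
    """Analytical transform method: Z = F^{-1}(U) with F(z) = 1 - exp(-lam*z)."""
    u = rng.random()
    return -math.log(1.0 - u) / lam

def gaussian_sum_of_12(rng):
    """Sum-of-12 method: the sum of 12 uniforms minus 6 is approximately N(0, 1)."""
    return sum(rng.random() for _ in range(12)) - 6.0

def gaussian_box_muller(rng):
    """Box-Muller method: two independent N(0, 1) samples from two uniforms."""
    u1 = 1.0 - rng.random()          # shift to (0, 1] to avoid log(0)
    u2 = rng.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

rng = random.Random(1)
print(exponential_inverse_cdf(2.0, rng))
print(gaussian_sum_of_12(rng))
print(gaussian_box_muller(rng))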
3.4. Generating Independent Random Sequences

3.4.1. White Gaussian Noise

White noise has a constant power spectral density (PSD) for all frequencies,

S_N(f) = \frac{N_0}{2}, \quad -\infty < f < \infty

However, practical systems have a finite bandwidth B, and the sampling rate f_s for simulation is chosen to be greater than 2B. For simulation purposes, sampled values of Gaussian white noise are generated. The sampled PSD leads to an autocorrelation function with zero crossings at \tau = k T_s, T_s = 1/f_s. The variance of the samples is N_0 f_s / 2. Hence, sampled values of bandlimited white Gaussian noise can be simulated using an independent sequence of Gaussian variables with zero mean and variance

\sigma_s^2 = \frac{N_0 f_s}{2}

Such a sequence can be generated by multiplying the output of a zero-mean, unit-variance Gaussian random number generator by \sigma_s.

3.4.2. Random Binary Sequences and Random Binary Waveforms

A random binary sequence {b_k}, b_k = 0 or 1, can be generated from a uniform sequence {U_k} by

b_k = 0 \quad \text{if } U_k \le p_0, \qquad b_k = 1 \quad \text{if } U_k > p_0

where p_0 = P[b_k = 0].

3.4.3. Pseudorandom Binary Sequences

A random binary sequence consists of a statistically independent sequence of 0s and 1s, each occurring with probability 1/2. A pseudonoise (PN) or pseudorandom sequence is a periodic binary sequence with an autocorrelation function that resembles the autocorrelation function of a random binary sequence. The PN sequence is generated using a feedback shift register arrangement, which consists of binary storage elements and feedback logic. A feedback shift register is called linear if the feedback logic consists entirely of modulo-2 adders. An m-stage linear feedback shift register generates PN sequences according to

S_n = c_{m-1} S_{n-1} \oplus c_{m-2} S_{n-2} \oplus \cdots \oplus c_1 S_{n-m+1} \oplus c_0 S_{n-m}

where S_n is the value of the sequence at time n, and the coefficients c_i are binary valued (a small sketch of such a generator follows).
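The following Python fragment is a minimal sketch of such an m-stage linear feedback shift register; the 4-stage register, the tap coefficients (chosen to correspond to a primitive polynomial), and the initial state are illustrative choices.

def lfsr_pn_sequence(taps, state, n_bits):
    """Linear feedback shift register: S_n is the modulo-2 sum of tapped stages.
    taps  -- coefficients [c_{m-1}, ..., c_1, c_0]
    state -- initial register contents [S_{n-1}, ..., S_{n-m}], not all zero
    """
    out = []
    for _ in range(n_bits):
        new_bit = 0
        for c, s in zip(taps, state):
            new_bit ^= c & s            # modulo-2 adders in the feedback path
        out.append(new_bit)
        state = [new_bit] + state[:-1]  # shift the register by one stage
    return out

# A 4-stage register with a primitive feedback polynomial gives a
# maximal-length sequence of period 2**4 - 1 = 15.
print(lfsr_pn_sequence(taps=[1, 0, 0, 1], state=[1, 0, 0, 0], n_bits=15))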

Let us associate with the coefficients a polynomial of degree m,

P_m(X) = X^m + c_{m-1} X^{m-1} + \cdots + c_1 X + c_0

Then it can be shown that the sequence {S_n} will be a maximal-length sequence if and only if P_m(.) is a primitive polynomial. A very important property of PN sequences is that a maximal-length sequence contains every possible m-bit pattern exactly once within each period, except for the pattern of m zeros. This property is useful for simulating intersymbol interference (ISI). To simulate the effect of ISI, we need to drive the system with a binary sequence that has all possible combinations of m bits, where m is the memory length of the system measured in bit intervals.

3.4.4. M-ary Pseudonoise Sequences

Many digital communication systems use M-ary waveforms (M-ary phase, frequency, and amplitude). To simulate the performance of these systems, we may use M-ary PN sequences, which can also be generated using feedback shift registers. The generation of such sequences is a generalization of the binary case. Consider the degree-m monic polynomial

P_m(X) = X^m + a_{m-1} X^{m-1} + \cdots + a_1 X + a_0

with coefficients a_i in a finite field of q elements. Finite fields are called Galois fields, and a finite field of q elements is denoted GF(q). In M-ary communication we normally use values of M = 2^k. Hence, we want to set q = M = 2^k and develop the arithmetic in GF(2^k).

4. Generation of Correlated Random Sequences

In many applications there is a need to generate a random sequence which has a specified autocorrelation function R(k) or R(\tau), or power spectral density S(f). When R(\tau) ≠ 0 for \tau = k T_s, k = ±1, ±2, ..., the process is correlated at multiples of the sampling interval, and hence samples of the random process will be correlated. In order to simulate the sampled values of such a process, we need algorithms for generating correlated sequences of random numbers with a given autocorrelation function R(k T_s).

4.1. Correlated Gaussian Sequences

An uncorrelated Gaussian sequence can be transformed into a correlated Gaussian sequence through a linear transformation or filtering, which preserves the Gaussian distribution but alters the correlation properties. The coefficients of the linear transformation can be obtained from the specified correlation function of the output.

4.1.1. Autoregressive and Moving Average (ARMA) Models

A correlated Gaussian sequence Y(n) can be generated from an uncorrelated Gaussian sequence X(n) using an autoregressive moving average [ARMA(p, q)] model of the form

Y(n) = \sum_{i=1}^{p} \phi_{pi} Y(n-i) + \sum_{j=1}^{q} \theta_{qj} X(n-j) + X(n)

where the first sum is the autoregressive part and the second sum is the moving average part, X(n) is the input sequence, an uncorrelated Gaussian sequence with zero mean and variance \sigma_X^2, Y(n) is the output sequence, and \phi_{pi}, i = 1, 2, ..., p, and \theta_{qj}, j = 1, 2, ..., q, are the parameters of the autoregressive and moving average parts of the model, respectively. The model is a linear time-invariant discrete-time filter with transfer function

H(f) = \frac{1 + \sum_{j=1}^{q} \theta_{qj} \exp(-j 2\pi f j)}{1 - \sum_{i=1}^{p} \phi_{pi} \exp(-j 2\pi f i)}

and it produces an output power spectral density of the form

S_{YY}(f) = S_{XX}(f) \, |H(f)|^2 = \sigma_X^2 \, |H(f)|^2

When we implement an ARMA model, we need p initial values of Y(k) to get the recursion started. If these initial conditions are chosen arbitrarily, the output will have a transient in the beginning, and the first few samples (of the order of 10p) should be ignored.
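As a minimal sketch of this idea, the following Python fragment generates a correlated Gaussian sequence with a first-order autoregressive model (an ARMA(1, 0) special case); the coefficient value, the sequence length, and the number of discarded start-up samples are illustrative assumptions.

import numpy as np

def correlated_gaussian_ar1(phi, n, sigma_x=1.0, seed=0, discard=1000):
    """Generate Y(n) = phi*Y(n-1) + X(n), with X(n) white Gaussian noise.
    The first `discard` samples are dropped to remove the start-up transient."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma_x, n + discard)
    y = np.zeros(n + discard)
    for k in range(1, n + discard):
        y[k] = phi * y[k - 1] + x[k]
    return y[discard:]

y = correlated_gaussian_ar1(phi=0.9, n=10_000)
print(np.corrcoef(y[:-1], y[1:])[0, 1])   # lag-1 correlation, close to phi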

4.1.2. Spectral Factorization Method

We can also design a filter in the frequency domain to transform an uncorrelated Gaussian sequence into a correlated sequence. When we pass an independent Gaussian sequence through a filter, the output Gaussian process will have a power spectral density given by

S_{YY}(f) = S_{XX}(f) \, |H(f)|^2

In many cases the output spectral density may be given in empirical form, or it may be in an analytical form that is not conducive to factoring. In these cases, one of two approaches can be taken:

1. Empirically fit an analytical form to the given psd and then apply the spectral factorization.
2. Directly synthesize an FIR filter by setting the filter transfer function equal to the square root of the given psd.

5. Testing of Random Number Generators

5.1. Stationarity and Uncorrelatedness

The output of RNGs should be stationary and uncorrelated. A simple test for stationarity involves the following steps:

1. Generate a long sequence of samples.
2. Divide it into a number of nonoverlapping segments and compute the mean and variance of each segment.
3. Test for the equality of the means and variances.

The following procedure can be used to test for uncorrelatedness and periodicity:

1. Generate a very long sequence.
2. Divide the data into overlapping segments with an overlap of one half of the segment length. Let X_0, X_1, X_2, etc., represent the data vectors.
3. Compute the normalized cross-correlation between X_0 and X_k, k = 0, 1, 2, ...
4. Plot \hat{\rho}_{XX}(k) and check for peaks.

Figure 3 shows some typical results.

Durbin-Watson Test for Correlation. This is a more rigorous procedure for testing the hypothesis that adjacent members of the sequence X(n) are uncorrelated. The test statistic used is

D = \frac{\sum_{n=2}^{N} [X(n) - X(n-1)]^2}{\sum_{n=1}^{N} [X(n)]^2}

If X(n) and X(n-1) are uncorrelated, then D has an expected value of 2. The value of D will be much smaller if there is strong positive correlation and will approach 4 if there is strong negative correlation.
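A small Python check of the Durbin-Watson statistic, applied here to an uncorrelated Gaussian sequence from a library generator, is sketched below; the sequence length is an arbitrary choice.

import numpy as np

def durbin_watson(x):
    """D = sum[(X(n) - X(n-1))^2] / sum[X(n)^2]; near 2 for an uncorrelated
    zero-mean sequence, smaller for positive correlation, near 4 for negative."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.diff(x) ** 2) / np.sum(x ** 2)

rng = np.random.default_rng(0)
print(durbin_watson(rng.normal(size=100_000)))   # expect a value close to 2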

6. Summary

Random variables and random processes are used to model signals, noise, and interference in communication systems, and also to model the randomly time-varying behavior of components such as certain types of radio communication channels. The starting point for random number generation is a computationally efficient algorithm for generating an uncorrelated sequence of uniformly distributed random numbers. The output of such a generator can be used to produce both correlated sequences and arbitrary distributions by applying linear and nonlinear transformations. Random sequences that represent the output of digital sources can be generated using a linear feedback shift register arrangement, which is computationally very efficient.

Fig. 3. Example of a correlated sequence: (a) sample function; (b) periodogram of x(t); (c) autocorrelation of x(t).

Problem 1. Compare the computational efficiency of the Gaussian random number generators (Sum-of-12 and Box-Muller), i.e., generate say 10000 samples using each method and compare the run times.
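One possible way to carry out this comparison in Python is sketched below, reusing the two generators of Section 3.3; the sample count of 10000 follows the problem statement, while the timing approach is an illustrative choice.

import math, random, time

def sum_of_12(n, seed=1):
    rng = random.Random(seed)
    return [sum(rng.random() for _ in range(12)) - 6.0 for _ in range(n)]

def box_muller(n, seed=1):
    rng, out = random.Random(seed), []
    while len(out) < n:
        u1, u2 = 1.0 - rng.random(), rng.random()
        r = math.sqrt(-2.0 * math.log(u1))
        out += [r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)]
    return out[:n]

def run_time(fn, n=10_000):
    start = time.perf_counter()
    fn(n)
    return time.perf_counter() - start

print("Sum-of-12 :", run_time(sum_of_12), "s")
print("Box-Muller:", run_time(box_muller), "s")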