
SOURCE-CHANNEL CODEC FOR A WCDMA BASED MULTIMEDIA SYSTEM

by

Boris Backovic, B.Eng.
Toronto, September 2005

A Project Report presented to Ryerson University in partial fulfillment of the requirements for the degree of Master of Engineering in the Program of Electrical Engineering.

Toronto, Ontario, Canada, 2005
© Boris Backovic, 2005


The Journey of Success

When choosing the path to follow, I selected the road heading west.
It began in the Forest of Childhood, and ceased at the City of Success.
My bag was packed full of knowledge, but also some fears and some weights.
My most precious cargo was a vision of entering the City's bright gates.
I reached an impassable river, and feared that my dream had been lost.
But I found a sharp rock, cut down a tree, and created a bridge, which I crossed.
It started to rain, and I was so cold, I shivered and started to doubt.
But I made an umbrella out of some leaves and kept all the cold water out.
The journey took longer than I had planned; I had no food left in my dish.
Rather than starve before reaching my dream, I taught myself how to fish.
I grew awfully tired as I walked on and on, and I thought of the weights in my pack.
I tossed them aside, and I sped up again. Fear was all that was holding me back.
I could see the City of Success, just beyond a small grove of trees.
At last, I thought, I have reached my goal! The whole world will envy me!
I arrived at the city, but the gate was locked. The man at the door frowned and hissed,
"You have wasted your time. I can't let you in. Your name is not on my list."
I cried and I screamed and I kicked and I shook; I felt that my life had just ceased.
For the first time ever, I turned my head, and for once in my life faced the east.
I saw all the things I had done on my way, all the obstacles I'd overcome.
I couldn't enter the City, but that didn't mean I hadn't won.
I had taught myself how to ford rivers, and how to stay dry in the rain.
I had learned how to keep my heart open, even if sometimes it lets in some pain.
I learned, facing backwards, that life meant more than just survival.
My success was in my journey, not in my arrival.

Nancy Hammel

Abstract

The project deals with the operation of a Source-Channel Codec for a WCDMA Based Multimedia System. The system is meant to transfer and receive both digitized speech and still image signals. It uses a part of the WCDMA technology to mix the transmitted signals through the implementation of Direct Sequence Spread Spectrum and chip sequencing methodologies. The Walsh code algorithm is used to ensure orthogonality among the different chip sequences. On the transmitter side the system first offers the formatting stage, where both a speech and a still image signal are digitized. The following stage in the system performs a significant degree of data compression by applying appropriate compression algorithms: Lempel-Ziv-Welch for the speech signal and the Huffman Code Algorithm for the still image. These compression algorithms are implemented in the Source Encoder stage of the system. The system also provides basic FEC (Forward Error Correction) capabilities, using both Linear Block Code and Convolutional Code algorithms introduced in the Channel Encoder stage. The goal of these FEC algorithms is to detect and correct errors that occur during the transmission of data due to channel imperfections. At the WCDMA stage the two signals are added together, forming an aggregated signal that is transmitted through the channel. On the receiver side a digital demodulator separates the aggregated signal into two signals using the orthogonality of vectors. The Channel Decoder stage then follows, where both signals, which have been corrupted during transmission through the channel due to channel imperfections, are recovered. The imperfections in the channel are simulated by random noise added to the aggregated signal in the WCDMA stage of the system. The last stage in the system, the Source Decoder stage, deals with the conversion of the received signals from digital to analog form and with the reconstruction of the signals so that they can again be heard (speech) and seen (still image). Each stage in the system is simulated using the MATLAB programming language. The report consists of three major parts: a theoretical part, where the theory behind each stage in the system is explained; an example part, where applicable numerical examples are provided and analyzed for a better understanding of both the theory and the Matlab code; and a results part, where the Matlab results for each stage are analyzed.

Table of Contents

Abstract  I
Chapter 1: INTRODUCTION  1
Chapter 2: FORMATTING  7
2.1 Speech Formatting  7
2.1.1 Sampling  8
2.1.2 Quantizing  9
2.1.3 Pulse Code Modulation (PCM)  13
2.1.4 Matlab Implementation of Formatting  14
2.2 Image Formatting  19
Chapter 3: SOURCE CODEC  23
3.1 Huffman Coding Algorithm  23
3.1.1 Huffman Encoding  25
3.1.2 Huffman Decoding  28
3.1.3 Matlab Implementation of Huffman Coding Algorithm  30
3.2 Lempel-Ziv-Welch Algorithm  32
3.2.1 Matlab Implementation of Lempel-Ziv-Welch Algorithm  37
Chapter 4: CHANNEL CODEC  41
4.1 Channel Codes  41
4.1.1 Parity Check Codes  41
4.1.2 Linear Block Codes  43
4.1.3 Linear Block Code Encoding  45
4.1.4 Linear Block Code Decoding  46
4.1.5 Matlab Implementation of Linear Block Codes  53
4.2 Convolutional Codes  54
4.2.1 Convolutional Code Encoding  56
4.2.1.1 Impulse Response of the Convolutional Encoder  57
4.2.1.2 Polynomial Representation of Convolutional Encoding  58
4.2.1.3 State Diagram  59
4.2.1.4 Trellis Diagram  60
4.2.2 Convolutional Code Decoding  61
4.2.2.1 Viterbi Decoding Algorithm  62
4.2.3 Matlab Implementation of the (2,1,4) Convolutional Code  66
Chapter 5: WCDMA  67
5.1 WCDMA Technology  67
5.1.1 Direct Sequence Spread Spectrum  68
5.1.2 Code Division Multiple Access  70
5.1.2.1 Walsh Orthogonality  73
5.2 Matlab Implementation of WCDMA  77
Chapter 6: RESULTS  80
Conclusion  92
References  94
Appendix A: Matlab Simulation Files  96
Appendix B: Matlab Example Files  135

List of Figures

Figure 1: Source-Channel Codec for a WCDMA Based Multimedia System.  3
Figure 2: Frequency Spectrum of the Signal speech.wav.  15
Figure 3: Signal speech.wav in Time Domain.  16
Figure 4: Grey Level Image of a Diagonal Black Line.  20
Figure 5: PDF Function of the Black Diagonal Line.  22
Figure 6: A Huffman Tree.  26
Figure 7: Single Parity Check Code.  42
Figure 8: Rectangular Parity Code (I = 4, J = 6).  43
Figure 9: Decoding Table for Linear Block Codes.  48
Figure 10: (2,1,4) Convolutional Encoder.  55
Figure 11: Trellis Diagram Representation of (2,1,4) Convolutional Encoder.  60
Figure 12: The Decoder Trellis Diagram (with Hamming Distances).  64
Figure 13: The Decoder Trellis Diagram after t=5 Time Units.  65
Figure 14: Direct Sequence Spread Spectrum.  68
Figure 15: Right Shifted Generator Polynomial.  69
Figure 16: Speech Data.  80
Figure 17: Still Image Data.  80
Figure 18: Frequency Spectrum of Speech Signal.  81
Figure 19: PDF of Quantization Error.  81
Figure 20: Frequency of Usage of Quantizer Levels.  83
Figure 21: PDF of the Image.  84
Figure 22: BER vs. SNR for (12,8) Linear Block Code.  88
Figure 23: Received Speech Data.  91
Figure 24: Received Image.  91

List of Tables

Table 1: Generator Polynomials for Convolutional Codes.  56
Table 2: Impulse Response of the (2,1,4) Encoder.  57
Table 3: State Diagram of the (2,1,4) Convolutional Encoder.  59
Table 4: Hamming Distance Used for Decoding Convolutional Codes.  61

Chapter 1: INTRODUCTION

Even though personal computers are nowadays the dominant Internet access client, mobile phones and handheld devices will very soon become the major source of Internet connections. Unlike the first generation (1G) of mobile communication systems, designed mostly to carry voice traffic, the third generation (3G) of communication systems promises unparalleled access in ways that have never been possible before. Internet access, voice communication over Internet protocol, and transmission of still and moving images are just a few of the communication techniques used in the always-on type of access that 3G developers have been working on. These developers envision an ordinary user receiving live music, conducting interactive web sessions, and having simultaneous voice and data access with multiple parties at the same time. Undoubtedly, this kind of access requires a special technology so that computers, handheld devices, or any other appropriately geared communication device may all be connected anytime, anywhere. Wideband code division multiple access (WCDMA) technology is one of the main technologies for the implementation of the third generation of communication systems that allows very high-speed multimedia services to be performed. WCDMA will support high-rate, high-quality data, multimedia, streaming audio, streaming video, and broadcast-type services among users. In addition, WCDMA designers contemplate that broadcasting, mobile commerce, games, interactive videos, and even virtual private networking will be possible in the near future, all from small portable devices. These new and exciting possibilities of connecting people, businesses, and even industries all over the world by using rapidly developing technologies and standards make it impossible to imagine what modern living would be like without access to reliable, economical, and efficient means of communication. Consequently, an ordinary person may easily get confused and scared away by the concepts of all these communication technologies and their implementations. However, the sole purpose of communications has not changed much since the time of Guglielmo Marconi who, in 1897, demonstrated radio's ability to provide continuous contact with ships sailing the English Channel. Even today, a communication is still a way of conveying or transmitting information from one place to another. The definition sounds simple and does not tell us much about what is involved in information transmission as a means of communication between two or more parties. The answer would probably turn out to be more complicated than it might first appear and will, in fact, form the basis for this project.

A simple answer would be that the transmission of information requires some kind of signal in order to convey a message to the other party. The signal could be a voltage, but once the voltage is established there is not much ability to convey information unless we change the value of the voltage. The next step would be to attach the voltage (a battery) to a variable resistor, creating more variations of possible voltages, which in turn allows us to associate more pieces of information with different voltage levels. If a signal varies with time (say a 120 V, 60 Hz sinusoidal voltage), then variations of the signal may be created by changing either the amplitude or the phase of the sinusoidal voltage. At this point we will make a rough distinction between two types of signals that could be used in the process of information transmission. If a signal is a continuous electrical signal that varies with time, it is considered to be an analog signal. On the other hand, if a signal is non-continuous, it is said to be a digital signal. Furthermore, analog signals can take on any value from an infinite set of values in a specific range, while digital signals take on one of two possible amplitude levels, called nodes. Digital signals consist of pulses or digits with discrete levels or values. The value of the signal is specified as one of two possibilities, such as 1 or 0, high or low, true or false, and so on.

In the process of transmitting information, all signals bearing information are contaminated by noise. Noise is generated by many natural and man-made events that introduce errors during the transmission and create serious problems for the party receiving the information, which must be able to properly recover the transmitted messages. When an analog signal is affected by noise, it is much more difficult to regenerate that signal than it is to recover a signal of digital origin. When affected by noise, a pulse (the representation of a digital signal) degrades as a function of line length. Before the pulse degrades to an ambiguous state, it can be amplified by a digital amplifier that recovers the pulse's original shape. In the case of analog signals, once an analog signal is distorted, the distortion can never be completely removed by amplification. Besides being more resilient to noise, there are many other features that make a digital signal more suitable for conveying information than an analog signal. Some of them are: reliability of the system, flexibility of the hardware, and price of the implementation. On the other hand, a digital transmission typically requires a greater system bandwidth to communicate the same information than an analog transmission does. All this makes the digital signal the signal of choice for communication when information is to be transmitted in a reliable, robust, and relatively inexpensive manner.
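To make the regeneration idea concrete, here is a minimal Matlab sketch (an aside, not part of the report's simulation; the bit pattern and the noise level are made up for the illustration) showing a noisy digital pulse train being recovered with a simple decision threshold:

% Regenerating a noisy digital pulse train with a hard decision threshold.
bits  = [1 0 1 1 0 0 1 0];            % example binary message
tx    = 2*bits - 1;                   % map 0/1 to -1/+1 pulse amplitudes
noisy = tx + 0.4*randn(size(tx));     % additive noise picked up along the line
rx    = noisy > 0;                    % decision threshold at zero volts
isequal(rx, logical(bits))            % true whenever the noise flipped no bits

An analog waveform distorted by the same noise has no such threshold to fall back on, which is exactly the point made above.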

[Figure 1 is a block diagram of the complete system. On the transmitter side, Channel A (speech) passes through Formatting (Sampling, Quantizing, Pulse Code Modulation), Source Encoding (Lempel-Ziv-Welch Algorithm, LZW), Channel Encoding (Linear Block Code, LBC), and the WCDMA stage (Direct Sequence Spread Spectrum and Code Division Multiple Access, driven by pseudo random sequence generators PRSG1 and PRSG2); Channel B (image) passes through Formatting (PDF), Source Encoding (Huffman Coding Algorithm, HCA), Channel Encoding (Convolutional Code), and the same WCDMA stage. On the receiver side the order is reversed: WCDMA despreading (DSSS and CDMA with PRSG1 and PRSG2), Channel Decoding (Linear Block Code for Channel A, Viterbi Algorithm for Channel B), and Source Decoding (LZW for Channel A, Huffman Coding Algorithm for Channel B).]

Figure 1: Source-Channel Codec for a WCDMA Based Multimedia System.

This project simulates the basic elements of the digital communication system shown in Figure 1. In keeping with the introduction given at the beginning of this report, this system may be considered a primitive system of the third generation (3G) of communication systems. Here are the reasons:

a) It is a digital system, meaning that regardless of the format of the input message, the system will convert the message into a stream of binary digits. An exception applies only when the input message is already in digital format. This process of transforming any form of input message into a stream of bits (digital form) is called formatting and will be discussed in detail in Chapter 2;

b) It is a multimedia system, meaning that it processes more than one type of media at the same time (speech and still image). The system has two channels, where channel A carries speech data while channel B deals with still image data. In addition, the system is considered to be a simplex transmission system. In many cases it is desirable to maintain two-way communications, or at least to be able to send a message back to its origin for possible verification, comparison, or control. To implement this type of system, called full-duplex, another set of blocks (stages), exactly the same as the ones shown in Figure 1, should be incorporated into the system but in reverse order; that second set of blocks would be responsible for the transmission of information from the destination back to the source;

c) It uses WCDMA technology in the process of transmitting data from the source to the destination. At this point, it is worth saying that WCDMA is more than just a technology; it is a standard that establishes and defines various communication protocols and procedures used during a communication session. In this project many of these protocols and procedures will be ignored for the sake of simplicity. However, the one that is considered the main representative of the WCDMA standard, called Medium Access Control (MAC), will be fully presented and simulated. Medium access control defines how digital signals from different sources use the same allocated frequency spectrum to convey different information to different recipients. MAC is discussed in Chapter 5.

After formatting, the next block in our digital communication system is the source encoder stage. Source encoding is the process of removing redundant bits from the

sequence of bits carrying information. In other words, source encoding compresses the data in such a way that only the necessary bits that make up the original information are processed further through the system. In Chapter 3, two source encoding techniques are demonstrated: one for channel A, which deals with the source encoding of the speech signal formatted into a stream of bits in the previous (formatting) stage; that source encoding technique is the Lempel-Ziv-Welch technique. The other technique, the Huffman Coding Algorithm, is used for channel B to encode the still image data which is, at this point in the process, already in binary form. The chapter also introduces the ways in which data encoded by the two encoding techniques can be decoded at the receiver. Since a source encoder and source decoder generally operate in pairs, this combination of a source encoder and decoder is called a source codec (coder-decoder).

The stage following the source encoding stage is the channel encoding stage. The purpose of the channel encoder is to introduce, in a controlled manner, some redundancy into the binary information stream that can be used at the receiver to overcome the effects of noise and interference encountered in the transmission of the bits through the channel. In other words, the system will now add some redundant bits to the binary information for the purpose of detecting and correcting some of the errors that occur during transmission. The redundant bits are formed and organized in such a way that no response back from the receiver is needed in order to detect and correct errors. This type of error control is called forward error correction (FEC) and requires only a one-way link. In Chapter 4, two types of channel encoding techniques are presented: one, for channel A, called the Linear Block Code technique, and the other, for channel B, called the Convolutional Code technique. Along with both techniques, their associated decoding procedures, such as the Viterbi decoding algorithm for Convolutional codes, will be discussed and implemented. Just as a source encoder and decoder operate in pairs, a channel encoder and decoder operate in pairs as well, thus forming what is called a channel codec.

Chapter 5 deals entirely with access control of the medium and, as mentioned earlier, presents the way this issue is managed as defined by the WCDMA standard (technology). A digital modulation technique called Direct Sequence Spread Spectrum is presented along with the Code Division Multiple Access technique. The role of each of them in the channelization process is discussed.
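Returning to the channel encoding idea above, here is a minimal Matlab sketch (an aside, not one of the report's own files; the bit vector is made up) of the simplest form of such controlled redundancy, a single parity check of the kind treated properly in Chapter 4:

% Single parity check: append one redundant bit so that the total
% number of ones in the codeword is even.
data     = [1 0 1 1 0 1 0 1];          % example information bits
parity   = mod(sum(data), 2);          % 1 if the number of ones is odd
codeword = [data parity];              % transmitted word

% At the receiver, recompute the parity over the whole codeword;
% a nonzero result signals that an odd number of bit errors occurred.
received      = codeword;
received(3)   = ~received(3);          % simulate a single bit error in the channel
errorDetected = mod(sum(received), 2) ~= 0   % evaluates to true here

A single parity bit can only detect (not correct) an odd number of errors; the Linear Block and Convolutional codes of Chapter 4 add more structured redundancy so that errors can also be corrected.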

The last chapter, Chapter 6, provides an overview of the results obtained from the simulation, in such a way that the input/output binary sequences to/from each block in the system are examined and discussed. The chapter also gives a conclusion on the project as well as recommendations on how to improve the efficiency and efficacy of the system presented and simulated in this project.

For the end of this introductory section, I would also like to mention one important thing that will become more evident once the reader starts reading through the coming pages: numerical examples! Throughout my entire education as an undergraduate as well as a graduate student, I always felt that any engineering theory or topic introduced in a class would have made much more sense to me if it had been accompanied by a relevant example. That is why, at the end of each chapter in this report, an example associated with the algorithm or technique presented in that chapter is worked out. I believe that these examples, besides clarifying the theory, will also help the reader to understand how the presented algorithm or technique is implemented in Matlab.

Chapter 2: FORMATTING

2.1 Speech Formatting

In general, signals in communication theory can be considered to be either analog or digital. Signals in digital form are also said to be discrete signals. Both types of signals bear information that is conveyed from the source to the destination through different types of media (e.g., air, copper wire, etc.). Besides the analog and digital types of information mentioned above, the data at the source, also called source information, may be found in textual form. Such source data is termed textual information. In digital communication systems the first and essential step in the process of conveying information from the source to the destination is to format the source information, regardless of the form that information is in. That essentially means that the source information has to be processed in a certain way that will make it suitable for further digital processing. Let's consider all three possible forms of source information mentioned earlier and the way they are formatted in order to be compatible with the next stages in a digital communication system. The least troublesome case is an information source, or data, that is already in digital form. It simply bypasses the formatting stage and proceeds to the next processing step. The problem begins with the fact that most data in communications is in either textual or analog format. Data in textual format is usually encoded with one of several standards such as ASCII (American Standard Code for Information Interchange), EBCDIC (Extended Binary Coded Decimal Interchange Code), Baudot, and Hollerith. The aforementioned standards transform the textual data into a digital format. Now we come to analog information at the source, and we are interested in how it can be made suitable for further digital processing. Simply said, we would like to determine the necessary conditions which will allow us to change analog information into digital information without loss of information. As a criterion of how well the process of converting analog information into digital information has been carried out, we use the important condition that the original information can be fully reconstructed by using reversible, digital-to-analog processing steps.
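As a small aside (not taken from the report's own files), the ASCII formatting of a single character mentioned above can be reproduced directly at the Matlab prompt; the character 'A' is just an example:

» dec2bin(double('A'), 8)

ans =

01000001

Here double('A') returns the ASCII code 65, and dec2bin writes it out as an 8-bit binary code word.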

In essence, analog information is formatted using three separate steps: sampling, quantization, and coding.

2.1.1 Sampling

The link between an analog signal and the corresponding digital signal is given by what is known as the sampling theorem. The sampling theorem states the following: a real-valued band-limited signal having no spectral components above a frequency of fm (Hz), known as the maximum frequency of the analog signal, is determined uniquely by its values at uniform intervals spaced no greater than Ts seconds apart, where Ts is:

Ts = 1 / (2·fm).    (1)

This statement is a sufficient condition that an analog signal can be reconstructed completely from a set of uniformly spaced discrete samples in time. The output of the sampling process is called pulse amplitude modulation (PAM) because the successive output intervals can be described as a sequence of pulses with amplitudes derived from the samples of the analog signal. If the equation stated in (1) is applied, all replicas of the original spectral density are just tangent to each other and an ideal low pass filter can (theoretically) be used to reconstruct the original analog signal from the sampled version of that signal. However, if the sampling interval Ts becomes slightly larger than the right side of equation (1), then there will be an overlap of spectral densities and the original signal will not be successfully reconstructed from its sampled version with the help of an ideal low pass filter. In order to avoid the situation described in the previous sentence, it is essential and absolutely necessary in the process of sampling an analog signal that:

Ts ≤ 1 / (2·fm).    (2)

The equation given in (2) is a mathematical interpretation of the sampling theorem. The maximum time interval Ts is called the Nyquist interval. If we want to see equation (2) as the relationship between the maximum frequency of the analog signal fm and the sampling frequency fs, where fs is inversely proportional to Ts, then we arrive at the following equation:

fs ≥ 2·fm.    (3)

Equation (3), given in terms of the sampling and maximum frequencies, is called the Nyquist Sampling Rate.
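A minimal Matlab sketch of the Nyquist criterion in equations (1) to (3) (an aside, not one of the report's M-files; the 1 kHz tone is made up for the example):

% Sample a 1 kHz tone (fm = 1000 Hz) at fs = 8000 Hz, comfortably above
% the Nyquist rate 2*fm required by equation (3).
fm = 1000;                          % maximum signal frequency [Hz]
fs = 8000;                          % chosen sampling frequency [Hz]
Ts = 1/fs;                          % sampling interval [s], satisfies eq. (2)
fprintf('Nyquist rate = %d Hz, Nyquist interval = %g s\n', 2*fm, 1/(2*fm));

t = 0:Ts:0.01;                      % 10 ms of sample instants
x = sin(2*pi*fm*t);                 % the resulting PAM sample values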

In practice, the full potential of the sampling theorem usually cannot be realized, and equations (2) and (3) serve as bounds on actual performance. One practical fact we are faced with in dealing with the sampling theorem is that we cannot build an ideal low pass filter. We can only build a low pass filter with as fast an attenuation rate as possible. One thing that we can do to overcome the lack of an ideal low pass filter is to increase the sampling frequency, to allow some frequency space before the next frequency replica of the sampled analog signal appears. Another practical fact responsible for the sampling theorem not being used to its full potential is that a time-limited signal is never strictly band limited. When such an analog signal is sampled, there will always be some unavoidable overlap of spectral components. Furthermore, in reconstructing the sampled version of the signal, frequency components originally located above one half of the sampling frequency will appear below this point and will be passed by the low pass filter. This is known as aliasing and results in a distortion of the signal. The effects of aliasing can be partially eliminated by applying the best possible low pass filtering before sampling and by sampling at rates greater than the Nyquist rate.

An interesting question arises here: if we want to apply the sampling theorem to bandpass signals, do we still have to obey the rule given by equation (2), stating that we have to sample at twice the highest frequency? The answer is no, since the minimum sampling rate depends on the bandwidth of the signal rather than on its highest frequency. In the case of lowpass signals these two conditions coincide. However, when sampling a bandpass signal we should use a minimum sampling rate in the range between 2 and 4 times the bandwidth of the signal. This minimum rate requirement for bandpass signals approaches the limit of twice the bandwidth as the center frequency of the signal increases.

2.1.2 Quantization

After the sampling comes the second step in the formatting of an analog signal: quantization. Quantization, or quantizing, is the task of mapping samples of an analog signal, obtained through the process of sampling, to a finite set of amplitudes. To make an analogy with the sampling process, quantization relates to the amplitude (y) axis in the same way that sampling relates to the time (x) axis. The simplest quantizer maps each sample of the sampled analog signal to one of the

predetermined quantizer levels. If those predetermined quantizer levels are equally spaced, then we say that the quantizer is a linear quantizer. Similarly, if the levels are not equally spaced, then we say that the quantizer is a nonlinear quantizer. Since this project deals with linear quantization only, the further discussion of the quantization of a sampled analog signal will be strictly limited to a linear quantizer and its characteristics. However, it is important to note that nonlinear quantization provides a much better signal-to-noise ratio (SNR) than linear quantization does. This is particularly evident in speech communication, where very low speech volumes predominate 70% of the time.

A linear quantizer is the universal form of quantizer in the sense that it makes no assumption about the amplitude statistics and correlation properties of the input analog signal. The only two quantities that have to be known in order to implement a linear quantizer are the dynamic range of the sampled signal (DR) and the number of bits with which each sample is represented. The dynamic range is defined as:

DR = max_sig − min_sig,    (4)

where max_sig and min_sig are the maximum and minimum values of the sampled analog signal. The second quantity, the number of bits with which each sample in the sampled version of the analog signal is represented, is directly related to the number of levels of the desired quantizer. The relation between the number of bits for each sample representation and the number of levels of the quantizer is given by:

L = 2^R.    (5)

In equation (5), L is the number of levels of the quantizer and R is the number of bits with which each sample is represented. Now, by taking the base-2 logarithm of each side of equation (5), we get another relation between the number of bits for each sample representation and the number of levels of the quantizer:

R = log2(L).    (6)

At this point it is essential to reveal a restriction and an observation associated with the process of quantization. The restriction is that equations (5) and (6) are valid only if we intend to apply a fixed-length representation of each sample. In the case of a variable-length representation, equations (5) and (6) are no longer valid. A further overview and discussion of fixed- and variable-length representations of samples will be given in

Chapter 3, when we deal with the Huffman code algorithm. The observation to be revealed is that, once quantized, the instantaneous values of the analog signal can never be exactly reconstructed again.

Once we establish a desired number of levels for the quantizer, we can easily determine the size of each quantizer level, usually called the quantile interval. A reminder: a quantizer has all its quantile intervals of the same size only if that quantizer is a uniform quantizer. The size of each quantile interval q is given by the following equation:

q = (max_sig − min_sig) / L.    (7)

A linear quantizer works in a very simple way. The inputs to the quantizer are samples of an analog signal obtained through the process of sampling, while the outputs from the quantizer are the predetermined values of the quantizer's levels that the samples are mapped to. The difference between the input and output of the quantizer is called the quantization error. Mathematically, the quantization error is represented as:

e(n) = x(n) − x̂(n),    (8)

where x(n) represents the input vector containing the signal samples, x̂(n) is the quantized output vector, and e(n) is the error vector. The quantization error vector is also referred to as the quantization noise. Under the restriction that the input signal has a smooth probability density function (PDF) over the quantization interval, it can be assumed that the quantization errors are uniformly distributed over the quantization interval ranging from −q/2 to q/2. Every probability density function must be greater than or equal to zero and must satisfy the following condition:

∫_{−q/2}^{q/2} f(x) dx = 1.    (9)

In order to satisfy equation (9), the probability density function of the quantization error, p(e), must be equal to 1/q in the interval from −q/2 to q/2. Outside that interval, p(e) must be equal to zero. Here is the mathematical interpretation of what has just been said:

p(e) = 1/q  for −q/2 ≤ e ≤ q/2,  and p(e) = 0 otherwise.    (10)

A useful figure of merit for a uniform quantizer is the error variance, where the error variance is defined as:

σ² = ∫_{−q/2}^{q/2} (e − m_e)² p(e) de,    (11)

where m_e is the error mean, which is equal to:

m_e = ∫_{−q/2}^{q/2} e·p(e) de = (1/q) ∫_{−q/2}^{q/2} e de = 0.    (12)

Now, by placing the error mean m_e from equation (12) into equation (11), we get the error variance:

σ² = ∫_{−q/2}^{q/2} e²·p(e) de = (1/q)·[e³/3] evaluated from −q/2 to q/2 = q²/12.    (13)

The error variance is also commonly known as the noise power. Similarly, the signal variance (signal power) is defined as:

σs² = ∫ x²·p(x) dx,    (14)

and can be substituted with the expression for the peak power of an analog signal normalized to 1 Ω:

Ps = Vs²/Rs = Vs² = (Vpp/2)² = (L·q/2)².    (15)

The ratio between the signal variance (signal power) given in equation (15) and the noise power given in equation (13) yields the quantization signal-to-noise ratio, (SNR)q:

(SNR)q = signal power / noise power = (L²·q²/4) / (q²/12) = 3·L².    (16)

The (SNR)q is usually given in units of decibels [dB] and is obtained by applying the following conversion formula:

(SNR)q [dB] = 10·log10((SNR)q).    (17)

From equation (16) we conclude that the signal-to-noise ratio for a uniform quantizer depends solely on the number of levels L. Since, from equations (5) and (6), the number of levels L is determined directly by the number of bits used to represent each sample, we can also say that the signal-to-noise ratio depends on the number of bits used to represent each sample. The more bits used to represent a sample, the better the signal-to-noise ratio. It is easily shown that for each additional bit used to represent a sample (e.g., an increase from a 5-bit to a 6-bit sample representation), the improvement in the (SNR)q is approximately 6 dB. A quick approximation of the (SNR)q for a quantizer is therefore to multiply the number of bits with which each sample is represented by 6 dB. However, the real quantization (SNR)q is much smaller due to imperfections in the quantizer itself (linear vs. nonlinear quantizer). As the number of levels L approaches infinity, the signal approaches its form before quantization (PAM format) and the (SNR)q approaches infinity. In other words, with an infinite number of quantization levels, there is ultimately no quantization noise.
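A minimal Matlab sketch tying equations (4) to (17) together (an aside, not the report's own Quantizing.m file; the sample values are made up):

% A uniform (linear) quantizer built from equations (4)-(7), applied to a
% handful of made-up samples, followed by the SNR estimate of (16)-(17).
sig = [-0.82 -0.11 0.05 0.37 0.93];    % example sampled values
R   = 8;                               % bits per sample
L   = 2^R;                             % number of quantizer levels, eq. (5)
DR  = max(sig) - min(sig);             % dynamic range, eq. (4)
q   = DR / L;                          % quantile interval, eq. (7)

levels = min(sig) + q/2 + (0:L-1)*q;   % mid-points of the L quantile intervals
index  = floor((sig - min(sig)) / q);  % which interval each sample falls into
index  = min(index, L-1);              % keep the maximum sample inside the top level
sig_q  = levels(index + 1);            % quantized output values
e      = sig - sig_q;                  % quantization error, eq. (8)

SNRq    = 3 * L^2;                     % eq. (16), full-range signal assumed
SNRq_dB = 10 * log10(SNRq);            % eq. (17): about 52.9 dB for L = 256
fprintf('q = %g, (SNR)q = %.1f dB\n', q, SNRq_dB);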

2.1.3 Pulse Code Modulation (PCM)

The next step in the process of converting an analog signal into a digital one is to assign a digital value to each quantile interval in such a way that each interval has a one-to-one correspondence with the set of integers. This is called digitization. The process of digitization reduces the original analog signal to a set of digits, at the successive sample times, mapped into the L quantizer levels. Each sample of the original analog signal is assigned the quantization level (quantile interval) closest to the value of the

actual sample. The digits are expressed in coded form. The most common code used for this purpose is a binary code, where each digit is represented as a combination of zeros and ones. Each binary 1 is further represented by a pulse and each binary 0 is represented by the absence of a pulse. Thus, instead of transmitting the individual samples, a combination of zeros and ones, the binary pulse code, is sent at each sample time, carrying the intended information in digitized form. Communication systems making use of this kind of data representation during transmission are commonly called pulse code modulation (PCM) systems.

Waveforms carrying information can be transmitted even more efficiently if we represent them as sequences of transitions between upper and lower voltage levels. When the waveform is at the upper voltage level it represents a 1. Similarly, when the waveform is at the lower voltage level it conveys a 0. There are many PCM waveform types, classified into many groups. Here only a few will be mentioned: Non-Return-to-Zero (NRZ), Return-to-Zero (RZ), Phase Encoded, and Multilevel Binary. The most commonly used are NRZ PCM waveforms. The reason there are so many different types of PCM waveforms lies in the performance differences between the waveform coding schemes. Some schemes are better at performing error detection, some are better at correcting data errors, and some are better at increasing the efficiency of bandwidth utilization. Certain types of PCM waveforms are more immune to noise than others. All of this explains why so many PCM waveform schemes are in use; the decision as to which one to use, and when, greatly depends on the required performance and the characteristics of the digital system used.

2.1.4 Matlab Implementation of Formatting

The formatting process explained in the preceding sections is simulated in Matlab with three M-files: the Sampling.m and Quantization.m files, which describe the sampling and quantization processes, and the PCM.m file, which deals with the process of digitization, where each quantized sample is coded into a corresponding binary word. In the Sampling.m file an analog signal in the form of a sound file (speech.wav) is loaded into the Matlab environment. Matlab provides the command wavread, which performs sampling automatically when the sound file is loaded into the environment. The standard sampling rates for PC-based audio hardware are 8,000, 11,025, 22,050, and 44,100 samples per second. Mono signals are returned as a one-column matrix, while stereo signals are returned as two-column matrices. The first column of a stereo audio matrix corresponds to the left input channel, while the second column corresponds to the right input channel. The wavread command, after being executed, returns two output

variables, the sampled data and the sample rate. The sample rate used for the speech file in this project was 22,050 Hz. Let's see how the sampling frequency is obtained. If we plot the frequency spectrum of the speech signal, as shown in Figure 2, we see that the maximum frequency of the signal is 11,025 Hz. Now, if we recall equation (3), stating that the Nyquist Sampling Rate must be equal to or greater than twice the maximum frequency of the signal, it is clear why the rate at which our signal is sampled is 22,050 Hz.

[Figure 2 shows the magnitude spectrum of the speech signal plotted against frequency in Hz.]

Figure 2: Frequency Spectrum of the Signal speech.wav.

The sampled data contains 110,033 samples. That number can be obtained through a simple calculation where the number of samples is equal to the product of the duration of the signal (in seconds) and the number of samples per second. We already know that the number of samples per second is equal to the sampling frequency, which is 22,050, and we can conclude from Figure 3, which shows the speech signal in the time domain, that the duration of the speech signal is 4.9902 seconds. A simple calculation gives us the number of samples with which the original speech signal is represented after the sampling process: 4.9902 × 22,050 ≈ 110,033 samples.
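As a quick sanity check, here is a minimal sketch of this step (not the report's Sampling.m itself; note that newer Matlab releases replace wavread with audioread):

% Load the speech file and verify the figures quoted above.
[sig, fs] = wavread('speech.wav');   % use audioread('speech.wav') in newer releases
numSamples = length(sig);            % expected: 110,033 samples
duration = numSamples / fs;          % expected: about 4.99 s for fs = 22,050 Hz
fprintf('fs = %d Hz, %d samples, %.4f seconds\n', fs, numSamples, duration);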

[Figure 3 shows the speech waveform plotted against time in seconds.]

Figure 3: Signal speech.wav in Time Domain.

In the Quantizing.m file, as per the earlier discussion (equations (4), (5), and (6)), we first have to specify the dynamic range of the sampled signal as well as the number of levels that our quantizer will have. The Matlab commands max and min allow us to obtain the dynamic range of the sampled signal (see equation (4)). The number of levels L is chosen to be 256, since we deal with a speech signal. Speech signals in general require 8 bits per sample, since that is the minimum number of bits that allows us to hear the original speech via its quantized version. Applying equation (5), we see how the number of quantization levels is obtained. Now that we know the dynamic range of the sampled signal and the number of levels for the quantizer, we can calculate the quantization step q, which is also the size of the quantile interval. Once again, since we deal with a uniform quantizer, each one of the 256 quantile intervals will have the same size. To obtain the value of q we simply apply equation (7). Upon its execution, the Matlab command quantiz produces as output two variables for each sample of the sampled signal: the quantization index index1 and the quantized output value quants1:

[index1, quants1] = quantiz(sig, partition1, codebook1),

where partition1 is a real vector whose elements are the values of the 256 quantizer levels assigned to each sample. The elements of the partition1 vector must be given in strictly ascending order. The input variable codebook1 is a vector codebook that prescribes a value for each partition in the quantization, and its length exceeds the length of the partition1 vector by one. As mentioned earlier, the output variables are index1 and quants1. If the partition1 vector has length n, then the index1 vector is a column vector whose kth entry is given by:

index(k) = 0 if sig(k) ≤ partition(1),
index(k) = m if partition(m) < sig(k) ≤ partition(m+1),
index(k) = n if partition(n) < sig(k).

The output variable quants1 is a row vector whose length is the same as the length of the input sampled signal. The row vector quants1 contains the quantization of the sampled signal based on the quantization levels and prescribed values. The quants1 variable is related to the codebook1 and index1 variables by the following equation:

quants(k) = codebook(index(k) + 1).

In this equation, k takes on integer values between 1 and the length of the sampled signal. The variable index1 contains elements (decimal numbers) that represent the membership of each sample in one of the 256 quantization levels. In other words, those decimal numbers indicate which level L each sample has been assigned to. Any number from 0 to 255 is a valid number that a sample can be assigned to. The task of the PCM.m Matlab file is to convert the decimal index number, index1(k), of each sample into a related code word. Since code words are supposed to be in binary form, each code word is created as a combination of binary ones and zeros. The conversion from decimal to binary is performed by the convert2binary.m Matlab file. The code does the following: it takes each element of the index1 vector and keeps dividing the value by 2 until the result is 1, keeping track of the remainders for each division. When 1 is reached as the result of the division, the remainders are appended to the resulting 1, forming the binary representation of the decimal value. Here is a quick example. Let's convert the number 109 to its binary representation:

109 : 2 = 54 (1),
54 : 2 = 27 (0),
27 : 2 = 13 (1),
13 : 2 = 6 (1),
6 : 2 = 3 (0),
3 : 2 = 1 (1).

The resulting binary representation of the number 109 is: 1 1 0 1 1 0 1.

Here is also the result obtained by the Matlab file d2b.m, where d2b.m is a modified version of convert2binary.m that can be used separately from the main PCM.m file that performs the process of digitization (d2b.m is listed in the Appendix B section of this report):

ans = 1 1 0 1 1 0 1.

There is now one more thing that has to be taken care of. In equations (5) and (6) we gave the relation between the number of levels of the quantizer and the number of bits that each code word should be composed of. Since we use a 256-level quantizer in this project, by applying equation (6) we obtain that each code word should consist of 8 bits. However, we know that only the numbers from 128 to 255 need 8 bits for their binary representation. So, what happens with the numbers from 0 to 127 that do not need all 8 bits for their binary representations (e.g., the number 109 needs 7 bits, see the example above)? Well, to have each number in the range from 0 to 255 represented by 8 bits, we have to add the so-called 'missing' bits. For instance, the number 109's binary representation is a 7-bit combination of ones and zeros: 1 1 0 1 1 0 1. To obtain an 8-bit binary representation we have to concatenate one 0 at the beginning, so that the resulting 8-bit representation is: 0 1 1 0 1 1 0 1. The file convert2binary.m takes care that every number in the [0, 255] interval is represented with 8 bits, creating a so-called fixed-length representation of the code words. At this point we also have to create a stream of bits made up of the binary representation (code word) for each symbol in the information that is to be transmitted. The code words are placed in the stream in FIFO manner, meaning that the code word for the symbol that is to be sent out first is placed at the beginning of the stream, the second code word is appended onto the first one, and so on until all symbols from the information are processed.
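A minimal sketch of this fixed-length coding step (an aside; the report's own convert2binary.m and d2b.m files are listed in the appendices), using Matlab's built-in dec2bin in place of the manual division loop, with a made-up index vector:

% Convert quantizer indices to fixed-length 8-bit code words and
% concatenate them into a single bit stream (FIFO order).
index1    = [109 0 255 42];                % example quantizer indices
codeWords = dec2bin(index1, 8);            % one 8-character row per sample, e.g. '01101101'
bitStream = reshape(codeWords.', 1, []);   % append the code words one after another
bits      = bitStream - '0';               % numeric 0/1 vector for the next stage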

2.2 Image Formatting

Matlab stores a gray level image in the form of a two-dimensional matrix, where each element of the matrix corresponds to a single pixel in the displayed image. For example, an image composed of 200 rows and 300 columns of differently colored dots would be stored in Matlab as a 200 by 300 matrix. Some images, such as RGB (Red-Green-Blue) images, require a three-dimensional matrix to be stored in, where the first plane in the third dimension represents the red pixel intensities, the second plane represents the green pixel intensities, and the third plane represents the blue pixel intensities. If there is a need to convert a color image into a gray level image, a straightforward conversion is used. That is exactly what has been done in this project. A color image is converted into a gray level image in order to save computational time as well as memory space. That means that instead of processing three two-dimensional matrices for a color image, we will be processing only one two-dimensional matrix for a gray level image. Memory-wise, that means that instead of using 3 bytes of memory storage for each colored pixel, we will need only 1 byte to store the number representing one pixel. The conversion is carried out using the following formula:

A = 0.2989·I(:,:,1) + 0.5870·I(:,:,2) + 0.1140·I(:,:,3).    (18)

In equation (18), A is a two-dimensional matrix representing the gray level image, I(:,:,1) is the Red intensity plane of the color image, I(:,:,2) is the Green intensity plane of the color image, and I(:,:,3) is the Blue intensity plane of the color image. The coefficients that the intensities are multiplied with in equation (18), namely 0.2989, 0.5870, and 0.1140, reflect the eye's sensitivity to red, green, and blue according to the NTSC (National Television System(s) Committee) standard and were obtained experimentally. A minimal sketch of this conversion is shown below.
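The sketch below applies equation (18) directly (an aside; the file name color_image.jpg is made up for the example, and Matlab's built-in rgb2gray uses the same NTSC weights):

% Convert an RGB image to a gray level image using equation (18).
I = double(imread('color_image.jpg'));            % hypothetical M-by-N-by-3 RGB input
A = 0.2989*I(:,:,1) + 0.5870*I(:,:,2) + 0.1140*I(:,:,3);
A = uint8(round(A));                              % back to 8-bit pixels: 0 (black) to 255 (white)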

As mentioned earlier, each pixel in the gray level representation (two-dimensional matrix) will be represented by 8 bits. This means that the intensity of each pixel will be in the range of 0 to 255, where 0 represents pure black and 255 represents pure white. This can be shown by a simple Matlab program that displays a 7 by 7 gray level image representing a black diagonal line, as pictured in Figure 4.

[Figure 4 shows the 7 by 7 image: a black diagonal line running from the top-right pixel to the bottom-left pixel on a white background.]

Figure 4: Grey Level Image of a Diagonal Black Line.

Here is the simple Matlab code that displays the image shown in Figure 4:

» A = [255 255 255 255 255 255 0;
       255 255 255 255 255 0 255;
       255 255 255 255 0 255 255;
       255 255 255 0 255 255 255;
       255 255 0 255 255 255 255;
       255 0 255 255 255 255 255;
       0 255 255 255 255 255 255];
» image(A);
» colormap(gray(256));
» imwrite(A, 'diag_line', 'JPG');
» A = imread('diag_line', 'JPG');

At the first double-prompt line a two-dimensional matrix A of size 7 x 7 (49 pixels) is created, where the 255 entries represent pure white pixel intensities and the 0 entries represent pure black pixel intensities. Next, the matrix is displayed using the Matlab image command. The colormap(gray(256)) command sets the color map of the image to black and white (gray level image). The imwrite command saves the image as the diag_line.jpg file. If we now want to import the file diag_line.jpg into the Matlab environment for any further processing and manipulation, we would use the imread command, as shown in the last line above.