Examples MAT-INF1100. Øyvind Ryan


Examples MAT-INF1100. Øyvind Ryan. February 9, 20

Example 0.1. Instead of converting 3763 to base 8, let us convert it to base 16. We find that 3763//16 = 235 with remainder 3. In the next step we find 235//16 = 14 with remainder 11. Finally we have 14//16 = 0 with remainder 14. Displayed in a table this becomes

3763 // 16 = 235, remainder 3
 235 // 16 =  14, remainder 11
  14 // 16 =   0, remainder 14

Recall that in the hexadecimal system the letters a-f usually denote the values 10-15. We have therefore found that the number 3763 is written eb3₁₆ in the hexadecimal numeral system.

Example 0.2. Let us continue to use the decimal number 3763 as an example, but now we want to convert it to binary form. If we perform the divisions and record the results as before, we find that 3763 = 111010110011₂. This example illustrates an important property of the binary numeral system: computations are simple, but long and tedious. This means that this numeral system is not so good for humans, as we tend to get bored and make sloppy mistakes. For computers, however, this is perfect, as computers do not make mistakes and work extremely fast.

Example 0.3. Let us convert the hexadecimal number 3c5₁₆ to binary. We have 3₁₆ = 0011₂, c₁₆ = 1100₂, 5₁₆ = 0101₂,
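The repeated-division conversions above are easy to express in code. A minimal Python sketch (the function name to_base is mine, not from the text):

```python
def to_base(n, b, digits="0123456789abcdef"):
    """Convert a non-negative integer n to base b (2 <= b <= 16)
    by repeated division; the remainders are the digits,
    least significant first."""
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, b)
        out.append(digits[r])
    return "".join(reversed(out))

print(to_base(3763, 16))  # -> eb3
print(to_base(3763, 2))   # -> 111010110011
```

Reversing the list of remainders at the end is exactly the "read the table from the bottom up" step of the hand computation.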

which means that 3c5₁₆ = 1111000101₂, where we have omitted the two leading zeros.

Example 0.4. Let us convert the number x = 0.3a8₁₆ to binary. From Table ?? we find 3₁₆ = 0011₂, a₁₆ = 1010₂, 8₁₆ = 1000₂, which means that 0.3a8₁₆ = 0.001110101000₂ = 0.001110101₂.

Example 0.5. To convert the binary number 0.1100100101101₂ to hexadecimal form we note from Table ?? that 1100₂ = c₁₆, 1001₂ = 9₁₆, 0110₂ = 6₁₆, 1000₂ = 8₁₆. Note that the last group of binary digits was not complete, so we added three zeros. From this we conclude that 0.1100100101101₂ = 0.c968₁₆.

Example 0.6. Let us find the representation of x = -6 in two's complement from (??) when n = 4. In this case |x| = 6, so 2^4 - |x| = 16 - 6 = 10. The binary representation of the decimal number 10 is 1010, and this is the representation of -6 in two's complement.

Example 0.7. Let us consider a concrete example of how the UTF-8 code of a code point is determined. The ASCII characters are not so interesting, since for these characters the UTF-8 code agrees with the code point. The Norwegian character Å is more challenging. If we check the Unicode charts, we find that this character has the code point 197, written c5₁₆ in hexadecimal. This is in the range covered by rule 2 in Fact ??. To determine the UTF-8 encoding we must find the binary representation of the code point. This is easy to deduce from the hexadecimal representation: the least significant numeral (5 in our case) determines the four least significant bits, and the most significant numeral (c) determines the four most significant bits. Since 5₁₆ = 0101₂ and c₁₆ = 1100₂, the code point in binary is 11000101₂,

where we have added three 0s to the left to get the eleven bits referred to by rule 2. We then distribute the eleven bits as in (??) and obtain the two bytes 11000011, 10000101. In hexadecimal this corresponds to the two values c3 and 85, so the UTF-8 encoding of Å is the two-byte number c385₁₆.

Example 0.8. On a typical calculator we compute x = √2, then y = x^2, and finally z = y - 2, i.e., the result should be z = (√2)^2 - 2, which of course is 0. The result reported by the calculator is a tiny nonzero number. This is a simple example of round-off error.

Example 0.9 (Examples of truncation). The number 0.… truncated to 4 digits is 0.…, while 128.4 truncated to 2 digits is 120, and … truncated to 4 digits is ….

Example 0.10 (Examples of rounding). The number 0.… rounded to 4 digits is 0.…. The result of rounding 128.4 to 2 digits is 130, while … rounded to 4 digits is ….

Example 0.11 (Standard case). Suppose that a = 5.645 and b = 7.821. We convert the numbers to normal form and obtain a = 0.5645 · 10^1, b = 0.7821 · 10^1. We add the two significands, 0.5645 + 0.7821 = 1.3466, so the correct answer is 1.3466 · 10^1. The last step is to convert this to normal form. In exact arithmetic this would yield the result 0.13466 · 10^2. However, this is not in normal form, since the significand has five digits. We therefore perform rounding, 0.13466 ≈ 0.1347, and get the final result 0.1347 · 10^2 = 13.47.

Example 0.12 (One large and one small number). If a = 42.34 and b = 0.0033, we convert the largest number to normal form: 42.34 = 0.4234 · 10^2.

The smaller number b is then written in the same form (same exponent): 0.0033 = 0.000033 · 10^2. The significand in this second number must be rounded to four digits, and the result of this is 0.0000. The addition therefore becomes 0.4234 · 10^2 + 0.0000 · 10^2 = 0.4234 · 10^2 = 42.34.

Example 0.13 (Subtraction of two similar numbers I). Consider next a case where a = 10.34 and b = -10.27 have opposite signs. We first rewrite the numbers in normal form: a = 0.1034 · 10^2, b = -0.1027 · 10^2. We then add the significands, which really amounts to a subtraction, 0.1034 - 0.1027 = 0.0007. Finally, we write the number in normal form and obtain

a + b = 0.0007 · 10^2 = 0.7000 · 10^-1. (1)

Example 0.14 (Subtraction of two similar numbers II). Suppose that a = 10/7 and b = -1.42. Conversion to normal form yields a = 0.1429 · 10^1, b = -0.1420 · 10^1. Adding the significands yields 0.1429 - 0.1420 = 0.0009. When this is converted to normal form, the result is

0.9000 · 10^-2, (2)

while the true result rounded to four correct digits is 0.8571 · 10^-2.
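The digit loss in the subtraction examples above can be reproduced with Python's decimal module, which lets us mimic arithmetic with a four-digit significand (a sketch; the setup is mine):

```python
from decimal import Decimal, getcontext

getcontext().prec = 4          # four-digit significands, as in the examples
a = Decimal(10) / Decimal(7)   # 10/7 rounds to 1.429 in 4-digit arithmetic
b = Decimal("1.42")
print(a)                       # 1.429
print(a - b)                   # 0.009: only one significant digit survives
```

The exact difference is 0.008571…, so the computed 0.009 has lost almost all its accuracy even though every individual operation was correctly rounded.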

Example 0.15. Consider the two numbers a = 23.57 and b = 6.759, which in normalised form are a = 0.2357 · 10^2, b = 0.6759 · 10^1. To multiply the numbers, we multiply the significands and add the exponents, before we normalise the result at the end, if necessary. In our example we obtain a · b = 0.15930963 · 10^3. The significand in the result must be rounded to four digits and yields the floating-point number 0.1593 · 10^3, i.e., the number 159.3.

Example 0.16. We use the same numbers as in Example 0.15, but now we perform the division a/b. We have a/b = (0.2357 · 10^2)/(0.6759 · 10^1) = (0.2357/0.6759) · 10^1, i.e., we divide the significands and subtract the exponents. The division yields 0.2357/0.6759 ≈ 0.34872. We round this to four digits and obtain the result a/b = 0.3487 · 10^1 = 3.487.

Example 0.17. Suppose we are going to evaluate the expression

1/(√(x^2 + 1) - x) (3)

for a large number like x = 10^8. The problem here is the fact that √(x^2 + 1) and x are almost equal when x is large: x = 10^8 = 100000000, while √(x^2 + 1) ≈ 100000000.000000005. Even with 64-bit floating-point numbers the square root will therefore be computed as 10^8, so the denominator in (3) will be computed as 0, and we get division by 0. This is a consequence of floating-point arithmetic, since the two terms

in the denominator are not really equal. A simple trick helps us out of this problem. Note that

1/(√(x^2 + 1) - x) = (√(x^2 + 1) + x)/((√(x^2 + 1) - x)(√(x^2 + 1) + x)) = (√(x^2 + 1) + x)/(x^2 + 1 - x^2) = √(x^2 + 1) + x.

This alternative expression can be evaluated for large values of x without any problems with cancellation. The result for x = 10^8 is √(x^2 + 1) + x ≈ 2 · 10^8, where all the digits are correct.

Example 0.18. For most values of x, the expression

1/(cos^2 x - sin^2 x) (4)

can be computed without problems. However, for x = π/4 the denominator is 0, so we get division by 0, since cos x and sin x are equal for this value of x. This means that when x is close to π/4 we will get cancellation. This can be avoided by noting that cos^2 x - sin^2 x = cos 2x, so the expression (4) is equivalent to 1/cos 2x. This can be evaluated without problems for all values of x for which the denominator is nonzero, as long as we do not get overflow.

Example 0.19. Consider the equation

x_{n+2} - (2/3)x_{n+1} - (1/3)x_n = 0, x_0 = 1, x_1 = 0. (5)

Since the two roots of the characteristic equation r^2 - 2r/3 - 1/3 = 0 are r_1 = 1 and r_2 = -1/3, the general solution of the difference equation is x_n = C + D(-1/3)^n. The initial conditions yield the equations C + D = 1, C - D/3 = 0, which have the solution C = 1/4 and D = 3/4. The solution of (5) is therefore

x_n = (1/4)(1 + 3(-1/3)^n).
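The cancellation described in Example 0.17, and the effect of the rewriting trick (for 1/(√(x²+1) − x) at x = 10⁸, as the formulas appear to read), are easy to check in Python:

```python
import math

x = 1e8
# sqrt(x*x + 1) rounds to exactly x in 64-bit arithmetic,
# so the denominator of 1/(sqrt(x^2+1) - x) cancels to zero.
denom = math.sqrt(x * x + 1.0) - x
print(denom)      # 0.0

# The rewritten expression sqrt(x^2+1) + x involves no cancellation.
stable = math.sqrt(x * x + 1.0) + x
print(stable)     # 200000000.0
```

In Python the direct formula would actually raise ZeroDivisionError, which makes the failure mode very visible.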

We observe that x_n tends to 1/4 as n tends to infinity. If we simulate equation (5) on a computer, the next term is computed by the formula x_{n+2} = (2x_{n+1} + x_n)/3. The division by 3 means that floating-point numbers are required to evaluate this expression. If we simulate the difference equation, we obtain approximate values (throughout this section we will use x̄_n to denote a computed version of x_n) which agree with the exact solution to 12 digits. In other words, numerical simulation in this case works very well and produces essentially the same result as the exact formula, even if floating-point numbers are used in the calculations.

Example 0.20. We consider the difference equation

x_{n+2} - (19/3)x_{n+1} + 2x_n = -10, x_0 = 2, x_1 = 8/3. (6)

The two roots of the characteristic equation are r_1 = 1/3 and r_2 = 6, so the general solution of the homogeneous equation is x_n^h = C · 3^-n + D · 6^n. To find a particular solution we try a solution x_n^p = A, which has the same form as the right-hand side. We insert this in the difference equation and find A = 3, so the general solution is

x_n = x_n^h + x_n^p = 3 + C · 3^-n + D · 6^n. (7)

If we enforce the initial conditions, we end up with the system of equations 2 = x_0 = 3 + C + D, 8/3 = x_1 = 3 + C/3 + 6D. This may be rewritten as

C + D = -1, (8)
C + 18D = -1, (9)

which has the solution C = -1 and D = 0. The final solution is therefore

x_n = 3 - 3^-n, (10)

which tends to 3 when n tends to infinity. Let us simulate the equation (6) on the computer. As in the previous example we have to divide by 3, so we have to use floating-point numbers. Some early terms in the computed sequence, x̄_5, x̄_10 and x̄_15, appear to approach 3 as they should. However, some later values, x̄_20, x̄_30 and x̄_40, show that something goes wrong: at least the last two of these are obviously completely wrong!

Example 0.21. We consider the third-order difference equation

x_{n+3} - (16/3)x_{n+2} + (17/3)x_{n+1} - (4/3)x_n = -10 · 2^n, x_0 = 2, x_1 = 5, x_2 = 11. (11)

The coefficients have been chosen so that the roots of the characteristic equation are r_1 = 1/3, r_2 = 1 and r_3 = 4. To find a particular solution we try x_n^p = A · 2^n. If this is inserted in the equation we find A = 3, so the general solution is

x_n = 3 · 2^n + B · 3^-n + C + D · 4^n. (12)

The initial conditions force B = 0, C = -1 and D = 0, so the exact solution is

x_n = 3 · 2^n - 1. (13)

The discussion above shows that this is bound to lead to problems. Because of round-off errors, the coefficients B and D will not be exactly 0 when the equation is simulated. Instead we will have x̄_n = 3 · 2^n + ε_1 · 3^-n + (-1 + ε_2) + ε_3 · 4^n. Even if ε_3 is small, the term ε_3 · 4^n will dominate when n becomes large. This is confirmed if we do the simulations: the computed value x̄_100 is completely different from the exact value 3 · 2^100 ≈ 3.8 · 10^30 (rounded to two digits).
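The instability in the second-order example above is easy to reproduce. Assuming the recurrence reads x_{n+2} = (19/3)x_{n+1} − 2x_n − 10 with x_0 = 2, x_1 = 8/3 (some digits are hard to make out in the source, so treat these as assumptions), a short simulation shows the blow-up:

```python
x0, x1 = 2.0, 8.0 / 3.0
for n in range(40):
    x0, x1 = x1, (19.0 / 3.0) * x1 - 2.0 * x0 - 10.0

# The exact solution tends to 3, but round-off errors excite the 6^n
# mode of the general solution, so the computed values blow up.
print(x1)
```

Since 8/3 is not exactly representable in binary, the simulated sequence starts with a tiny component along the 6^n solution, and that component is multiplied by roughly 6 at every step.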

Example 0.22. Suppose we have the text x = DBACDBD. We note that the frequencies of the four symbols are f(A) = 1, f(B) = 2, f(C) = 1 and f(D) = 3. We assign the shortest codes to the most frequent symbols, c(D) = 0, c(B) = 1, c(C) = 01, c(A) = 10. If we replace the symbols in x by their codes we obtain the compressed text z = 011001010, altogether 9 bits, instead of the 56 bits (7 bytes) required by a standard text representation. However, we now have a major problem: how can we decode and find the original text from this compressed version? Do the first two bits represent the two symbols D and B, or is it the symbol C? One way to get round this problem is to have a special code that we can use to separate the symbols. But this is not a good solution, as it would take up additional storage space.

Example 0.23. Consider the same four-symbol text x = DBACDBD as in Example 0.22. We now use the codes

c(D) = 1, c(B) = 01, c(C) = 001, c(A) = 000. (14)

We can then store the text as

z = 1010000011011, (15)

altogether 13 bits, while a standard encoding with one byte per character would require 56 bits. Note also that we can easily decipher the code, since the codes have the prefix property. The first bit is 1, which must correspond to a D, since this is the only character with a code that starts with a 1. The next bit is 0, and since this is the start of several codes we read one more bit. The only character with a code that starts with 01 is B, so this must be the next character. The next bit is 0, which does not uniquely identify a character, so we read one more bit. The code 00 does not identify a character either, but with one more bit we obtain the code 000, which corresponds to the character A. We can obviously continue in this way and decipher the complete compressed text.

Example 0.24. In Figure ?? the tree in Figure ?? has been turned into a Huffman tree. The tree has been constructed from the text CCDACBDC with the alphabet {A, B, C, D} and frequencies f(A) = 1, f(B) = 1, f(C) = 4 and f(D) = 2.
It is easy to see that the weights have the properties required for a Huffman tree, and by following the edges we see that the Huffman codes are given by c(C) = 0, c(D) = 10, c(A) = 110 and c(B) = 111. Note in particular that the root of the tree has weight equal to the length of the text.
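The prefix-property decoding argument above can be sketched in Python, taking the codes of the DBACDBD example to be c(D) = 1, c(B) = 01, c(C) = 001, c(A) = 000 as that example appears to intend (helper names are mine):

```python
codes = {"D": "1", "B": "01", "C": "001", "A": "000"}
decode = {v: k for k, v in codes.items()}

def decode_text(bits):
    # Read bits left to right; thanks to the prefix property,
    # the first match is always the intended symbol.
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in decode:
            out.append(decode[cur])
            cur = ""
    return "".join(out)

z = "".join(codes[s] for s in "DBACDBD")
print(z, len(z))         # 1010000011011 13
print(decode_text(z))    # DBACDBD
```

Because no code word is a prefix of another, the greedy left-to-right scan can never commit to a wrong symbol.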

Example 0.25. Let us try out Algorithm ?? on the text "then the hen began to eat". This text consists of 25 characters, including the five spaces. We first determine the frequencies of the different characters by counting. We find the collection of one-node trees

[one-node trees with weights: t 4, h 3, e 5, n 3, b 1, g 1, a 2, o 1, space 5]

where the last character denotes the space character. Since b and g are two characters with the lowest frequency, we combine them into a tree,

[b and g combined under a node of weight 2]

The two trees with the lowest weights are now the character o and the tree we formed in the last step. If we combine these we obtain

[o combined with the (b, g) tree under a node of weight 3]

Now we have several choices. We choose to combine a and h,

[a and h combined under a node of weight 5]

At the next step we combine the two trees with weight 3,

[n combined with the (o, (b, g)) tree under a node of weight 6]

Next we combine the t and the e,

[t and e combined under a node of weight 9]

We now have two trees with weight 5 that must be combined,

[the space character combined with the (h, a) tree under a node of weight 10]

[Figure 1: The Huffman tree for the text "then the hen began to eat".]

Again we combine the two trees with the smallest weights,

[the trees of weight 6 and 9 combined under a node of weight 15]

By combining these two trees we obtain the final Huffman tree in Figure 1. From this we can read off the Huffman codes as c(h) = 000, c(a) = 001, c( ) = 01, c(b) = 10000, c(g) = 10001, c(o) = 1001, c(n) = 101, c(t) = 110, c(e) = 111,

so we see that the Huffman coding of the text "then the hen began to eat" is

110 000 111 101 01 110 000 111 01 000 111 101 01 10000 111 10001 001 101 01 110 1001 01 111 001 110

The spaces have been added to make the code easier to read; on a computer these will not be present. The original text consists of 25 characters including the spaces. Encoding this with standard eight-bit encodings like ISO Latin 1 or UTF-8 would require 200 bits. Since there are only nine symbols we could use a shorter fixed-width encoding for this particular text. This would require five bits per symbol and would reduce the total length to 125 bits. In contrast, the Huffman encoding only requires 75 bits.

Example 0.26. Let us return to Example 0.25 and compute the entropy in this particular case. From the frequencies we obtain the probabilities p(t) = 4/25, p(h) = 3/25, p(e) = 1/5, p(n) = 3/25, p(b) = 1/25, p(g) = 1/25, p(a) = 2/25, p(o) = 1/25, p( ) = 1/5. We can then compute the entropy to be H ≈ 2.93. If we had a compression algorithm that could compress the text down to this number of bits per symbol, we could represent our 25-symbol text with 74 bits. This is only one bit less than what we obtained in Example 0.25, so Huffman coding is very close to the best we can do for this particular text.

Example 0.27. Suppose that we have a two-symbol alphabet A = {0, 1} with the probabilities p(0) = 0.9 and p(1) = 0.1. Huffman coding will then just use the obvious codes c(0) = 0 and c(1) = 1, so the average number of bits per symbol is 1, i.e., there will be no compression at all. If we compute the entropy we obtain H = -0.9 log₂ 0.9 - 0.1 log₂ 0.1 ≈ 0.47. So while Huffman coding gives no compression, there may be coding methods that will reduce the file size to less than half the original size.

Example 0.28 (Determining an arithmetic code). We consider the two-symbol text 00100. As for Huffman coding we first need to determine the probabilities of the two symbols, which we find to be p(0) = 0.8 and p(1) = 0.2.
The idea is to allocate different parts of the interval [0, 1) to the different symbols, and let the length of the subinterval be proportional to the probability of the symbol. In our case we allocate the interval [0, 0.8) to 0 and the interval [0.8, 1) to 1. Since

our text starts with 0, we know that the floating-point number which is going to represent our text must lie in the interval [0, 0.8); see the first line in Figure ??. We then split the two subintervals according to the two probabilities again. If the final floating-point number ends up in the interval [0, 0.64), the text starts with 00; if it lies in [0.64, 0.8), the text starts with 01; if it lies in [0.8, 0.96), the text starts with 10; and if the number ends up in [0.96, 1), the text starts with 11. This is illustrated in the second line of Figure ??. Our text starts with 00, so the arithmetic code we are seeking must lie in the interval [0, 0.64). At the next level we split each of the four subintervals in two again, as shown in the third line in Figure ??. Since the third symbol in our text is 1, the arithmetic code must lie in the interval [0.512, 0.64). We next split this interval in the two subintervals [0.512, 0.6144) and [0.6144, 0.64). Since the fourth symbol is 0, we select the first interval. This interval is then split into [0.512, 0.59392) and [0.59392, 0.6144). The final symbol of our text is 0, so the arithmetic code must lie in the interval [0.512, 0.59392). We know that the arithmetic code of our text must lie in the half-open interval [0.512, 0.59392), but it does not matter which of the numbers in the interval we use. The code is going to be handled by a computer, so it must be represented in the binary numeral system, with a finite number of bits. We know that any number of this kind must be of the form i/2^k, where k is a positive integer and i is an integer in the range 0 ≤ i < 2^k. Such numbers are called dyadic numbers. We obviously want the code to be as short as possible, so we are looking for the dyadic number with the smallest denominator that lies in the interval [0.512, 0.59392). In our simple example it is easy to see that this number is 9/16 = 0.5625. In binary this number is 0.1001₂, so the arithmetic code for the text 00100 is 1001.
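The interval narrowing of Example 0.28 can be mirrored in a few lines of Python (a sketch with p(0) = 0.8, p(1) = 0.2; the function name is mine):

```python
def narrow(text):
    # Repeatedly shrink [lo, hi): the symbol 0 keeps the first 80 %
    # of the current interval, the symbol 1 keeps the last 20 %.
    lo, hi = 0.0, 1.0
    for s in text:
        w = hi - lo
        if s == "0":
            hi = lo + 0.8 * w
        else:
            lo = lo + 0.8 * w
    return lo, hi

lo, hi = narrow("00100")
print(lo, hi)              # approximately [0.512, 0.59392)
print(lo <= 9 / 16 < hi)   # True: the dyadic code 9/16 = 0.1001_2 lies inside
```

Any number in the final interval identifies the text; 9/16 is simply the dyadic number in it with the smallest denominator.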
Example 0.29. Suppose we have the text x = ACBBCAABAA and we want to encode it with arithmetic coding. We first note that the probabilities are given by p(A) = 0.5, p(B) = 0.3, p(C) = 0.2, so the cumulative probabilities are F(A) = 0.5, F(B) = 0.8 and F(C) = 1.0. This means that the interval [0, 1) is split into the three subintervals [0, 0.5), [0.5, 0.8), [0.8, 1). The first symbol is A, so the first subinterval is [a_1, b_1) = [0, 0.5). The second symbol is C, so we must find the part of [a_1, b_1) that corresponds to C. The mapping from [0, 1) to [0, 0.5) is given by g_2(z) = 0.5z, so [0.8, 1) is mapped to [a_2, b_2) = [g_2(0.8), g_2(1)) = [0.4, 0.5).

The third symbol is B, which corresponds to the interval [0.5, 0.8). We map [0, 1) to the interval [a_2, b_2) with the function g_3(z) = a_2 + z(b_2 - a_2) = 0.4 + 0.1z, so [0.5, 0.8) is mapped to [a_3, b_3) = [g_3(0.5), g_3(0.8)) = [0.45, 0.48). Let us now write down the rest of the computations more schematically in a table:

g_4(z) = 0.45 + 0.03z,          x_4 = B,  [a_4, b_4) = [g_4(0.5), g_4(0.8)) = [0.465, 0.474),
g_5(z) = 0.465 + 0.009z,        x_5 = C,  [a_5, b_5) = [g_5(0.8), g_5(1))   = [0.4722, 0.474),
g_6(z) = 0.4722 + 0.0018z,      x_6 = A,  [a_6, b_6) = [g_6(0), g_6(0.5))   = [0.4722, 0.4731),
g_7(z) = 0.4722 + 0.0009z,      x_7 = A,  [a_7, b_7) = [g_7(0), g_7(0.5))   = [0.4722, 0.47265),
g_8(z) = 0.4722 + 0.00045z,     x_8 = B,  [a_8, b_8) = [g_8(0.5), g_8(0.8)) = [0.472425, 0.47256),
g_9(z) = 0.472425 + 0.000135z,  x_9 = A,  [a_9, b_9) = [g_9(0), g_9(0.5))   = [0.472425, 0.4724925),
g_10(z) = 0.472425 + 0.0000675z, x_10 = A, [a_10, b_10) = [g_10(0), g_10(0.5)) = [0.472425, 0.47245875).

The midpoint M of this final interval is M = (0.472425 + 0.47245875)/2 = 0.472441875, and the arithmetic code is M rounded to ⌈log₂(p(A)^-5 p(B)^-3 p(C)^-2)⌉ + 1 = 16 bits. The arithmetic code is therefore the number C(x) ≈ 0.47244263, but we just store the 16 bits 0111100011110010. In this example the arithmetic code therefore uses 1.6 bits per symbol. In comparison the entropy is 1.49 bits per symbol.

Example 0.30 (Decoding of an arithmetic code). Suppose we are given the arithmetic code 1001 from Example 0.28 together with the probabilities p(0) = 0.8 and p(1) = 0.2. We also assume that the length of the text is known, as well as how the probabilities were mapped into the interval [0, 1); this is the typical output of a program for arithmetic coding. Since we are going to do this manually, we start by converting the number to decimal; if we were to program arithmetic coding we would do everything in binary arithmetic.

The arithmetic code 1001 corresponds to the binary number 0.1001₂, which is the decimal number z_1 = 0.5625. Since this number lies in the interval [0, 0.8) we know that the first symbol is x_1 = 0. We now map the interval [0, 0.8) and the code back to the interval [0, 1) with the function h_1(y) = y/0.8. We find that the code becomes z_2 = h_1(z_1) = z_1/0.8 = 0.703125 relative to the new interval. This number lies in the interval [0, 0.8), so the second symbol is x_2 = 0. Once again we map the current interval and arithmetic code back to [0, 1) with the function h_2 and obtain z_3 = h_2(z_2) = z_2/0.8 = 0.87890625. This number lies in the interval [0.8, 1), so our third symbol must be x_3 = 1. At the next step we must map the interval [0.8, 1) to [0, 1). From Observation ?? we see that this is done by the function h_3(y) = (y - 0.8)/0.2. This means that the code is mapped to z_4 = h_3(z_3) = (z_3 - 0.8)/0.2 = 0.39453125. This brings us back to the interval [0, 0.8), so the fourth symbol is x_4 = 0. This time we map back to [0, 1) with the function h_4(y) = y/0.8 and obtain z_5 = h_4(z_4) = 0.39453125/0.8 = 0.4931640625. Since we remain in the interval [0, 0.8), the fifth and last symbol is x_5 = 0, so the original text was 00100.

Example 0.31. Let us test a naive compression strategy based on the above idea. The plots in Figure ?? illustrate the principle. A signal is shown in (a) and its DCT in (b). In (d) all values of the DCT with absolute value smaller than 0.02 have been set to zero. The signal can then be reconstructed with the inverse DCT of Theorem ??; the result of this is shown in (c). The signals in (a) and (c) visually look almost the same, even though the signal in (c) can be represented with less than 2 % of the information present in (a). We test this compression strategy on a data set that consists of … points. We compute the DCT and set all values smaller than a suitable tolerance to 0. With a tolerance of 0.04, a total of … values are set to zero. When we then

reconstruct the sound with the inverse DCT, we obtain a signal that differs at most 0.09 from the original signal. We can store the signal by storing a gzipped version of the DCT values (as 32-bit floating-point numbers) of the perturbed signal. This gives a file with 622 bytes, which is 88 % of the gzipped version of the original data.

Example 0.32. Suppose we want to find the zero √2 with error less than 10^-10 by solving the equation f(x) = x^2 - 2 = 0. We have f(1) = -1 and f(2) = 2, so we can use the bisection method, starting with the interval [1, 2]. To get the error smaller than 10^-10, we know that N should be larger than (ln(b - a) - ln ε)/ln 2 = 10 ln 10/ln 2 ≈ 33.2. Since N needs to be an integer, this shows that N = 34 is guaranteed to make the error smaller than 10^-10. If we run Algorithm ?? we find m_0 = 1.5, m_1 = 1.25, m_2 = 1.375, …, m_34 ≈ 1.41421356237. We have √2 ≈ 1.41421356237 with eleven correct digits, and the actual error in m_34 is smaller still.

Example 0.33. Let us see if the predictions above happen in practice. We test the secant method on the function f(x) = x^2 - 2 and attempt to compute the zero c = √2 ≈ 1.41421356237310. We start with x_0 = 2 and x_1 = 1.5 and obtain the approximations x_2 ≈ 1.4286, x_3 ≈ 1.4146, x_4 ≈ 1.414216, x_5 ≈ 1.41421356, whose errors decrease faster and faster, down to about 10^-10 after five steps. This confirms the claim in Observation ??.

Example 0.34. The equation is f(x) = x^2 - 2, which has the solution c = √2 ≈ 1.41421356237310. If we run Newton's method with the initial value x_0 = 1.7,

we find x_1 ≈ 1.4382353 (error ≈ 2.4 · 10^-2), x_2 ≈ 1.4144150 (error ≈ 2.0 · 10^-4), x_3 ≈ 1.4142136 (error ≈ 1.4 · 10^-8), after which the error is at round-off level. We see that although we only use one starting value, which is further from the root than the best of the two starting values used with the secant method, we still end up with a smaller error than with the secant method after five iterations.

Example 0.35. As mentioned in the beginning of this chapter, it may be that the position of an object is known only at isolated instances in time. Assume that we have a file with GPS data. In the file we are looking at, the positions are stored in terms of elevation, latitude, and longitude. Essentially these are what we call spherical coordinates. From the spherical coordinates one can easily compute Cartesian coordinates, and also coordinates in a system where the three axes point towards east, north and upwards, respectively, as in a 2D or 3D map. Accompanying time instances are also stored in the file. Since the derivative of the position with respect to time is the velocity, with the time instances one can approximate the speed by taking the absolute value of the Newton difference quotient. The position is, however, given in terms of the three coordinates. If we call the corresponding samples x_n, y_n, z_n, and we apply Newton's difference quotient to each of these at all time instances, we get vectors v_{x,n}, v_{y,n}, v_{z,n}, representing approximations to the velocity in the different directions. The velocity vector at time instance n is (v_{x,n}, v_{y,n}, v_{z,n}), and we define the speed at time instance n as |(v_{x,n}, v_{y,n}, v_{z,n})| = √(v_{x,n}^2 + v_{y,n}^2 + v_{z,n}^2). Let us test this on some actual GPS data. In Figure 2(a) we have plotted the GPS data in a coordinate system where the axes represent the east and north directions. In this system we cannot see the elevation information in the data. In (b) we have plotted the data in a system where the axes represent the east and upward directions instead.
Finally, in (c) we have plotted the speed using the approximation we obtain from Newton's difference quotient. When visualized together with geographical data, such as colour indicating sea, forest, or inhabited areas, this gives very useful information.

Example 0.36. Assume that we read samples from a digital sound file and compute the Newton difference quotient for all samples in the file. The x-axis now represents time, and h is the sampling period (the difference in time between two samples). We can consider the set of all Newton difference quotients

[Figure 2: Experiments with GPS data. (a) The x-axis points east, the y-axis points north. (b) The x-axis points east, the y-axis points upwards. (c) Time plotted against speed.]

in a file as samples in another sound, and we can listen to it. When we do this we hear a sound where the bass has been reduced. To see why, recall that in Chapter ?? we argued that we could reduce the bass in sound by using a row in Pascal's triangle with alternating signs, and (1, -1) is obtained from the first row in Pascal's triangle in this way. But f(a + h) - f(a) = h · (f(a + h) - f(a))/h, so the Newton difference quotient is equivalent to this procedure for reducing bass, up to multiplication by a constant. In summary, when we differentiate a sound we reduce the bass in the sound.

Example 0.37. Let us test the approximation (??) for the function f(x) = sin x at a = 0.5 (using 64-bit floating-point numbers). In this case we know that the exact derivative is f'(x) = cos x, so f'(a) ≈ 0.8775825619 with 10 correct digits. This makes it easy to check the accuracy of the numerical method. We try

with a few values of h and tabulate the quotient (f(a + h) - f(a))/h together with the error E(f; a, h) = f'(a) - (f(a + h) - f(a))/h. We observe that the approximation improves with decreasing h, as expected. More precisely, when h is reduced by a factor of 10, the error is also reduced by a factor of 10.

Example 0.38. Let us check that the error formula (??) agrees with the numerical values in Example 0.37. We have f''(x) = -sin x, so the right-hand side of (??) becomes E(sin; 0.5, h) = (h/2) sin ξ_h, where ξ_h ∈ (0.5, 0.5 + h). We do not know the exact value of ξ_h, but for the values of h in question, we know that sin x is monotone on this interval. For h = 0.1 we therefore have that the error must lie in the interval [0.05 sin 0.5, 0.05 sin 0.6] = [0.02397, 0.02823], and we see that the right end point of the interval is the maximum value of the right-hand side. When h is reduced by a factor of 10, the number h/2 is reduced by the same factor, while ξ_h is restricted to an interval whose width is also reduced by a factor of 10. As h becomes even smaller, the number ξ_h will approach 0.5, so sin ξ_h will approach the lower value sin 0.5 ≈ 0.4794. For h = 10^-n, the error will therefore tend to (10^-n/2) sin 0.5 ≈ 0.24 · 10^-n, which is in close agreement with the numbers computed in Example 0.37.

Example 0.39. Recall that we estimated the derivative of f(x) = sin x at a = 0.5 and that the correct value with ten digits is f'(0.5) ≈ 0.8775825619. If we check

values of h equal to 10^-7 and smaller, the computed quotient (f(a + h) - f(a))/h and the error E(f; a, h) show very clearly that something quite dramatic happens: ultimately, when we come to h = 10^-17, the derivative is computed as zero.

Example 0.40. We test the approximation (??) with the same values of h as in Examples 0.37 and 0.39. Recall that f'(0.5) ≈ 0.8775825619 with 10 correct digits. If we tabulate the symmetric quotient (f(a + h) - f(a - h))/(2h) and its error E(f; a, h) and compare with Examples 0.37 and 0.39, the errors are generally smaller for the same value of h. In particular we note that when h is reduced by a factor of 10, the error is reduced by a factor of 100, at least as long as h is not too small. However, when h becomes smaller than about 10^-6, the error starts to increase. It therefore seems that the truncation error is smaller than for the original method based on Newton's quotient but, as before, the round-off error makes it impossible to get accurate results for small values of h. The optimal value of h seems to be h ≈ 10^-6, which is larger than for the first method, but the error is then about 10^-12, which is smaller than the best we could do with the asymmetric Newton's quotient.
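The behaviour described in the difference-quotient examples above is easy to observe directly. A small Python experiment comparing the one-sided and symmetric quotients for f(x) = sin x at a = 0.5:

```python
import math

a = 0.5
exact = math.cos(a)   # the exact derivative of sin at a
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    one_sided = (math.sin(a + h) - math.sin(a)) / h
    symmetric = (math.sin(a + h) - math.sin(a - h)) / (2 * h)
    # The one-sided error shrinks roughly like h,
    # the symmetric error roughly like h^2.
    print(h, abs(one_sided - exact), abs(symmetric - exact))
```

Pushing h much further down (say below 10^-8) makes both errors grow again, since round-off in the subtraction then dominates the truncation error.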

Example 0.41. Let us try the midpoint rule on an example. As usual, it is wise to test on an example where we know the answer, so that we can easily check the quality of the method. We choose the integral ∫₀¹ cos x dx = sin 1 ≈ 0.8414709848, where the exact answer is easy to compute by traditional, symbolic methods. To test the method, we split the interval into 2^k subintervals, for k = 1, 2, …, 10, i.e., we halve the step length each time, and tabulate I_mid(h) together with the error ∫₀¹ f(x) dx - I_mid(h). Note that each time the step length is halved, the error is reduced by a factor of about 4.

Example 0.42. We test the trapezoidal rule on the same example as the midpoint rule, ∫₀¹ cos x dx = sin 1 ≈ 0.8414709848. As in Example 0.41 we split the interval into 2^k subintervals, for k = 1, 2, …, 10.

The resulting approximations I_trap(h) are tabulated together with the error ∫₀¹ f(x) dx - I_trap(h). We note that each time the step length is halved, the error is reduced by a factor of about 4, just as for the midpoint rule. But we also note that even though we now use two function values in each subinterval to estimate the integral, the error is actually twice as big as it was for the midpoint rule.

Example 0.43. Let us test Simpson's rule on the same example as the midpoint rule and the trapezoidal rule, ∫₀¹ cos x dx = sin 1 ≈ 0.8414709848. As in Example 0.41, we split the interval into 2^k subintervals, for k = 1, 2, …, 10, and tabulate I_Simp(h) together with the error,

where the error is defined by ∫₀¹ f(x) dx - I_Simp(h). When we compare this table with Examples 0.41 and 0.42, we note that the error is now much smaller. We also note that each time the step length is halved, the error is reduced by a factor of about 16. In other words, by introducing one more function evaluation in each subinterval, we have obtained a method with much better accuracy. This will be quite evident when we analyse the error below.

Example 0.44. We consider the differential equation

x' = t^3 - 2x, x(0) = 0.25. (16)

Suppose we want to compute an approximation to the solution at the points t_1 = 0.1, t_2 = 0.2, …, t_10 = 1, i.e., the points t_k = kh for k = 1, 2, …, 10, with h = 0.1. We start with the initial point (t_0, x_0) = (0, 0.25) and note that x_0' = x'(0) = 0^3 - 2x(0) = -0.5. The tangent T_0(t) to the solution at t = 0 is therefore given by T_0(t) = x(0) + t x'(0) = 0.25 - 0.5t. To advance the approximate solution to t_1 = 0.1, we just follow this tangent: x(0.1) ≈ x_1 = T_0(0.1) = 0.25 - 0.5 · 0.1 = 0.2. At (t_1, x_1) = (0.1, 0.2) the derivative is x_1' = f(t_1, x_1) = t_1^3 - 2x_1 = 0.001 - 0.4 = -0.399, so the tangent at t_1 is T_1(t) = x_1 + (t - t_1)x_1' = x_1 + (t - t_1)f(t_1, x_1) = 0.2 - (t - 0.1) · 0.399. The approximation at t_2 is therefore x(0.2) ≈ x_2 = T_1(0.2) = x_1 + h f(t_1, x_1) = 0.2 - 0.1 · 0.399 = 0.1601. If we continue in the same way, we find (we only print the first 4 decimals) x_3 = 0.1289, x_4 = 0.1058, x_5 = 0.0910, x_6 = 0.0853, x_7 = 0.0899, x_8 = 0.1062, x_9 = 0.1362, x_10 = 0.1818. This is illustrated in Figure ?? where the computed points are connected by straight line segments.
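Euler's method as used in the example above takes only a few lines of code. A Python sketch, assuming the equation reads x' = t³ − 2x with x(0) = 0.25 and h = 0.1 (some digits are partly illegible in the source):

```python
def euler(f, t0, x0, h, n):
    # Follow the tangent over each step of length h (Euler's method).
    t, x = t0, x0
    for _ in range(n):
        x = x + h * f(t, x)
        t = t + h
    return x

f = lambda t, x: t**3 - 2 * x     # assumed right-hand side
print(round(euler(f, 0.0, 0.25, 0.1, 10), 4))   # -> 0.1818
```

Halving h and doubling n would roughly halve the error at t = 1, reflecting the first-order accuracy of the method.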

Example 0.45. Let us consider the differential equation

x' = f(t, x) = F_1(t, x) = t - 1/(1 + x), x(0) = 1, (7)

which we want to solve on the interval [0, 1]. To illustrate the method, we choose a large step length h = 0.5 and attempt to find an approximate numerical solution at t = 0.5 and t = 1 using a quadratic Taylor method. From (7) we obtain

x''(t) = F_2(t, x) = 1 + x'(t)/(1 + x(t))^2. (8)

To compute an approximation to x(h) we use the quadratic Taylor polynomial

x(h) ≈ x_1 = x(0) + h x'(0) + (h^2/2) x''(0).

The differential equation (7) and (8) give us the values

x(0) = x_0 = 1, x'(0) = x'_0 = 0 - 1/2 = -1/2, x''(0) = x''_0 = 1 - 1/8 = 7/8,

which leads to the approximation

x(h) ≈ x_1 = x_0 + h x'_0 + (h^2/2) x''_0 = 1 - h/2 + 7h^2/16 = 0.859375.

To prepare for the next step we need to determine approximations to x'(h) and x''(h) as well. From the differential equation (7) and (8) we find

x'(h) ≈ x'_1 = F_1(t_1, x_1) = t_1 - 1/(1 + x_1) ≈ -0.03781513,
x''(h) ≈ x''_1 = F_2(t_1, x_1) = 1 + x'_1/(1 + x_1)^2 ≈ 0.98906216,

rounded to eight digits. From this we can compute the approximation

x(1) = x(2h) ≈ x_2 = x_1 + h x'_1 + (h^2/2) x''_1 ≈ 0.96410021.

The result is shown in figure ??a.
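One step of the quadratic Taylor method can be sketched in a few lines. This is my own illustration: the second derivative is obtained by differentiating F_1 along the solution, exactly the formula labelled (8) in this example.

```python
def taylor2_step(t, x, h):
    """One quadratic Taylor step for x' = t - 1/(1 + x).

    Uses x(t + h) ≈ x + h x' + (h^2/2) x'', where
    x'  = F1(t, x) = t - 1/(1 + x) and
    x'' = F2(t, x) = 1 + x'/(1 + x)^2 (differentiate F1 along the solution).
    """
    dx = t - 1.0 / (1.0 + x)
    d2x = 1.0 + dx / (1.0 + x) ** 2
    return x + h * dx + 0.5 * h ** 2 * d2x

h = 0.5
x1 = taylor2_step(0.0, 1.0, h)  # = 1 - h/2 + 7h^2/16 = 0.859375
x2 = taylor2_step(h, x1, h)     # ≈ 0.9641
print(x1, x2)
```

Note that the second step reuses the same formulas at (t_1, x_1), so no extra derivation is needed once F_1 and F_2 are written down.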

Example 0.46 (Euler's method for a system). We consider the equations in example ??,

x' = f(t, x), x(a) = x_0,

where

f(t, x) = (f_1(t, x_1, x_2, x_3), f_2(t, x_1, x_2, x_3), f_3(t, x_1, x_2, x_3))
        = (x_1 x_2 + cos x_3, 2 - t^2 + x_3^2 - x_2, sin t - x_1 + x_2).

Euler's method is easily generalised to vector equations as

x_{k+1} = x_k + h f(t_k, x_k), k = 0, 1, ..., n - 1. (9)

If we write out the three components explicitly, this becomes

x_1^{k+1} = x_1^k + h f_1(t_k, x_1^k, x_2^k, x_3^k) = x_1^k + h(x_1^k x_2^k + cos x_3^k),
x_2^{k+1} = x_2^k + h f_2(t_k, x_1^k, x_2^k, x_3^k) = x_2^k + h(2 - t_k^2 + (x_3^k)^2 - x_2^k),
x_3^{k+1} = x_3^k + h f_3(t_k, x_1^k, x_2^k, x_3^k) = x_3^k + h(sin t_k - x_1^k + x_2^k), (10)

for k = 0, 1, ..., n - 1, with the starting values (a, x_1^0, x_2^0, x_3^0) given by the initial condition. Although they look rather complicated, these formulas can be programmed quite easily. The trick is to make use of the vector notation in (9), since it nicely hides the details in (10).

Example 0.47 (System of higher order equations). Consider the system of differential equations given by

x'' = t + x' + y', x(0) = 1, x'(0) = 2,
y''' = x' - 2y + x, y(0) = 1, y'(0) = 1, y''(0) = 2.

We introduce the new functions x_1 = x, x_2 = x', y_1 = y, y_2 = y', and y_3 = y''. Then the above system can be written as

x_1' = x_2, x_1(0) = 1,
x_2' = t + x_2 + y_2, x_2(0) = 2,
y_1' = y_2, y_1(0) = 1,
y_2' = y_3, y_2(0) = 1,
y_3' = x_2 - 2y_1 + x_1, y_3(0) = 2.
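The point about the vector notation (9) can be made concrete in code. The sketch below is my own illustration: the components of f follow this example as I read it, while the initial values (1, 1, 1) are an arbitrary assumption, since the example inherits its initial condition from example ??.

```python
import math

def f(t, x):
    # right-hand side of the system, one component per unknown
    x1, x2, x3 = x
    return (x1 * x2 + math.cos(x3),
            2 - t**2 + x3**2 - x2,
            math.sin(t) - x1 + x2)

def euler_system(f, t0, x0, h, n):
    """Euler's method in vector form: x_{k+1} = x_k + h f(t_k, x_k).

    The componentwise formulas never appear explicitly; the vector
    update hides them, which is exactly the point of notation (9).
    """
    t, x = t0, tuple(x0)
    for _ in range(n):
        fx = f(t, x)
        x = tuple(xi + h * fi for xi, fi in zip(x, fx))
        t += h
    return t, x

# Ten steps of length h = 0.1 from assumed initial values (1, 1, 1):
t_end, x_end = euler_system(f, 0.0, (1.0, 1.0, 1.0), 0.1, 10)
```

The same `euler_system` also handles the rewritten system of example 0.47: after the reduction to first order, it is just a five-component vector equation with its own f.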


More information

8.7 MacLaurin Polynomials

8.7 MacLaurin Polynomials 8.7 maclaurin polynomials 67 8.7 MacLaurin Polynomials In this chapter you have learned to find antiderivatives of a wide variety of elementary functions, but many more such functions fail to have an antiderivative

More information

INTRODUCTION TO COMPUTATIONAL MATHEMATICS

INTRODUCTION TO COMPUTATIONAL MATHEMATICS INTRODUCTION TO COMPUTATIONAL MATHEMATICS Course Notes for CM 271 / AMATH 341 / CS 371 Fall 2007 Instructor: Prof. Justin Wan School of Computer Science University of Waterloo Course notes by Prof. Hans

More information

1. Introduction to commutative rings and fields

1. Introduction to commutative rings and fields 1. Introduction to commutative rings and fields Very informally speaking, a commutative ring is a set in which we can add, subtract and multiply elements so that the usual laws hold. A field is a commutative

More information

SCHOOL OF MATHEMATICS MATHEMATICS FOR PART I ENGINEERING. Self-paced Course

SCHOOL OF MATHEMATICS MATHEMATICS FOR PART I ENGINEERING. Self-paced Course SCHOOL OF MATHEMATICS MATHEMATICS FOR PART I ENGINEERING Self-paced Course MODULE ALGEBRA Module Topics Simplifying expressions and algebraic functions Rearranging formulae Indices 4 Rationalising a denominator

More information

Partial Fractions. June 27, In this section, we will learn to integrate another class of functions: the rational functions.

Partial Fractions. June 27, In this section, we will learn to integrate another class of functions: the rational functions. Partial Fractions June 7, 04 In this section, we will learn to integrate another class of functions: the rational functions. Definition. A rational function is a fraction of two polynomials. For example,

More information

CSEP 521 Applied Algorithms Spring Statistical Lossless Data Compression

CSEP 521 Applied Algorithms Spring Statistical Lossless Data Compression CSEP 52 Applied Algorithms Spring 25 Statistical Lossless Data Compression Outline for Tonight Basic Concepts in Data Compression Entropy Prefix codes Huffman Coding Arithmetic Coding Run Length Coding

More information