Algorithms. What is an algorithm?
What is an algorithm?

Informally, an algorithm is a well-defined finite set of rules that specifies a sequential series of elementary operations to be applied to some data called the input, producing after a finite amount of time some data called the output.

The oldest non-trivial algorithm that has survived to the present day is the Euclidean algorithm, named after the Greek mathematician Euclid (fl. 300 BC), for computing the greatest common divisor of two natural numbers. The word "algorithm" derives from the name of the Persian mathematician al-Khwārizmī (c. 850).

An algorithmic solution to a computational problem will usually involve designing an algorithm and then analysing its performance.
What is a computational problem?

The 13th century Italian mathematician Leonardo Fibonacci is known for his famous sequence of numbers 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ..., each the sum of its two immediate predecessors. More formally,

F_n = F_{n-1} + F_{n-2} if n > 1,
F_n = 1 if n = 1,
F_n = 0 if n = 0.

But what is the precise value of F_100, or of F_200? Fibonacci himself would surely have wanted to know such things. To answer, we need to design an algorithm for computing the nth Fibonacci number.
One approach is to implement the recursive definition of F_n.

Algorithm FIB1(n)
1: if n = 0 then
2:   return 0
3: end if
4: if n = 1 then
5:   return 1
6: end if
7: return FIB1(n-1) + FIB1(n-2)

There are three questions we always ask about it:
1. Is it correct?
2. How much time does it take, as a function of n?
3. And can we do better?

The algorithm is definitely correct: it is the definition of F_n.
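The pseudocode above translates directly into Python; the sketch below (function name `fib1` is ours, not from the slides) mirrors algorithm FIB1 line for line.

```python
def fib1(n):
    """Naive recursive Fibonacci, mirroring algorithm FIB1."""
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib1(n - 1) + fib1(n - 2)

# The first few values match the sequence 0, 1, 1, 2, 3, 5, 8, 13, ...
print([fib1(i) for i in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```

Even this tiny program already exhibits the exponential blow-up analysed next: try `fib1(35)` and notice the delay.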
Recursive Fibonacci: the tree of recursive calls of FIB(6). [Figure from CLRS, Introduction to Algorithms, Chapter 27.]
How much time does FIB1(n) take, as a function of n?

Let T(n) be the number of computer steps needed to compute FIB1(n); what can we say about this function? For starters, if n is less than 2, the procedure halts almost immediately, after just a couple of steps:

T(n) ≤ 2 for n ≤ 1.

For larger values of n, there are two recursive invocations of FIB1, one taking time T(n-1) and one taking time T(n-2), plus three other steps (checks of the value of n and a final addition):

T(n) = T(n-1) + T(n-2) + 3 for n > 1.
Compare this to the recurrence relation for F_n: we immediately see that T(n) ≥ F_n.

This is very bad news: the running time of the algorithm grows as fast as the Fibonacci numbers, and the Fibonacci numbers grow exponentially (not proved here). So T(n) is exponential in n, which implies that the algorithm is impractically slow except for small values of n.

For example, to compute F_200, algorithm FIB1 executes T(200) ≥ F_200 elementary computer steps. Even on a very fast machine, FIB1(200) would take more than 2^92 seconds. If we started the computation today, it would still be unfinished long after the sun turned into a red giant star.

The algorithm is correct, but can we do better?
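The scale of that claim can be checked with a little arithmetic. The sketch below (an illustration, not part of the slides; `fib_fast` is our helper, needed because FIB1 itself would never finish) computes the actual value of F_200 and the time F_200 steps would take at an assumed rate of 10^9 steps per second.

```python
def fib_fast(n):
    """Iterative Fibonacci, used only to obtain the value of F_200 quickly."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

f200 = fib_fast(200)
print(len(str(f200)))          # F_200 has 42 decimal digits
seconds = f200 // 10**9        # time at an assumed 10^9 steps per second
assert seconds > 2**92         # indeed more than 2^92 seconds
```

At a billion steps per second the computation needs on the order of 10^32 seconds, comfortably more than the 2^92 seconds (about 10^27) the slide quotes.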
A faster approach is to store the intermediate results: the values F_0, F_1, ..., F_{n-1}.

Algorithm FIB2(n)
1: if n ≤ 1 then
2:   return n
3: end if
4: f[0...n] ← 0
5: f[0] ← 0
6: f[1] ← 1
7: for all i ∈ [2, n] do
8:   f[i] ← f[i-1] + f[i-2]
9: end for
10: return f[n]

The correctness of this algorithm follows by the definition of F_n.
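As with FIB1, the pseudocode maps directly to Python; the sketch below (function name `fib2` is ours) stores every intermediate value in a list, exactly as algorithm FIB2 does.

```python
def fib2(n):
    """Iterative Fibonacci, mirroring algorithm FIB2: store all intermediate values."""
    if n <= 1:
        return n
    f = [0] * (n + 1)  # f[0...n] <- 0
    f[1] = 1
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]
    return f[n]

print(fib2(10))  # 55
```

Unlike `fib1`, this version computes `fib2(200)` essentially instantly, since each value is computed exactly once.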
Algorithm FIB2: how long does it take?

The for loop (lines 7-9) executes a single computer step (line 8) n-1 times. Therefore the number of computer steps used by FIB2(n) is linear in n. From exponential we are down to polynomial, a huge breakthrough in running time. It is now perfectly reasonable to compute F_200 or even F_200,000.
Brief discussion: what was the difference in design between these two algorithms?

- The second algorithm is faster than the first when run on a single processor.
- The first algorithm is recursive, and uses a divide-and-conquer approach to split the problem into sub-problems.
- The second algorithm is completely sequential, building F_n from our previous knowledge of F_{n-1} and F_{n-2}.
- The sequential running time of the second algorithm is the best possible (optimal): we need n steps to compute F_n.
Running time of algorithms

Instead of reporting that an algorithm takes, say, 5n^3 + 4n + 3 steps on an input of size n, it is much simpler to leave out lower-order terms such as 4n and 3 (which become insignificant as n grows), and even the coefficient 5 in the leading term (computers will be five times faster in a few years anyway). We just say that the algorithm takes time O(n^3) (pronounced "big oh of n^3").

We define this notation precisely by thinking of f(n) and g(n) as the running times of two algorithms on inputs of size n.
Definition. Let f(n) and g(n) be functions from positive integers to positive reals. We say f = O(g) (which means that "f grows no faster than g") if there is a constant c > 0 such that f(n) ≤ c · g(n) for all n.
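The definition can be spot-checked numerically for a particular constant. The sketch below (our illustration; a finite check is evidence, not a proof) uses the functions and the constant c = 22 that appear in the worked example later in these notes.

```python
def f(n):
    return 2 * n + 20  # f(n) = 2n + 20

def g(n):
    return n * n       # g(n) = n^2

c = 22
# spot-check f(n) <= c * g(n) over a large range of n (not a proof)
assert all(f(n) <= c * g(n) for n in range(1, 10_000))
```

The tight spot is n = 1, where f(1) = 22 and c · g(1) = 22 meet exactly; for larger n the inequality only gets slacker.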
Saying f = O(g) is a very loose analog of "f ≤ g". It differs from the usual notion of ≤ because of the constant c, and this constant also allows us to disregard what happens for small values of n.

Suppose f_1(n) = n^2 and f_2(n) = 2n + 20. Which is better (smaller)? Well, this depends on the value of n: for n ≤ 5, f_1 is smaller; thereafter, f_2 is the clear winner. In this case, f_2 scales much better as n grows, and therefore it is "smaller".
This superiority is captured by the big-O notation: f_2 = O(f_1), because

f_2(n) / f_1(n) = (2n + 20) / n^2 ≤ 22

for all n. On the other hand, f_1 ≠ O(f_2), since the ratio

f_1(n) / f_2(n) = n^2 / (2n + 20)

can get arbitrarily large, and so no constant c will make the definition work.
[Figure: plot illustrating 2n + 20 = O(n^2).]
Now suppose f_1(n) = n^2, f_2(n) = 2n + 20, and f_3(n) = n + 1. We see that f_2 = O(f_3), because

f_2(n) / f_3(n) = (2n + 20) / (n + 1) ≤ 20,

but also f_3 = O(f_2), this time with c = 1. Just as O(·) is an analog of ≤, we can also define analogs of ≥ and = as follows.
Definition. Let f(n) and g(n) be functions from positive integers to positive reals. f = Ω(g) means g = O(f).

Definition. Let f(n) and g(n) be functions from positive integers to positive reals. f = Θ(g) means f = O(g) and f = Ω(g).

Example. 2n + 20 = Θ(n + 1) and n^2 = Ω(n + 1).
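The Θ example can likewise be spot-checked in both directions with explicit constants (our illustration; the constants c = 20 and c = 1 come from the inequalities in the preceding example).

```python
def f(n):
    return 2 * n + 20  # f(n) = 2n + 20

def g(n):
    return n + 1       # g(n) = n + 1

# f = O(g) with c = 20, and g = O(f) with c = 1, hence f = Theta(g)
assert all(f(n) <= 20 * g(n) for n in range(1, 10_000))
assert all(g(n) <= f(n) for n in range(1, 10_000))
```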
Here are some commonsense rules that help simplify functions by omitting dominated terms:

- Multiplicative constants can be omitted: 14n^2 becomes n^2.
- n^a dominates n^b if a > b: for instance, n^2 dominates n.
- Any exponential dominates any polynomial: 3^n dominates n^5.
- Likewise, any polynomial dominates any logarithm: n dominates (log n)^3. This also means, for example, that n^2 dominates n log n.
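These dominance rules are asymptotic statements: the "dominated" term can still be larger for small n. A quick numerical sketch (our illustration) of 3^n versus n^5 shows the crossover.

```python
# 3^n vs n^5: the polynomial is larger at first, but the exponential
# eventually dominates, and by an ever-growing factor
for n in (5, 10, 20, 30):
    print(n, 3 ** n, n ** 5, 3 ** n > n ** 5)
```

At n = 10 the polynomial is still ahead (59049 vs 100000); by n = 20 the exponential has won by a factor of about a thousand.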
Parallel Algorithms: What?

Devising algorithms which allow many processors to work collectively to solve:
- the same problems, but faster, or
- bigger/more refined problems in the same time,
when compared to a single processor.
Parallel Algorithms: Why?

Because it is an interesting intellectual challenge! And because parallelism is everywhere and we need algorithms to exploit it:
- Global scale: computational grids
- Supercomputer scale: Top 500 HPC, scientific simulation, financial modelling, Google, ...
- Desktop scale: commodity multicore PCs and laptops
- Specialised hardware: custom parallel circuits for key operations such as encryption and multimedia (NVIDIA)
Parallel Algorithms: How?

We will need:
- machine model(s), which tell us what the basic operations are in a reasonably abstract way
- cost model(s), which tell us what these operations cost, in terms of the resources we care about (usually time, sometimes memory)
- analysis techniques, which help us map from algorithms to costs with acceptable accuracy
- metrics, which let us discriminate between costs (e.g. speed vs. efficiency)
Parallel Computer Structures

Dominant programming models reflect an underlying architectural divergence:
- The shared address space model allows threads (or lightweight processes) to interact directly through common memory locations. Care is required to avoid unintended interactions (races). We consider two simplified models: multi-threading and the PRAM.
- The message passing model gives each process its own address space. Care is required to distribute the data across these address spaces and to communicate results between them by sending and receiving messages as appropriate. We consider a simplified model, the graph interconnection network.
Multi-threading model

A high-level model of threaded processes using spawn and sync. It does not consider the underlying hardware.

Algorithm Algorithm-A
begin
  { ... }
  spawn Algorithm-B    // do Algorithm-B in parallel with this code
  { other stuff }
  sync                 // wait here for all previously spawned parallel computations to complete
  { ... }
end
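The spawn/sync pattern maps naturally onto thread start/join in mainstream languages. A minimal Python sketch (our illustration; `algorithm_b` and its workload are hypothetical stand-ins):

```python
import threading

results = {}

def algorithm_b():
    # stands in for the spawned sub-computation
    results["b"] = sum(range(10))

# "spawn Algorithm-B": start it running in parallel with the code below
t = threading.Thread(target=algorithm_b)
t.start()

# { other stuff } runs concurrently with Algorithm-B
other = sum(range(5))

# "sync": wait for all previously spawned computations to complete
t.join()
print(results["b"] + other)  # 45 + 10 = 55
```

Note that the code after `start()` genuinely overlaps with the spawned thread, and only after `join()` is it safe to read the spawned computation's result.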
Many languages (e.g. Java) support the production of separately runnable processes called threads. Each thread looks like it is running on its own, and the operating system shares time and processors between the threads. In the multi-threading model, the exact parallel implementation is left to the operating system.
PRAM model

- The processors act synchronously: SIMD (single instruction, multiple data).
- Several read-write possibilities (exclusive vs. concurrent): any mix of ER, EW, CR, CW, e.g. EREW.
- EREW algorithms can be very different from CRCW algorithms.
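To see why the write rule matters, here is a tiny sequential simulation (a sketch with a made-up name, `crcw_or`) of a single common-CRCW step: every processor whose input bit is 1 writes the value 1 into the same shared cell, so the OR of n bits is computed in one step. Under the common-write rule this is legal because all writers write the same value; an EREW machine would need many steps to combine the bits without write conflicts.

```python
def crcw_or(bits):
    """Simulate one common-CRCW PRAM step: every processor with bit 1
    writes 1 to the same shared cell; all writers write the same value,
    so the 'common' concurrent-write rule permits it."""
    cell = 0
    for b in bits:  # conceptually simultaneous, simulated sequentially
        if b:
            cell = 1
    return cell

print(crcw_or([0, 0, 1, 0]))  # 1
```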
Interconnection network

- A graph G = (V, E).
- Each node i in V is a processor P_i.
- Each edge (i, j) in E is a two-way link between P_i and P_j.
- Each processor has its own memory: P.X is the value of variable X at node (processor) P.
- Synchronous model (SIMD).
- P_i and P_j communicate directly only if joined by an edge (i, j).
Concluding remarks and examples

Sequential or parallel?
- Some tasks are intrinsically sequential: e.g. taking a train from London to Manchester.
- Some problems have parts which can be done in parallel: e.g. building the walls of a house.
- Algorithms which split the problem into sub-problems (divide-and-conquer) can work in parallel.

Parallel or distributed?
- In both cases many processors run the same program.
- A parallel system has a central controller: all processors execute the same step of the program at the same time.
- A distributed system has no central control: processors cooperate to obtain a well-regulated system.
Example of a parallel program (R)

  # download R and RStudio first
  # install.packages("parallel")   # install the parallel package
  library(parallel)                # load the R parallel package
  num_cores = detectCores()        # logical cores on your machine
  num_cores                        # view the answer
  detectCores(logical = FALSE)     # how many physical cores is it really?
  cl <- makeCluster(num_cores)     # creates a set of copies of R running in
                                   # parallel and communicating over sockets
  z = parLapply(cl, 1:..., function(x) x^2)   # apply a function over a list in parallel
  stopCluster(cl)                  # clean up
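The same pattern can be sketched in Python (an illustrative analogue, not from the slides; `square` is a made-up example function). A thread pool plays the role of the R cluster, and `pool.map` plays the role of parLapply.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

# apply a function over a list in parallel, much like parLapply;
# for CPU-bound work in CPython a ProcessPoolExecutor would be the
# usual choice, since threads share one interpreter lock
with ThreadPoolExecutor(max_workers=4) as pool:
    z = list(pool.map(square, range(1, 11)))
print(z)  # [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```

The `with` block takes care of the clean-up that stopCluster performs explicitly in the R version.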
[Figure: task manager snapshot.]
Rainfall prediction

Today, supercomputer-based weather predictions are typically done with simulations that use grids spaced at least one kilometer apart, and incorporate new observational data every hour. Due to the coarseness of the calculations, these simulations cannot accurately predict the threat of torrential rains, which can develop within minutes when cumulonimbus clouds suddenly form. Now, an international team led by Takemasa Miyoshi of the RIKEN Advanced Center for Computational Science (AICS) has used the powerful K computer and advanced radar observational data to accurately predict the occurrence of torrential rains in localized areas.

The key to this work, published in the August issue of the Bulletin of the American Meteorological Society, is "big data assimilation": using computational power to synchronize data between large-scale computer simulations and observational data. Using the K computer, the researchers carried out 100 parallel simulations of a convective weather system, using the nonhydrostatic mesoscale model used by the Japan Meteorological Agency, but with 100-meter grid spacing rather than the typical 2-kilometer or 5-kilometer spacing, and assimilated data from a next-generation phased-array weather radar, launched in the summer of 2012 by the National Institute of Information and Communications Technology (NICT) and Osaka University. With this, they produced a high-resolution three-dimensional distribution map of rain every 30 seconds, 120 times more rapidly than the typical hourly updated systems operated at the world's weather prediction centers today.

To test the accuracy of the system, the researchers attempted to model a real case: a sudden storm that took place on July 13, 2013 in Kyoto, close enough to Osaka that it was caught by the radars at Osaka University. The simulations were run starting at 15:00 Japanese time, and were tested as pure simulations without observational data input as well as with the incorporation of data every 30 seconds, on 100-meter and 1-kilometer grid scales. The simulation alone was unable to replicate the rain, while the incorporation of observational data allowed the computer to represent the actual storm. In particular, the simulation done with 100-meter grids led to a very accurate replication of the storm compared to actual observations.

According to Miyoshi, "Supercomputers are becoming more and more powerful, and are allowing us to incorporate ever more advanced data into simulations. Our study shows that in the future, it will be possible to use weather forecasting to predict severe local weather phenomena such as torrential rains, a growing problem which can cause enormous damage and cost lives."
That's the end of the introduction!
Bibliography

The materials for this lecture were taken and partially adapted from:
- S. Dasgupta, C. H. Papadimitriou, and U. V. Vazirani. Algorithms. McGraw-Hill.
- Ananth Grama, George Karypis, Vipin Kumar, and Anshul Gupta. Introduction to Parallel Computing. Pearson.
- Murray Cole. Design and Analysis of Parallel Algorithms, course slides, School of Informatics, University of Edinburgh.
- T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein (CLRS). Introduction to Algorithms, Chapter 27.
- Various articles from the WWW.
More informationComputer Algorithms CISC4080 CIS, Fordham Univ. Instructor: X. Zhang Lecture 2
Computer Algorithms CISC4080 CIS, Fordham Univ. Instructor: X. Zhang Lecture 2 Outline Introduction to algorithm analysis: fibonacci seq calculation counting number of computer steps recursive formula
More informationCS 344 Design and Analysis of Algorithms. Tarek El-Gaaly Course website:
CS 344 Design and Analysis of Algorithms Tarek El-Gaaly tgaaly@cs.rutgers.edu Course website: www.cs.rutgers.edu/~tgaaly/cs344.html Course Outline Textbook: Algorithms by S. Dasgupta, C.H. Papadimitriou,
More informationCMPSCI611: Three Divide-and-Conquer Examples Lecture 2
CMPSCI611: Three Divide-and-Conquer Examples Lecture 2 Last lecture we presented and analyzed Mergesort, a simple divide-and-conquer algorithm. We then stated and proved the Master Theorem, which gives
More informationDefine Efficiency. 2: Analysis. Efficiency. Measuring efficiency. CSE 417: Algorithms and Computational Complexity. Winter 2007 Larry Ruzzo
CSE 417: Algorithms and Computational 2: Analysis Winter 2007 Larry Ruzzo Define Efficiency Runs fast on typical real problem instances Pro: sensible, bottom-line-oriented Con: moving target (diff computers,
More informationCS483 Design and Analysis of Algorithms
CS483 Design and Analysis of Algorithms Lecture 1 Introduction and Prologue Instructor: Fei Li lifei@cs.gmu.edu with subject: CS483 Office hours: Room 5326, Engineering Building, Thursday 4:30pm - 6:30pm
More informationEE/CSCI 451: Parallel and Distributed Computation
EE/CSCI 451: Parallel and Distributed Computation Lecture #19 3/28/2017 Xuehai Qian Xuehai.qian@usc.edu http://alchem.usc.edu/portal/xuehaiq.html University of Southern California 1 From last class PRAM
More informationTopic 17. Analysis of Algorithms
Topic 17 Analysis of Algorithms Analysis of Algorithms- Review Efficiency of an algorithm can be measured in terms of : Time complexity: a measure of the amount of time required to execute an algorithm
More informationLecture 27: Theory of Computation. Marvin Zhang 08/08/2016
Lecture 27: Theory of Computation Marvin Zhang 08/08/2016 Announcements Roadmap Introduction Functions Data Mutability Objects This week (Applications), the goals are: To go beyond CS 61A and see examples
More informationWhen we use asymptotic notation within an expression, the asymptotic notation is shorthand for an unspecified function satisfying the relation:
CS 124 Section #1 Big-Oh, the Master Theorem, and MergeSort 1/29/2018 1 Big-Oh Notation 1.1 Definition Big-Oh notation is a way to describe the rate of growth of functions. In CS, we use it to describe
More informationCOMPUTER ALGORITHMS. Athasit Surarerks.
COMPUTER ALGORITHMS Athasit Surarerks. Introduction EUCLID s GAME Two players move in turn. On each move, a player has to write on the board a positive integer equal to the different from two numbers already
More informationCS 4407 Algorithms Lecture 2: Growth Functions
CS 4407 Algorithms Lecture 2: Growth Functions Prof. Gregory Provan Department of Computer Science University College Cork 1 Lecture Outline Growth Functions Mathematical specification of growth functions
More informationDefining Efficiency. 2: Analysis. Efficiency. Measuring efficiency. CSE 421: Intro Algorithms. Summer 2007 Larry Ruzzo
CSE 421: Intro Algorithms 2: Analysis Summer 2007 Larry Ruzzo Defining Efficiency Runs fast on typical real problem instances Pro: sensible, bottom-line-oriented Con: moving target (diff computers, compilers,
More informationTopic Contents. Factoring Methods. Unit 3: Factoring Methods. Finding the square root of a number
Topic Contents Factoring Methods Unit 3 The smallest divisor of an integer The GCD of two numbers Generating prime numbers Computing prime factors of an integer Generating pseudo random numbers Raising
More informationAnalytical Modeling of Parallel Systems
Analytical Modeling of Parallel Systems Chieh-Sen (Jason) Huang Department of Applied Mathematics National Sun Yat-sen University Thank Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar for providing
More informationModule 1: Analyzing the Efficiency of Algorithms
Module 1: Analyzing the Efficiency of Algorithms Dr. Natarajan Meghanathan Associate Professor of Computer Science Jackson State University Jackson, MS 39217 E-mail: natarajan.meghanathan@jsums.edu Based
More informationP, NP, NP-Complete, and NPhard
P, NP, NP-Complete, and NPhard Problems Zhenjiang Li 21/09/2011 Outline Algorithm time complicity P and NP problems NP-Complete and NP-Hard problems Algorithm time complicity Outline What is this course
More informationLecture 2. Fundamentals of the Analysis of Algorithm Efficiency
Lecture 2 Fundamentals of the Analysis of Algorithm Efficiency 1 Lecture Contents 1. Analysis Framework 2. Asymptotic Notations and Basic Efficiency Classes 3. Mathematical Analysis of Nonrecursive Algorithms
More informationLecture 3. Big-O notation, more recurrences!!
Lecture 3 Big-O notation, more recurrences!! Announcements! HW1 is posted! (Due Friday) See Piazza for a list of HW clarifications First recitation section was this morning, there s another tomorrow (same
More informationComputational Complexity
Computational Complexity S. V. N. Vishwanathan, Pinar Yanardag January 8, 016 1 Computational Complexity: What, Why, and How? Intuitively an algorithm is a well defined computational procedure that takes
More informationIntroduction to Algorithms
Lecture 1 Introduction to Algorithms 1.1 Overview The purpose of this lecture is to give a brief overview of the topic of Algorithms and the kind of thinking it involves: why we focus on the subjects that
More informationCSC2100B Data Structures Analysis
CSC2100B Data Structures Analysis Irwin King king@cse.cuhk.edu.hk http://www.cse.cuhk.edu.hk/~king Department of Computer Science & Engineering The Chinese University of Hong Kong Algorithm An algorithm
More informationPerformance and Scalability. Lars Karlsson
Performance and Scalability Lars Karlsson Outline Complexity analysis Runtime, speedup, efficiency Amdahl s Law and scalability Cost and overhead Cost optimality Iso-efficiency function Case study: matrix
More informationChapter 2: The Basics. slides 2017, David Doty ECS 220: Theory of Computation based on The Nature of Computation by Moore and Mertens
Chapter 2: The Basics slides 2017, David Doty ECS 220: Theory of Computation based on The Nature of Computation by Moore and Mertens Problem instances vs. decision problems vs. search problems Decision
More informationFall Lecture 5
15-150 Fall 2018 Lecture 5 Today Work sequential runtime Recurrences exact and approximate solutions Improving efficiency program recurrence work asymptotic Want the runtime of evaluating f(n), for large
More informationCSE332: Data Structures & Parallelism Lecture 2: Algorithm Analysis. Ruth Anderson Winter 2018
CSE332: Data Structures & Parallelism Lecture 2: Algorithm Analysis Ruth Anderson Winter 2018 Today Algorithm Analysis What do we care about? How to compare two algorithms Analyzing Code Asymptotic Analysis
More informationComputer Science 385 Analysis of Algorithms Siena College Spring Topic Notes: Limitations of Algorithms
Computer Science 385 Analysis of Algorithms Siena College Spring 2011 Topic Notes: Limitations of Algorithms We conclude with a discussion of the limitations of the power of algorithms. That is, what kinds
More informationCSE332: Data Structures & Parallelism Lecture 2: Algorithm Analysis. Ruth Anderson Winter 2018
CSE332: Data Structures & Parallelism Lecture 2: Algorithm Analysis Ruth Anderson Winter 2018 Today Algorithm Analysis What do we care about? How to compare two algorithms Analyzing Code Asymptotic Analysis
More information3.1 Asymptotic notation
3.1 Asymptotic notation The notations we use to describe the asymptotic running time of an algorithm are defined in terms of functions whose domains are the set of natural numbers N = {0, 1, 2,... Such
More informationLecture 17: Analytical Modeling of Parallel Programs: Scalability CSCE 569 Parallel Computing
Lecture 17: Analytical Modeling of Parallel Programs: Scalability CSCE 569 Parallel Computing Department of Computer Science and Engineering Yonghong Yan yanyh@cse.sc.edu http://cse.sc.edu/~yanyh 1 Topic
More informationAnnouncements. CSE332: Data Abstractions Lecture 2: Math Review; Algorithm Analysis. Today. Mathematical induction. Dan Grossman Spring 2010
Announcements CSE332: Data Abstractions Lecture 2: Math Review; Algorithm Analysis Dan Grossman Spring 2010 Project 1 posted Section materials on using Eclipse will be very useful if you have never used
More informationCh01. Analysis of Algorithms
Ch01. Analysis of Algorithms Input Algorithm Output Acknowledgement: Parts of slides in this presentation come from the materials accompanying the textbook Algorithm Design and Applications, by M. T. Goodrich
More informationCSE332: Data Structures & Parallelism Lecture 2: Algorithm Analysis. Ruth Anderson Winter 2019
CSE332: Data Structures & Parallelism Lecture 2: Algorithm Analysis Ruth Anderson Winter 2019 Today Algorithm Analysis What do we care about? How to compare two algorithms Analyzing Code Asymptotic Analysis
More informationReminder of Asymptotic Notation. Inf 2B: Asymptotic notation and Algorithms. Asymptotic notation for Running-time
1 / 18 Reminder of Asymptotic Notation / 18 Inf B: Asymptotic notation and Algorithms Lecture B of ADS thread Let f, g : N! R be functions. We say that: I f is O(g) if there is some n 0 N and some c >
More informationCSE 373: Data Structures and Algorithms. Asymptotic Analysis. Autumn Shrirang (Shri) Mare
CSE 373: Data Structures and Algorithms Asymptotic Analysis Autumn 2018 Shrirang (Shri) Mare shri@cs.washington.edu Thanks to Kasey Champion, Ben Jones, Adam Blank, Michael Lee, Evan McCarty, Robbie Weber,
More informationCh 01. Analysis of Algorithms
Ch 01. Analysis of Algorithms Input Algorithm Output Acknowledgement: Parts of slides in this presentation come from the materials accompanying the textbook Algorithm Design and Applications, by M. T.
More informationModule 1: Analyzing the Efficiency of Algorithms
Module 1: Analyzing the Efficiency of Algorithms Dr. Natarajan Meghanathan Professor of Computer Science Jackson State University Jackson, MS 39217 E-mail: natarajan.meghanathan@jsums.edu What is an Algorithm?
More informationAlgorithms: COMP3121/3821/9101/9801
Algorithms: COMP311/381/9101/9801 Aleks Ignjatović, ignjat@cse.unsw.edu.au office: 504 (CSE building); phone: 5-6659 Course Admin: Amin Malekpour, a.malekpour@unsw.edu.au School of Computer Science and
More informationCOMP 382: Reasoning about algorithms
Fall 2014 Unit 4: Basics of complexity analysis Correctness and efficiency So far, we have talked about correctness and termination of algorithms What about efficiency? Running time of an algorithm For
More informationCSC 5170: Theory of Computational Complexity Lecture 4 The Chinese University of Hong Kong 1 February 2010
CSC 5170: Theory of Computational Complexity Lecture 4 The Chinese University of Hong Kong 1 February 2010 Computational complexity studies the amount of resources necessary to perform given computations.
More informationWhen we use asymptotic notation within an expression, the asymptotic notation is shorthand for an unspecified function satisfying the relation:
CS 124 Section #1 Big-Oh, the Master Theorem, and MergeSort 1/29/2018 1 Big-Oh Notation 1.1 Definition Big-Oh notation is a way to describe the rate of growth of functions. In CS, we use it to describe
More informationHandouts. CS701 Theory of Computation
Handouts CS701 Theory of Computation by Kashif Nadeem VU Student MS Computer Science LECTURE 01 Overview In this lecturer the topics will be discussed including The Story of Computation, Theory of Computation,
More informationAlgorithms. Copyright c 2006 S. Dasgupta, C. H. Papadimitriou, and U. V. Vazirani
Algorithms Copyright c 2006 S. Dasgupta, C. H. Papadimitriou, and U. V. Vazirani July 18, 2006 2 Algorithms Contents Preface 9 0 Prologue 11 0.1 Books and algorithms...................................
More informationBig , and Definition Definition
Big O, Ω, and Θ Big-O gives us only a one-way comparison; if f is O(g) then g eventually is bigger than f from that point on, but in fact f could be very small in comparison. Example; 3n is O(2 2n ). We
More informationIntroduction. An Introduction to Algorithms and Data Structures
Introduction An Introduction to Algorithms and Data Structures Overview Aims This course is an introduction to the design, analysis and wide variety of algorithms (a topic often called Algorithmics ).
More informationLecture 1: Asymptotic Complexity. 1 These slides include material originally prepared by Dr.Ron Cytron, Dr. Jeremy Buhler, and Dr. Steve Cole.
Lecture 1: Asymptotic Complexity 1 These slides include material originally prepared by Dr.Ron Cytron, Dr. Jeremy Buhler, and Dr. Steve Cole. Announcements TA office hours officially start this week see
More informationAsymptotic Analysis 1
Asymptotic Analysis 1 Last week, we discussed how to present algorithms using pseudocode. For example, we looked at an algorithm for singing the annoying song 99 Bottles of Beer on the Wall for arbitrary
More informationAsymptotic Analysis Cont'd
Cont'd Carlos Moreno cmoreno @ uwaterloo.ca EIT-4103 https://ece.uwaterloo.ca/~cmoreno/ece250 Announcements We have class this Wednesday, Jan 18 at 12:30 That is, we have two sessions this Wednesday: at
More informationSolving recurrences. Frequently showing up when analysing divide&conquer algorithms or, more generally, recursive algorithms.
Solving recurrences Frequently showing up when analysing divide&conquer algorithms or, more generally, recursive algorithms Example: Merge-Sort(A, p, r) 1: if p < r then 2: q (p + r)/2 3: Merge-Sort(A,
More informationAlgorithms and Data S tructures Structures Complexity Complexit of Algorithms Ulf Leser
Algorithms and Data Structures Complexity of Algorithms Ulf Leser Content of this Lecture Efficiency of Algorithms Machine Model Complexity Examples Multiplication of two binary numbers (unit cost?) Exact
More informationwhere Q is a finite set of states
Space Complexity So far most of our theoretical investigation on the performances of the various algorithms considered has focused on time. Another important dynamic complexity measure that can be associated
More informationCS 4407 Algorithms Lecture 3: Iterative and Divide and Conquer Algorithms
CS 4407 Algorithms Lecture 3: Iterative and Divide and Conquer Algorithms Prof. Gregory Provan Department of Computer Science University College Cork 1 Lecture Outline CS 4407, Algorithms Growth Functions
More informationAsymptotic Analysis. Thomas A. Anastasio. January 7, 2004
Asymptotic Analysis Thomas A. Anastasio January 7, 004 1 Introduction As a programmer, you often have a choice of data structures and algorithms. Choosing the best one for a particular job involves, among
More informationMathematical Background. Unsigned binary numbers. Powers of 2. Logs and exponents. Mathematical Background. Today, we will review:
Mathematical Background Mathematical Background CSE 373 Data Structures Today, we will review: Logs and eponents Series Recursion Motivation for Algorithm Analysis 5 January 007 CSE 373 - Math Background
More informationCSE373: Data Structures and Algorithms Lecture 3: Math Review; Algorithm Analysis. Catie Baker Spring 2015
CSE373: Data Structures and Algorithms Lecture 3: Math Review; Algorithm Analysis Catie Baker Spring 2015 Today Registration should be done. Homework 1 due 11:59pm next Wednesday, April 8 th. Review math
More informationNP-Completeness I. Lecture Overview Introduction: Reduction and Expressiveness
Lecture 19 NP-Completeness I 19.1 Overview In the past few lectures we have looked at increasingly more expressive problems that we were able to solve using efficient algorithms. In this lecture we introduce
More informationPhysics is about finding the simplest and least complicated explanation for things.
WHAT IS PHYSICS Physics is about finding the simplest and least complicated explanation for things. It is about observing how things work and finding the connections between cause and effect that explain
More informationMATH 22 FUNCTIONS: ORDER OF GROWTH. Lecture O: 10/21/2003. The old order changeth, yielding place to new. Tennyson, Idylls of the King
MATH 22 Lecture O: 10/21/2003 FUNCTIONS: ORDER OF GROWTH The old order changeth, yielding place to new. Tennyson, Idylls of the King Men are but children of a larger growth. Dryden, All for Love, Act 4,
More informationFundamental Algorithms
Fundamental Algorithms Chapter 1: Introduction Michael Bader Winter 2011/12 Chapter 1: Introduction, Winter 2011/12 1 Part I Overview Chapter 1: Introduction, Winter 2011/12 2 Organizational Stuff 2 SWS
More informationAdvanced Algorithmics (6EAP)
Advanced Algorithmics (6EAP) MTAT.03.238 Order of growth maths Jaak Vilo 2017 fall Jaak Vilo 1 Program execution on input of size n How many steps/cycles a processor would need to do How to relate algorithm
More informationCpt S 223. School of EECS, WSU
Algorithm Analysis 1 Purpose Why bother analyzing code; isn t getting it to work enough? Estimate time and memory in the average case and worst case Identify bottlenecks, i.e., where to reduce time Compare
More information2.2 Asymptotic Order of Growth. definitions and notation (2.2) examples (2.4) properties (2.2)
2.2 Asymptotic Order of Growth definitions and notation (2.2) examples (2.4) properties (2.2) Asymptotic Order of Growth Upper bounds. T(n) is O(f(n)) if there exist constants c > 0 and n 0 0 such that
More informationAlgorithm efficiency analysis
Algorithm efficiency analysis Mădălina Răschip, Cristian Gaţu Faculty of Computer Science Alexandru Ioan Cuza University of Iaşi, Romania DS 2017/2018 Content Algorithm efficiency analysis Recursive function
More informationGreat Theoretical Ideas in Computer Science. Lecture 9: Introduction to Computational Complexity
15-251 Great Theoretical Ideas in Computer Science Lecture 9: Introduction to Computational Complexity February 14th, 2017 Poll What is the running time of this algorithm? Choose the tightest bound. def
More information1 Reductions and Expressiveness
15-451/651: Design & Analysis of Algorithms November 3, 2015 Lecture #17 last changed: October 30, 2015 In the past few lectures we have looked at increasingly more expressive problems solvable using efficient
More informationINF2270 Spring Philipp Häfliger. Lecture 8: Superscalar CPUs, Course Summary/Repetition (1/2)
INF2270 Spring 2010 Philipp Häfliger Summary/Repetition (1/2) content From Scalar to Superscalar Lecture Summary and Brief Repetition Binary numbers Boolean Algebra Combinational Logic Circuits Encoder/Decoder
More informationRemainders. We learned how to multiply and divide in elementary
Remainders We learned how to multiply and divide in elementary school. As adults we perform division mostly by pressing the key on a calculator. This key supplies the quotient. In numerical analysis and
More informationCopyright 2000, Kevin Wayne 1
Chapter 2 2.1 Computational Tractability Basics of Algorithm Analysis "For me, great algorithms are the poetry of computation. Just like verse, they can be terse, allusive, dense, and even mysterious.
More informationOptimization Techniques for Parallel Code 1. Parallel programming models
Optimization Techniques for Parallel Code 1. Parallel programming models Sylvain Collange Inria Rennes Bretagne Atlantique http://www.irisa.fr/alf/collange/ sylvain.collange@inria.fr OPT - 2017 Goals of
More informationHYCOM and Navy ESPC Future High Performance Computing Needs. Alan J. Wallcraft. COAPS Short Seminar November 6, 2017
HYCOM and Navy ESPC Future High Performance Computing Needs Alan J. Wallcraft COAPS Short Seminar November 6, 2017 Forecasting Architectural Trends 3 NAVY OPERATIONAL GLOBAL OCEAN PREDICTION Trend is higher
More informationSolving Recurrences. Lecture 23 CS2110 Fall 2011
Solving Recurrences Lecture 23 CS2110 Fall 2011 1 Announcements Makeup Prelim 2 Monday 11/21 7:30-9pm Upson 5130 Please do not discuss the prelim with your classmates! Quiz 4 next Tuesday in class Topics:
More informationB629 project - StreamIt MPI Backend. Nilesh Mahajan
B629 project - StreamIt MPI Backend Nilesh Mahajan March 26, 2013 Abstract StreamIt is a language based on the dataflow model of computation. StreamIt consists of computation units called filters connected
More informationProgramming, Data Structures and Algorithms Prof. Hema Murthy Department of Computer Science and Engineering Indian Institute Technology, Madras
Programming, Data Structures and Algorithms Prof. Hema Murthy Department of Computer Science and Engineering Indian Institute Technology, Madras Module - 2 Lecture - 25 Measuring running time of a program
More informationR ij = 2. Using all of these facts together, you can solve problem number 9.
Help for Homework Problem #9 Let G(V,E) be any undirected graph We want to calculate the travel time across the graph. Think of each edge as one resistor of 1 Ohm. Say we have two nodes: i and j Let the
More informationSymbolic Logic Outline
Symbolic Logic Outline 1. Symbolic Logic Outline 2. What is Logic? 3. How Do We Use Logic? 4. Logical Inferences #1 5. Logical Inferences #2 6. Symbolic Logic #1 7. Symbolic Logic #2 8. What If a Premise
More informationINTENSIVE COMPUTATION. Annalisa Massini
INTENSIVE COMPUTATION Annalisa Massini 2015-2016 Course topics The course will cover topics that are in some sense related to intensive computation: Matlab (an introduction) GPU (an introduction) Sparse
More informationCIS 121. Analysis of Algorithms & Computational Complexity. Slides based on materials provided by Mary Wootters (Stanford University)
CIS 121 Analysis of Algorithms & Computational Complexity Slides based on materials provided by Mary Wootters (Stanford University) Today Sorting: InsertionSort vs MergeSort Analyzing the correctness of
More informationAnalysis of Algorithms
Presentation for use with the textbook Data Structures and Algorithms in Java, 6th edition, by M. T. Goodrich, R. Tamassia, and M. H. Goldwasser, Wiley, 2014 Analysis of Algorithms Input Algorithm Analysis
More informationCSE 548: (Design and) Analysis of Algorithms
Administrative Ex. Problems Big-O and big-ω Proofs 1 / 28 CSE 548: (Design and) Analysis of Algorithms Fall 2017 R. Sekar Administrative Ex. Problems Big-O and big-ω Proofs Topics 1. Administrative 2.
More informationMechanics, Heat, Oscillations and Waves Prof. V. Balakrishnan Department of Physics Indian Institute of Technology, Madras
Mechanics, Heat, Oscillations and Waves Prof. V. Balakrishnan Department of Physics Indian Institute of Technology, Madras Lecture 05 The Fundamental Forces of Nature In this lecture, we will discuss the
More informationOrder Notation and the Mathematics for Analysis of Algorithms
Elementary Data Structures and Algorithms Order Notation and the Mathematics for Analysis of Algorithms Name: Email: Code: 19704 Always choose the best or most general answer, unless otherwise instructed.
More informationMarwan Burelle. Parallel and Concurrent Programming. Introduction and Foundation
and and marwan.burelle@lse.epita.fr http://wiki-prog.kh405.net Outline 1 2 and 3 and Evolutions and Next evolutions in processor tends more on more on growing of cores number GPU and similar extensions
More informationAlgorithms Design & Analysis. Analysis of Algorithm
Algorithms Design & Analysis Analysis of Algorithm Review Internship Stable Matching Algorithm 2 Outline Time complexity Computation model Asymptotic notions Recurrence Master theorem 3 The problem of
More informationAlgorithms and Theory of Computation. Lecture 2: Big-O Notation Graph Algorithms
Algorithms and Theory of Computation Lecture 2: Big-O Notation Graph Algorithms Xiaohui Bei MAS 714 August 14, 2018 Nanyang Technological University MAS 714 August 14, 2018 1 / 20 O, Ω, and Θ Nanyang Technological
More informationMat Week 6. Fall Mat Week 6. Algorithms. Properties. Examples. Searching. Sorting. Time Complexity. Example. Properties.
Fall 2013 Student Responsibilities Reading: Textbook, Section 3.1 3.2 Assignments: 1. for sections 3.1 and 3.2 2. Worksheet #4 on Execution s 3. Worksheet #5 on Growth Rates Attendance: Strongly Encouraged
More information
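The Euclidean algorithm described above can be sketched in a few lines of code. This is a minimal illustration, not part of the original slides; the language (Python) and the function name `gcd` are our own choices:

```python
def gcd(a, b):
    """Greatest common divisor of two natural numbers, via Euclid's algorithm.

    Repeatedly replace the pair (a, b) with (b, a mod b); when the
    remainder reaches 0, the other number is the gcd.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```

Each iteration strictly decreases the second argument, so the loop terminates after finitely many steps, matching the informal definition of an algorithm given above.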