Help for Homework Problem #9

Let G(V, E) be any undirected graph. We want to calculate the travel time across the graph. Think of each edge as one resistor of 1 Ohm.

Say we have two nodes, i and j. Let the effective resistance between i and j be

    R_ij = 1 / (sum over all paths x from i to j of 1/r_x),

where r_x is the resistance of path x between i and j. In other words, R_ij is one over the sum of one over the resistance of each path from i to j.

Fact: The commute time between i and j is C_ij = 2m R_ij, where m = |E|.

Let the resistance of the graph be R = max_{i,j} R_ij.

Fact: The expected cover time is C(G) = O(mR log n).

Fact: For any two nodes, R_ij ≤ the length of the shortest path between i and j. For example, for two nodes joined by a single path of two unit resistors, R_ij = 2.

Fact: For a d-regular graph with n nodes, the diameter is O(n/d).

Using all of these facts together, you can solve problem number 9.

Example: Using the facts for 2-SAT, modeled as a random walk on a path graph: m = n and R = n, thus C(G) = O(mR log n) = O(n^2 log n). Note: this is worse than the bound from the direct Markov-chain analysis, but this method gives a better result for d-regular graphs, as in the HW.
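To make the resistance facts concrete, here is a small Python sketch (my own illustration, not from the lecture) for a cycle graph, where the two node-disjoint paths between a pair of nodes act as 1-Ohm-per-edge resistors in parallel:

```python
from fractions import Fraction

def cycle_resistance(n, d):
    """Effective resistance between two nodes at distance d on an
    n-cycle of 1-Ohm edges: the two paths between them (lengths d
    and n - d) are in parallel, so 1/R = 1/d + 1/(n - d)."""
    return 1 / (Fraction(1, d) + Fraction(1, n - d))

def commute_time(n, d):
    """C_ij = 2 m R_ij; the n-cycle has m = n edges."""
    return 2 * n * cycle_resistance(n, d)

# Nodes at distance 3 on a 10-cycle: R = 1/(1/3 + 1/7) = 21/10,
# which is <= the shortest-path length 3, and C = 2*10*(21/10) = 42.
print(cycle_resistance(10, 3))  # 21/10
print(commute_time(10, 3))      # 42
```

Note how R_ij (2.1) is below the shortest-path length (3), matching the fact above, and how the commute time comes straight out of C_ij = 2m R_ij.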

Prefix Calculations

Input: k_1, k_2, ..., k_n
Output: k_1, k_1 ⊕ k_2, k_1 ⊕ k_2 ⊕ k_3, ..., k_1 ⊕ k_2 ⊕ k_3 ⊕ ... ⊕ k_n

Here ⊕ is any binary, associative, unit-time operation. Recall that if ⊕ is associative then, for any 3 elements x, y, z, it should be true that (x ⊕ y) ⊕ z = x ⊕ (y ⊕ z) = x ⊕ y ⊕ z.

Examples:
1. numbers, where ⊕ is addition
2. numbers, where ⊕ is multiplication
3. 2x2 matrices, where ⊕ is matrix multiplication
4. numbers, where ⊕ is Min

The best sequential run time is S = n - 1.

Algorithms for logarithmic-time prefix calculations

Algorithm 1: P = n CREW processors
Split your input in 2. Then recursively perform prefix computations on each half; finally, apply ⊕ with the last prefix value of the left half to every prefix value of the right half. Writing k'_i for the i-th output, the process produces:
    k'_2 = k_1 ⊕ k_2, k'_3 = k_1 ⊕ k_2 ⊕ k_3, and so on up to k'_n.

Analysis of this algorithm: Let T(n) be the runtime on any input of size n using n processors. Then
    T(n) = T(n/2) + O(1) = O(log n).
The work is P T = O(n log n) > O(n), therefore the algorithm is not work optimal.

A work-optimal algorithm: P = n / log n CREW PRAM processors
Assign log n elements to each processor, such that the 1st log n elements go to p_1, the 2nd log n elements go to p_2, etc.

1. For 1 ≤ i ≤ n/log n, in parallel do:
   Processor i computes the prefix sums (note: here "sum" refers to the ⊕ operator) of its log n elements.
   Let the results be k'_1, k'_2, ..., k'_{log n}, ..., k'_n.
2. Perform the prefix calculation on k'_{log n}, k'_{2 log n}, ..., k'_n.
   Let the results be k''_{log n}, k''_{2 log n}, ..., k''_n.
3. For 1 ≤ i ≤ n/log n, in parallel do:
   Processor i pre-adds (applies ⊕ on the left) k''_{(i-1) log n} to every value it computed in step 1.

Analysis:
Step 1 takes log n time.
Step 2 takes O(log(n / log n)) = O(log n) time.
Step 3 takes log n time.
Thus the total time is O(log n), and P T = (n / log n) · O(log n) = O(n). Thus, this algorithm is asymptotically work optimal.

Example application:
Input: X = k_1, k_2, ..., k_n (a sequence of elements) and an integer y
Output: a rearrangement of X where all the elements ≤ y appear first, followed by all other elements

An example: y = 10
Input: 8 11 7 3 15 9 16 2 4
Output:

8 7 3 9 2 4 11 15 16

The algorithm: Use a Boolean array A[1:n] such that, for all 1 ≤ i ≤ n,
    A[i] = 1 if k_i ≤ y, and 0 otherwise.
For the example:
    1 0 1 1 0 1 0 1 1
Now perform a prefix-sums computation (where "sum" refers to addition) on the array:
    1 1 2 3 3 4 4 5 6
Now you can use these prefix values as unique addresses for the elements that are ≤ y and place them in successive cells. We can use another, similar prefix computation to place the remaining elements in the successive cells that follow.

Can we do prefix calculations in O(1)? No!

Theorem (Beame and Håstad 1985): Computing the parity of n bits needs Ω(log n / log log n) time on the CRCW PRAM given only a polynomial number of processors.

(Cole and Vishkin 1983): We can solve the prefix-additions problem in O(log n / log log n) time using Arbitrary CRCW PRAM processors, provided the elements are integers in the range [1, n^c] for any constant c.

Example uses:

Sorting: X = k_1, k_2, ..., k_n. We can sort these in O(log n) time using n^2 CREW PRAM processors. This is done by giving each element n processors and letting them calculate its rank (by comparing it against every element and prefix-summing the results), and then outputting the elements in the order of their ranks.

Selection:
Input: X = k_1, k_2, ..., k_n and an i such that 1 ≤ i ≤ n
Output: The i-th smallest element of X
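The two prefix algorithms above can be sketched sequentially in Python (an assumption on my part; the lecture works in PRAM pseudocode). The block size b below stands in for log n: each block is one processor's share of the input.

```python
def prefix_dc(a, op):
    """Algorithm 1: divide-and-conquer prefix computation. On a CREW
    PRAM the two recursive calls run in parallel and the combine loop
    is one O(1) step, giving T(n) = T(n/2) + O(1) = O(log n)."""
    if len(a) == 1:
        return a[:]
    mid = len(a) // 2
    left, right = prefix_dc(a[:mid], op), prefix_dc(a[mid:], op)
    # Combine: apply the left half's total to every right-half prefix.
    return left + [op(left[-1], x) for x in right]

def prefix_work_optimal(a, op, b):
    """The work-optimal algorithm, with block size b standing in for
    log n. Steps 1 and 3 run in parallel across blocks on a PRAM."""
    # Step 1: each "processor" computes local prefixes of its block.
    local = [prefix_dc(a[i:i + b], op) for i in range(0, len(a), b)]
    # Step 2: a prefix computation over the block totals.
    carry = prefix_dc([blk[-1] for blk in local], op)
    # Step 3: processor i pre-applies carry[i-1] to its local values.
    out = list(local[0])
    for i in range(1, len(local)):
        out += [op(carry[i - 1], x) for x in local[i]]
    return out

data = [3, 1, 4, 1, 5, 9, 2, 6]
add = lambda x, y: x + y
print(prefix_dc(data, add))               # [3, 4, 8, 9, 14, 23, 25, 31]
print(prefix_work_optimal(data, add, 2))  # [3, 4, 8, 9, 14, 23, 25, 31]
```

Both produce the same prefixes; the work-optimal version simply trades one level of parallelism (within blocks) for a factor-of-log-n reduction in processors.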

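The ≤-y partitioning application can likewise be sketched in Python (the function name is mine; the sequential loops stand in for the O(log n)-time parallel prefix steps):

```python
def partition_by_prefix(x, y):
    """Stable partition via prefix-sum addressing: elements <= y
    first, then the rest, each placed at a unique output address
    read off a prefix sum of Boolean flags."""
    flags = [1 if k <= y else 0 for k in x]
    # Prefix sums of flags: addr[i] is the 1-based output slot of
    # x[i] whenever x[i] <= y.
    addr, s = [], 0
    for f in flags:
        s += f
        addr.append(s)
    small = s  # how many elements are <= y
    out = [None] * len(x)
    for i, k in enumerate(x):
        if flags[i]:
            out[addr[i] - 1] = k
    # A second prefix computation (on the complemented flags) places
    # the remaining elements after the first `small` slots.
    s = 0
    for i, k in enumerate(x):
        if not flags[i]:
            s += 1
            out[small + s - 1] = k
    return out

# The example from the notes: y = 10.
print(partition_by_prefix([8, 11, 7, 3, 15, 9, 16, 2, 4], 10))
# [8, 7, 3, 9, 2, 4, 11, 15, 16]
```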
Slow-down Lemma

Lemma: Let A be a parallel algorithm that uses p processors and runs in time T. The same algorithm can be run on p' processors in time O(pT / p'), provided p' ≤ p.

Proof: Assign ⌈p / p'⌉ processors of the old machine to each of the processors in the new machine. Each step of the old machine can then be simulated in ⌈p / p'⌉ steps on the new machine. The runtime on this new machine is therefore ⌈p / p'⌉ · T = O(pT / p').
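A minimal sketch of the lemma's arithmetic (the function name is mine):

```python
from math import ceil

def slowed_down_time(p, T, p_new):
    """Runtime after the slow-down simulation: each of the p_new
    processors simulates ceil(p / p_new) of the original p
    processors, so each original step costs ceil(p / p_new) steps."""
    assert p_new <= p
    return ceil(p / p_new) * T

# An 8-processor, 10-step algorithm rerun on 3 processors:
print(slowed_down_time(8, 10, 3))  # ceil(8/3) * 10 = 30 steps
```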