MAKING A BINARY HEAP


CSE 101 Algorithm Design and Analysis Miles Jones mej016@eng.ucsd.edu Office 4208 CSE Building Lecture 19: Divide and Conquer Design examples/Dynamic Programming

MAKING A BINARY HEAP
Base case.
Break the problem up.
Recursively solve each problem (assume the algorithm works for the subproblems).
Combine the results.

MAKING A BINARY HEAP Let's assume that n = 2^k - 1. Make a binary heap out of the list of objects [(o_1, k_1), ..., (o_n, k_n)]: put (o_1, k_1) aside and break the remaining part into 2 pieces, each of size 2^(k-1) - 1. Assume our algorithm works on the two subproblems; this results in two binary heaps. Then make (o_1, k_1) the root and let it trickle down.

BINARY HEAP Put the first object as the root of the two subtrees and let it trickle down.
[M 13, D 8, B 10, J 11, C 12, H 8, L 9, Q 14, A 10, I 16, O 26, K 12, E 12, N 14, G 22]
[Figure: successive slides show the two sub-heaps built from the rest of the list, then M 13 placed as the root of both and trickling down until the heap property is restored.]
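The construction above can be sketched in Python on the array form of a heap. This is a sketch, not the slides' code: `make_heap` and `trickle_down` are names chosen here, items are (label, key) pairs as in the example, and a min-heap order on keys is assumed.

```python
def trickle_down(a, i):
    """Sink the (label, key) pair at index i until the min-heap property holds."""
    n = len(a)
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        smallest = i
        if left < n and a[left][1] < a[smallest][1]:
            smallest = left
        if right < n and a[right][1] < a[smallest][1]:
            smallest = right
        if smallest == i:
            return
        a[i], a[smallest] = a[smallest], a[i]
        i = smallest

def make_heap(a, i=0):
    """Divide-and-conquer heap construction, as on the slides:
    recursively make heaps out of the two subtrees, then let the
    element at the root trickle down."""
    n = len(a)
    left, right = 2 * i + 1, 2 * i + 2
    if left >= n:               # base case: a leaf is already a heap
        return
    make_heap(a, left)          # solve the two subproblems...
    if right < n:
        make_heap(a, right)
    trickle_down(a, i)          # ...then combine by trickling down
```

Running `make_heap` on the slide's 15-element list brings D 8 (or H 8) to the root, matching the pictures.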

MINIMUM DISTANCE Given a list of coordinates in the plane, find the distance between the closest pair.

MINIMUM DISTANCE distance((x_i, y_i), (x_j, y_j)) = sqrt((x_i - x_j)^2 + (y_i - y_j)^2)

MINIMUM DISTANCE Given a list of coordinates, [(x_1, y_1), ..., (x_n, y_n)], find the distance between the closest pair. Brute force solution?
min = ∞
for i from 1 to n-1:
  for j from i+1 to n:
    if min > distance((x_i, y_i), (x_j, y_j)):
      min = distance((x_i, y_i), (x_j, y_j))
return min
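A runnable version of the brute force (with min initialized to infinity and updated inside the loop, then returned at the end); `closest_pair_brute` is an illustrative name, and Python's `math.dist` supplies the Euclidean distance.

```python
import math

def closest_pair_brute(points):
    """Smallest pairwise distance by checking every pair: O(n^2) comparisons."""
    best = math.inf
    n = len(points)
    for i in range(n - 1):
        for j in range(i + 1, n):
            best = min(best, math.dist(points[i], points[j]))
    return best
```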

MINIMUM DISTANCE
Base case.
Break the problem up.
Recursively solve each problem (assume the algorithm works for the subproblems).
Combine the results.

BASE CASE if n = 2 then return distance((x_1, y_1), (x_2, y_2))

EXAMPLE [Figure: points in the plane, split by a vertical line at x = x_m]

BREAK THE PROBLEM INTO SMALLER PIECES

BREAK THE PROBLEM INTO SMALLER PIECES We will break the problem in half. Sort the points by their x values. Then find a value x m such that half of the x values are on the left and half are on the right.


BREAK THE PROBLEM INTO SMALLER PIECES Usually the smaller pieces are each of size n/2. We will break the problem in half. Sort the points by their x values. Then find a value x m such that half of the x values are on the left and half are on the right. Perform the algorithm on each side. Assume our algorithm works!! What does that give us?

BREAK THE PROBLEM INTO SMALLER PIECES Usually the smaller pieces are each of size n/2. We will break the problem in half. Sort the points by their x values. Then find a value x_m such that half of the x values are on the left and half are on the right. Perform the algorithm on each side. Assume our algorithm works!! What does that give us? It gives us the distance of the closest pair on the left and on the right; let's call them d_L and d_R.


EXAMPLE [Figure: the left half with closest-pair distance d_L and the right half with d_R, split at x = x_m]

COMBINE How will we use this information to find the distance of the closest pair in the whole set?

COMBINE How will we use this information to find the distance of the closest pair in the whole set? We must consider if there is a closest pair where one point is in the left half and one is in the right half. How do we do this?


COMBINE How will we use this information to find the distance of the closest pair in the whole set? We must consider if there is a closest pair where one point is in the left half and one is in the right half. How do we do this? Let d = min(d_L, d_R) and compare only the points (x_i, y_i) such that x_m - d <= x_i <= x_m + d. Worst case, how many points could this be?

EXAMPLE [Figure: the strip of width 2d around x = x_m, containing the points within distance d of the dividing line]

COMBINE STEP Let P_m be the set of points within d of x_m. Then P_m may contain as many as n different points. So, to compare all the points in P_m with each other would take n^2 many comparisons. So the runtime recursion is:

COMBINE STEP Let P_m be the set of points within d of x_m. Then P_m may contain as many as n different points. So, to compare all the points in P_m with each other would take n^2 many comparisons. So the runtime recursion is:
T(n) = 2T(n/2) + O(n^2)
T(n) = O(n^2)
Can we improve the combine term?


COMBINE STEP Given a point (x, y) ∈ P_m, let's look in a 2d × d rectangle with that point at its upper boundary. How many points could possibly be in this rectangle?

COMBINE STEP Given a point (x, y) ∈ P_m, let's look in a 2d × d rectangle with that point at its upper boundary. There could not be more than 8 points total, because if we divide the rectangle into 8 squares of size d/2 × d/2, then there can never be more than one point per square. Why???

COMBINE STEP So instead of comparing (x, y) with every other point in P_m, we only have to compare it with the next 7 points below it. To gain quick access to these points, let's sort the points in P_m by y values. Now, if there are k vertices in P_m, we have to sort the vertices in O(k log k) time and make at most 7k comparisons in O(k) time, for a total combine step of O(k log k). But we said that in the worst case there are n vertices in P_m, and so worst case, the combine step takes O(n log n) time.

COMBINE STEP But we said that in the worst case there are n vertices in P_m, and so worst case, the combine step takes O(n log n) time. Runtime recursion: T(n) = 2T(n/2) + O(n log n)

T(n) = O( n^d · Σ_{k=0}^{log_b n} (a / b^d)^k )

COMBINE STEP But we said that in the worst case there are n vertices in P_m, and so worst case, the combine step takes O(n log n) time. Runtime recursion: T(n) = 2T(n/2) + O(n log n) Can anyone improve on this runtime?
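The whole divide and conquer above can be sketched as follows. This is a sketch under the combine step as described on the slides (re-sorting the strip by y inside every call, giving T(n) = 2T(n/2) + O(n log n)); `closest_pair` is an illustrative name, and at least two input points are assumed.

```python
import math

def closest_pair(points):
    """Closest-pair distance via divide and conquer (assumes len(points) >= 2).
    The combine step sorts the strip by y, so T(n) = 2T(n/2) + O(n log n)."""
    pts = sorted(points)                      # sort once by x value

    def solve(p):
        n = len(p)
        if n <= 3:                            # base case: brute force
            return min(math.dist(a, b)
                       for i, a in enumerate(p) for b in p[i + 1:])
        mid = n // 2
        xm = p[mid][0]                        # dividing line x = x_m
        d = min(solve(p[:mid]), solve(p[mid:]))
        # P_m: the strip of points with |x - x_m| <= d, sorted by y
        strip = sorted((q for q in p if abs(q[0] - xm) <= d),
                       key=lambda q: q[1])
        for i, a in enumerate(strip):         # compare each point with the
            for b in strip[i + 1:i + 8]:      # next <= 7 points above it
                d = min(d, math.dist(a, b))
        return d

    return solve(pts)
```

(The question on the slide stands: the extra log factor can be removed by keeping the points pre-sorted by y instead of re-sorting the strip, giving O(n log n) overall.)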

DYNAMIC PROGRAMMING Dynamic programming is an algorithmic paradigm in which a problem is solved by identifying a collection of subproblems and tackling them one by one, smallest first, using the answers to small problems to help figure out larger ones, until they are all solved. Examples:

DYNAMIC PROGRAMMING Dynamic programming is an algorithmic paradigm in which a problem is solved by identifying a collection of subproblems and tackling them one by one, smallest first, using the answers to small problems to help figure out larger ones, until they are all solved. Examples: findmax, findmin, fib2,

DYNAMIC PROGRAMMING (SHORTEST PATH IN DAGS) [Figure: a small weighted DAG drawn in linearized order]

DYNAMIC PROGRAMMING (SHORTEST PATH IN DAGS) Shortest distance from D to another node x will be denoted dist(x). Notice that the shortest distance from D to C is dist(C) =

DYNAMIC PROGRAMMING (SHORTEST PATH IN DAGS) Shortest distance from D to another node x will be denoted dist(x). Notice that the shortest distance from D to C is dist(C) = min(dist(E) + 5, dist(B) + 2)

DYNAMIC PROGRAMMING (SHORTEST PATH IN DAGS) Shortest distance from D to another node x will be denoted dist(x). Notice that the shortest distance from D to C is dist(C) = min(dist(E) + 5, dist(B) + 2) This kind of relation can be written for every node. Since it's a DAG, the arrows only go to the right, so by the time we get to node x, we have all the information needed!!

DYNAMIC PROGRAMMING (SHORTEST PATH IN DAGS) Step 1: Define the subproblems: Step 2: Base case: Step 3: Express recursively: Step 4: Order the subproblems

DYNAMIC PROGRAMMING (SHORTEST PATH IN DAGS) Step 1: Define the subproblems: the distance to the ith vertex. Step 2: Base case: the distance from the first vertex to itself is 0. Step 3: Express recursively: dist(v) = min over edges (u,v) ∈ E of (dist(u) + l(u,v)). Step 4: Order the subproblems: linearized order.

DYNAMIC PROGRAMMING (SHORTEST PATH IN DAGS)
initialize all dist(.) values to infinity
dist(s) := 0
for each v ∈ V \ {s} in linearized order:
  dist(v) = min over edges (u,v) ∈ E of (dist(u) + l(u,v))
Like D/C, this algorithm solves a family of subproblems. We start with dist(s) = 0 and we get to the larger subproblems in linearized order by using the smaller subproblems.
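The pseudocode above, as a sketch in Python. The linearized (topological) order and the incoming-edge lists are assumed to be given; the function name and input layout are illustrative.

```python
import math

def dag_shortest_paths(order, in_edges, s):
    """Single-source shortest paths in a DAG.
    order    : vertices in linearized (topological) order
    in_edges : dict mapping v -> list of (u, length) pairs, one per edge (u, v)
    s        : the source vertex
    """
    dist = {v: math.inf for v in order}   # initialize all dist(.) to infinity
    dist[s] = 0
    for v in order:                       # process in linearized order
        if v == s:
            continue
        for u, length in in_edges.get(v, []):
            dist[v] = min(dist[v], dist[u] + length)
    return dist
```

By the time we reach v, every predecessor u already has its final dist(u), which is exactly the point of the linearized order.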

DP (LONGEST INCREASING SUBSEQUENCE) Given a sequence of distinct positive integers a[1], ..., a[n]. An increasing subsequence is a sequence a[i_1], ..., a[i_k] such that i_1 < ... < i_k and a[i_1] < ... < a[i_k]. For example, in 15, 18, 8, 11, 5, 12, 16, 2, 20, 9, 10, 4, the sequence 5, 16, 20 is an increasing subsequence. How long is the longest increasing subsequence?

DP (LONGEST INCREASING SUBSEQUENCE) Let s make a DAG out of our example: 15 18 8 11 5 12 16 2 20 9 10 4

DP (LONGEST INCREASING SUBSEQUENCE) Let's make a DAG out of our example (draw an edge from position i to position j whenever i < j and a[i] < a[j]): 15 18 8 11 5 12 16 2 20 9 10 4 Now, instead of finding the longest increasing subsequence of a list of integers, we are finding the longest path in a DAG!!!!

DYNAMIC PROGRAMMING (LONGEST INCREASING SUBSEQUENCE) Step 1: Define the subproblems: Step 2: Base case: Step 3: Express recursively: Step 4: Order the subproblems

DYNAMIC PROGRAMMING (LONGEST INCREASING SUBSEQUENCE) Step 1: Define the subproblems: L(k) will be the length of the longest increasing subsequence ending exactly at position k. Step 2: Base case: L(1) = 1 (a single element is an increasing subsequence of length 1). Step 3: Express recursively: L(k) = 1 + max({L(i) : (i,k) is an edge} ∪ {0}). Step 4: Order the subproblems: from left to right.

DP (LONGEST INCREASING SUBSEQUENCE) Finding longest path in a DAG:
for j = 1 ... n:
  L[j] = 1 + max({L[i] : (i,j) is an edge} ∪ {0})
  prev(j) = the i achieving the max (if any)
return max({L[j]})


DP (LONGEST INCREASING SUBSEQUENCE) Finding longest path in a DAG:
for j = 1 ... n:
  L[j] = 1 + max({L[i] : (i,j) is an edge} ∪ {0})
  prev(j) = the i achieving the max (if any)
return max({L[j]})
How long does this take?

DP (LONGEST INCREASING SUBSEQUENCE) Finding longest path in a DAG:
for j = 1 ... n:
  L[j] = 1 + max({L[i] : (i,j) is an edge} ∪ {0})
  prev(j) = the i achieving the max (if any)
return max({L[j]})
How long does this take? To solve L[j] = 1 + max({L[i] : (i,j) is an edge}), we need to know L[i] for each edge (i,j) in E. The number of terms in the max is the indegree of j. Summing over all vertices, Σ_{j ∈ V} d_in(j) = |E|, so the runtime is O(|E|).
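Since the DAG's edges are exactly the pairs i < j with a[i] < a[j], the recurrence can be run directly on the sequence without building the graph explicitly. A sketch with illustrative names, including the prev pointers for reconstructing the subsequence:

```python
def longest_increasing_subsequence(a):
    """Length of the LIS via the DAG view: L[j] = 1 + max L[i] over
    edges (i, j), i.e. over i < j with a[i] < a[j].
    Checking all pairs makes this O(n^2) worst case, matching O(|E|)."""
    n = len(a)
    L = [1] * n                    # a single element is an LIS of length 1
    prev = [None] * n              # predecessor pointers for reconstruction
    for j in range(n):
        for i in range(j):
            if a[i] < a[j] and 1 + L[i] > L[j]:
                L[j] = 1 + L[i]
                prev[j] = i
    return max(L, default=0)
```

On the slides' example, 15, 18, 8, 11, 5, 12, 16, 2, 20, 9, 10, 4, the answer is 5 (one witness is 8, 11, 12, 16, 20).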

DP (LONGEST INCREASING SUBSEQUENCE) The runtime is dependent on the number of edges in the DAG. Note what happens if the sequence is increasing: 1 2 3 4 5 ... And if the sequence is decreasing: 10 9 8 7 6 ...

DP (LONGEST INCREASING SUBSEQUENCE) The runtime is dependent on the number of edges in the DAG. What are the maximum and minimum number of edges?

DP (LONGEST INCREASING SUBSEQUENCE) The runtime is dependent on the number of edges in the DAG. Note that if the sequence is increasing, 1 2 3 4 5 ..., then |E| = (n choose 2). If the sequence is decreasing, 10 9 8 7 6 ..., then |E| = 0.

DP (LONGEST INCREASING SUBSEQUENCE) What is the expected number of edges?