MAKING A BINARY HEAP

CSE 101 Algorithm Design and Analysis Miles Jones mej016@eng.ucsd.edu Office 4208 CSE Building Lecture 19: Divide and Conquer Design Examples / Dynamic Programming

MAKING A BINARY HEAP Base case. Break the problem up. Recursively solve each subproblem (assume the algorithm works for the subproblems). Combine the results.

MAKING A BINARY HEAP Let's assume that n = 2^k - 1. Make a binary heap out of the list of objects [(o_1, k_1), ..., (o_n, k_n)]: put (o_1, k_1) aside and break the remaining part into two halves, each of size 2^(k-1) - 1. Assume our algorithm works on the two subproblems; this results in two binary heaps. Then make (o_1, k_1) the root and let it trickle down.
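The recursive construction above can be sketched in Python. This is my own illustration, not the lecture's code: a heap node is a (key, object, left, right) tuple, and `make_heap` / `trickle_down` are assumed names.

```python
def make_heap(items):
    """Build a min-heap from a list of (obj, key) pairs, |items| = 2^k - 1."""
    if not items:
        return None
    obj, key = items[0]                      # set the first pair aside
    half = (len(items) - 1) // 2
    left = make_heap(items[1:1 + half])      # recursively heapify each half
    right = make_heap(items[1 + half:])
    return trickle_down((key, obj, left, right))  # make it the root, trickle down

def trickle_down(node):
    """Restore the heap property by swapping the root downward."""
    key, obj, left, right = node
    smaller = None                           # which child, if any, beats the root
    if left is not None and left[0] < key:
        smaller = 'L'
    if right is not None and right[0] < key and (smaller is None or right[0] < left[0]):
        smaller = 'R'
    if smaller == 'L':
        lk, lo, ll, lr = left
        return (lk, lo, trickle_down((key, obj, ll, lr)), right)
    if smaller == 'R':
        rk, ro, rl, rr = right
        return (rk, ro, left, trickle_down((key, obj, rl, rr)))
    return node
```

Each level of the recursion does O(depth) trickle-down work, which is what makes the overall construction linear.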

MAKING A BINARY HEAP

BINARY HEAP (DELETEMIN) The object with the minimum key value is guaranteed to be the root. Once you take it out, you must reorder the tree. You replace the root with the last object and let it trickle down. [M 13, D 8, B 10, J 11, C 12, H 8, L 9, Q 14, A 10, I 16, O 26, K 12, E 12, N 14, G 22 ]

BINARY HEAP (DELETEMIN) [Figure sequence: the root (the minimum) is removed; the last object, M 13, replaces it and trickles down, swapping with its smaller child at each level until the heap property is restored.]
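The deleteMin step on the array representation can be sketched as follows. This is a minimal illustration (0-indexed array, children at 2i+1 and 2i+2), not the lecture's exact code; the heap holds (key, name) pairs.

```python
def delete_min(heap):
    """Remove and return the minimum (key, name) pair; restore the heap in place."""
    top = heap[0]
    heap[0] = heap[-1]        # move the last object to the root
    heap.pop()
    i = 0
    n = len(heap)
    while True:               # let the new root trickle down
        l, r = 2 * i + 1, 2 * i + 2
        smallest = i
        if l < n and heap[l] < heap[smallest]:
            smallest = l
        if r < n and heap[r] < heap[smallest]:
            smallest = r
        if smallest == i:     # both children are larger: heap property restored
            return top
        heap[i], heap[smallest] = heap[smallest], heap[i]
        i = smallest
```

Repeated calls return the objects in increasing key order, each in O(log n) time.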

MINIMUM DISTANCE Given a list of coordinates in the plane, find the distance between the closest pair.

MINIMUM DISTANCE distance((x_i, y_i), (x_j, y_j)) = sqrt((x_i - x_j)^2 + (y_i - y_j)^2)

MINIMUM DISTANCE Given a list of coordinates, [(x_1, y_1), ..., (x_n, y_n)], find the distance between the closest pair. Brute force solution? min = infinity; for i from 1 to n-1: for j from i+1 to n: if distance((x_i, y_i), (x_j, y_j)) < min: min = distance((x_i, y_i), (x_j, y_j)); return min
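The brute-force scan above, written out in Python (the function name is mine):

```python
from math import dist, inf  # math.dist (Python 3.8+) is Euclidean distance

def closest_pair_brute(points):
    """Return the smallest pairwise distance among a list of (x, y) points."""
    best = inf
    n = len(points)
    for i in range(n - 1):
        for j in range(i + 1, n):        # j > i, so each pair is checked once
            d = dist(points[i], points[j])
            if d < best:
                best = d
    return best
```

Every pair is examined once, so this is Theta(n^2) distance computations.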

MINIMUM DISTANCE Base case. Break the problem up. Recursively solve each problem. Assume the algorithm works for the subproblems Combine the results.

BASE CASE if n = 2 then return distance((x_1, y_1), (x_2, y_2))

EXAMPLE [figure: a set of points in the plane, split by a vertical line at x = x_m]

BREAK THE PROBLEM INTO SMALLER PIECES

BREAK THE PROBLEM INTO SMALLER PIECES We will break the problem in half. Sort the points by their x values. Then find a value x_m such that half of the x values are on the left and half are on the right.


BREAK THE PROBLEM INTO SMALLER PIECES Usually the smaller pieces are each of size n/2. We will break the problem in half. Sort the points by their x values. Then find a value x_m such that half of the x values are on the left and half are on the right. Perform the algorithm on each side. Assume our algorithm works!! What does that give us?

BREAK THE PROBLEM INTO SMALLER PIECES Usually the smaller pieces are each of size n/2. We will break the problem in half. Sort the points by their x values. Then find a value x_m such that half of the x values are on the left and half are on the right. Perform the algorithm on each side. Assume our algorithm works!! What does that give us? It gives us the distance of the closest pair on the left and on the right; let's call them d_L and d_R.


EXAMPLE [figure: the same points, with d_L the closest-pair distance found left of x_m and d_R the closest-pair distance found right of x_m]

COMBINE How will we use this information to find the distance of the closest pair in the whole set?

COMBINE How will we use this information to find the distance of the closest pair in the whole set? We must consider if there is a closest pair where one point is in the left half and one is in the right half. How do we do this?


COMBINE How will we use this information to find the distance of the closest pair in the whole set? We must consider if there is a closest pair where one point is in the left half and one is in the right half. How do we do this? Let d = min(d_L, d_R) and compare only the points (x_i, y_i) such that x_m - d <= x_i and x_i <= x_m + d. Worst case, how many points could this be?

EXAMPLE [figure: the strip of width 2d centered at x_m, where d = min(d_L, d_R)]

COMBINE STEP Let P_m be the set of points within d of x_m. Then P_m may contain as many as n different points, so comparing all the points in P_m with each other would take O(n^2) many comparisons. So the runtime recursion is:

COMBINE STEP Let P_m be the set of points within d of x_m. Then P_m may contain as many as n different points, so comparing all the points in P_m with each other would take O(n^2) many comparisons. So the runtime recursion is: T(n) = 2T(n/2) + O(n^2), which gives T(n) = O(n^2). Can we improve the combine term?


COMBINE STEP Given a point (x, y) in P_m, let's look in a 2d x d rectangle with that point at its upper boundary. How many points could possibly be in this rectangle?

COMBINE STEP Given a point (x, y) in P_m, let's look in a 2d x d rectangle with that point at its upper boundary. There could not be more than 8 points total, because if we divide the rectangle into 8 squares of side d/2, then there can never be more than one point per square. Why???

COMBINE STEP So instead of comparing (x, y) with every other point in P_m, we only have to compare it with the next 7 points below it. To gain quick access to these points, let's sort the points in P_m by y values. Now, if there are k points in P_m, we have to sort them in O(k log k) time and make at most 7k comparisons in O(k) time, for a total combine step of O(k log k). But we said that in the worst case there are n points in P_m, so in the worst case the combine step takes O(n log n) time.

COMBINE STEP But we said that in the worst case there are n points in P_m, so in the worst case the combine step takes O(n log n) time. Runtime recursion: T(n) = 2T(n/2) + O(n log n)

COMBINE STEP But we said that in the worst case there are n points in P_m, so in the worst case the combine step takes O(n log n) time. Runtime recursion: T(n) = 2T(n/2) + O(n log n) Can anyone improve on this runtime?
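The whole divide-and-conquer algorithm the slides build up can be sketched as follows. This is my own illustration (helper names are assumptions): split at the median x, recurse on each half, then check the strip of width 2d around x_m, comparing each strip point with at most 7 successors in y-order. Because the strip is re-sorted inside every call, this matches the T(n) = 2T(n/2) + O(n log n) recursion.

```python
from math import dist, inf

def closest_pair(points):
    """Distance of the closest pair among (x, y) points, divide and conquer."""
    return _closest(sorted(points))          # sort once by x

def _closest(pts):
    n = len(pts)
    if n <= 3:                               # small base case: brute force
        return min((dist(pts[i], pts[j])
                    for i in range(n) for j in range(i + 1, n)),
                   default=inf)
    mid = n // 2
    x_m = pts[mid][0]                        # dividing line
    d = min(_closest(pts[:mid]), _closest(pts[mid:]))
    # strip of points within d of the dividing line, sorted by y value
    strip = sorted((p for p in pts if abs(p[0] - x_m) < d),
                   key=lambda p: p[1])
    for i, p in enumerate(strip):
        for q in strip[i + 1:i + 8]:         # at most 7 comparisons per point
            if q[1] - p[1] >= d:             # rest of the strip is too far up
                break
            d = min(d, dist(p, q))
    return d
```

Sorting the strip by y once globally (merge-sort style) instead of inside each call is the standard improvement to O(n log n) overall.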

DYNAMIC PROGRAMMING Dynamic programming is an algorithmic paradigm in which a problem is solved by identifying a collection of subproblems and tackling them one by one, smallest first, using the answers to small problems to help figure out larger ones, until they are all solved. Examples:

DYNAMIC PROGRAMMING Dynamic programming is an algorithmic paradigm in which a problem is solved by identifying a collection of subproblems and tackling them one by one, smallest first, using the answers to small problems to help figure out larger ones, until they are all solved. Examples: findmax, findmin, fib2,

DYNAMIC PROGRAMMING (SHORTEST PATH IN DAGS)


DYNAMIC PROGRAMMING (SHORTEST PATH IN DAGS) Shortest distance from D to another node x will be denoted dist(x). Notice that the shortest distance from D to C is dist(C) =

DYNAMIC PROGRAMMING (SHORTEST PATH IN DAGS) Shortest distance from D to another node x will be denoted dist(x). Notice that the shortest distance from D to C is dist(C) = min(dist(E) + 5, dist(B) + 2)

DYNAMIC PROGRAMMING (SHORTEST PATH IN DAGS) Shortest distance from D to another node x will be denoted dist(x). Notice that the shortest distance from D to C is dist(C) = min(dist(E) + 5, dist(B) + 2) This kind of relation can be written for every node. Since it's a DAG, the arrows only go to the right (in linearized order), so by the time we get to node x, we have all the information needed!!

DYNAMIC PROGRAMMING (SHORTEST PATH IN DAGS) Step 1: Define the subproblems. Step 2: Base case. Step 3: Express recursively. Step 4: Order the subproblems.

DYNAMIC PROGRAMMING (SHORTEST PATH IN DAGS) Step 1: Define the subproblems: the distance to the ith vertex. Step 2: Base case: the distance from the first vertex to itself is 0. Step 3: Express recursively: dist(v) = min over edges (u,v) in E of dist(u) + l(u, v). Step 4: Order the subproblems: linearized order.

DYNAMIC PROGRAMMING (SHORTEST PATH IN DAGS) initialize all dist(.) values to infinity; dist(s) := 0; for each v in V\{s} in linearized order: dist(v) = min over (u,v) in E of dist(u) + l(u, v). Like D&C, this algorithm solves a family of subproblems. We start with dist(s) = 0 and we get to the larger subproblems in linearized order by using the smaller subproblems.
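The slide's pseudocode can be sketched directly in Python. The graph format (a dict of incoming edges with lengths) and the function name are my own assumptions, not the lecture's:

```python
from math import inf

def dag_shortest_paths(order, incoming, s):
    """order: vertices in linearized (topological) order.
    incoming[v]: list of (u, length) pairs, one per edge u -> v.
    Returns dist(v) from source s for every vertex."""
    dist = {v: inf for v in order}   # initialize all dist(.) values to infinity
    dist[s] = 0
    for v in order:                  # process subproblems in linearized order
        if v == s:
            continue
        for u, length in incoming.get(v, []):
            dist[v] = min(dist[v], dist[u] + length)
    return dist
```

Because every edge is examined exactly once, the runtime is O(|V| + |E|), and negative edge lengths are fine (the graph just has to be a DAG).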

DP (LONGEST INCREASING SUBSEQUENCE) Given a sequence of distinct positive integers a[1],,a[n] An increasing subsequence is a sequence a[i_1],,a[i_k] such that i_1< <i_k and a[i_1]< <a[i_k]. For Example: 15, 18, 8, 11, 5, 12, 16, 2, 20, 9, 10, 4 5, 16, 20 is an increasing subsequence. How long is the longest increasing subsequence?

DP (LONGEST INCREASING SUBSEQUENCE) Let s make a DAG out of our example: 15 18 8 11 5 12 16 2 20 9 10 4

DP (LONGEST INCREASING SUBSEQUENCE) Let s make a DAG out of our example: 15 18 8 11 5 12 16 2 20 9 10 4 Now, instead of finding the longest increasing subsequence of a list of integers, we are finding the longest path in a DAG!!!!

DYNAMIC PROGRAMMING (LONGEST INCREASING SUBSEQUENCE) Step 1: Define the subproblems. Step 2: Base case. Step 3: Express recursively. Step 4: Order the subproblems.

DYNAMIC PROGRAMMING (LONGEST INCREASING SUBSEQUENCE) Step 1: Define the subproblems: L(k) will be the length of the longest increasing subsequence ending exactly at position k. Step 2: Base case: L(1) = 1 (a single element is an increasing subsequence of length 1). Step 3: Express recursively: L(j) = 1 + max({L(i) : (i,j) is an edge}), taking the max to be 0 when j has no incoming edges. Step 4: Order the subproblems: from left to right.

DP (LONGEST INCREASING SUBSEQUENCE) Finding the longest path in the DAG: L[1] := 1; for j = 2 ... n: L[j] = 1 + max({L[i] : (i,j) is an edge}), taking the max to be 0 when j has no incoming edges, and prev(j) = the i achieving the max; return max({L[j]})


DP (LONGEST INCREASING SUBSEQUENCE) Finding the longest path in the DAG: L[1] := 1; for j = 2 ... n: L[j] = 1 + max({L[i] : (i,j) is an edge}), taking the max to be 0 when j has no incoming edges, and prev(j) = the i achieving the max; return max({L[j]}) How long does this take?

DP (LONGEST INCREASING SUBSEQUENCE) How long does this take? To solve L[j] = 1 + max({L[i] : (i,j) is an edge}), we need to know L[i] for each edge (i,j) in E; the number of such edges is the indegree of j. Summing over all vertices, the sum over j in V of d_in(j) = |E|, so the runtime is O(|E|).
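The LIS recurrence can be sketched in Python without building the DAG explicitly: an edge (i, j) exists exactly when i < j and a[i] < a[j], so the inner loop just scans earlier positions. This is my own illustration of the slides' DP, with prev pointers used to reconstruct one longest subsequence:

```python
def longest_increasing_subsequence(a):
    """Return (length of LIS, one longest increasing subsequence of a)."""
    n = len(a)
    L = [1] * n                 # every position ends an LIS of length >= 1
    prev = [None] * n           # predecessor pointers for reconstruction
    for j in range(n):
        for i in range(j):
            if a[i] < a[j] and L[i] + 1 > L[j]:   # edge (i, j) in the DAG
                L[j] = L[i] + 1
                prev[j] = i
    # follow prev pointers back from the best endpoint
    j = max(range(n), key=lambda k: L[k])
    seq = []
    while j is not None:
        seq.append(a[j])
        j = prev[j]
    return max(L), seq[::-1]
```

The double loop touches each potential edge once, so the runtime is O(n^2) in the worst case, matching the O(|E|) bound above.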

DP (LONGEST INCREASING SUBSEQUENCE) The runtime is dependent on the number of edges in the DAG. Note what happens if the sequence is increasing: 1 2 3 4 5 ... And if the sequence is decreasing: 10 9 8 7 6 ...

DP (LONGEST INCREASING SUBSEQUENCE) The runtime is dependent on the number of edges in the DAG. What are the maximum and minimum number of edges?

DP (LONGEST INCREASING SUBSEQUENCE) The runtime is dependent on the number of edges in the DAG. Note that if the sequence is increasing, 1 2 3 4 5 ..., then |E| = (n choose 2) = n(n-1)/2. If the sequence is decreasing, 10 9 8 7 6 ..., then |E| = 0.

DP (LONGEST INCREASING SUBSEQUENCE) What is the expected number of edges?