Week 7 Solution

1. You are given two implementations for finding the nth Fibonacci number F(n). Fibonacci numbers are defined by F(n) = F(n-1) + F(n-2) with F(0) = 0 and F(1) = 1. The two implementations are:

Approach 1

int fib(int n) {
    if (n <= 1) return n;
    return fib(n-1) + fib(n-2);
}

Approach 2

int fib(int n) {
    /* array to store Fibonacci numbers */
    int f[n+1];
    int i;
    f[0] = 0;
    f[1] = 1;
    for (i = 2; i <= n; i++) {
        f[i] = f[i-1] + f[i-2];
    }
    return f[n];
}

Which of the two algorithms has better time complexity?
1. Approach 1
2. Approach 2
3. Both have the same time complexity

Answer: 2
Explanation: Approach 2 runs in linear time, while Approach 1 takes exponential time.
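(Aside, not part of the original question: the same linear-time behaviour can also be obtained top-down by memoizing Approach 1. The sketch below is illustrative only; fib_memo and the caller-managed memo array initialized to -1 are names chosen here, not from the assignment.)

#include <stdio.h>

/* Top-down memoized variant of Approach 1: same O(n) time as Approach 2,
   but keeps the recursive structure. memo[i] == -1 means "not computed yet". */
int fib_memo(int n, int *memo) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];              /* reuse stored value */
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo);
    return memo[n];
}

int main(void) {
    int n = 10;
    int memo[11];
    for (int i = 0; i <= n; i++) memo[i] = -1;
    printf("%d\n", fib_memo(n, memo));              /* prints 55 */
    return 0;
}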

2. Consider the problem of matrix chain multiplication.

Let p_0, p_1, p_2, ..., p_n be the dimensions of the matrices A_1, A_2, ..., A_n, such that the dimension of A_i is p_{i-1} x p_i. The structure of an optimal solution is A_{i..j} = (A_i ... A_k)(A_{k+1} ... A_j) for 1 ≤ i ≤ j ≤ n. Let m[i,j] denote the minimum number of multiplications needed to compute A_{i..j}. The optimum cost can be described by which of the following recursive definitions?

A. m[i,j] = min over i ≤ k < j of ( m[i,k] + m[k+1,j] + p_{i-1} p_k p_j )
B. m[i,j] = min over i ≤ k ≤ j of ( m[i,k] + m[k+1,j] + p_{i-1} p_k p_j )
C. m[i,j] = min over i ≤ k < j of ( m[i,k] + m[k,j] + p_{i-1} p_k p_j )
D. m[i,j] = min over i ≤ k < j of ( m[i,k-1] + m[k+1,j] + p_{i-1} p_k p_j )

Answer: A

3. Consider the following 4 matrices:
A: 5 x 4
B: 4 x 6
C: 6 x 2
D: 2 x 7
Using the recursive definition for matrix chain multiplication, compute the values for x and y in the table below.

A. x = 88 and y = 120
B. x = 116 and y = 88
C. x = 120 and y = 116
D. x = 120 and y = 88

Answer: D

4. What is the total number of scalar multiplications required to multiply the 4 matrices defined in the question above?
A. 36
B. 168
C. 158
D. 104

Answer: C
Explanation: The answer can be verified by filling in the table using the recursive definition.
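(Aside: the table values in questions 3 and 4 can be checked with a small bottom-up implementation of the recurrence from question 2. The following sketch is illustrative only and not part of the original solution; it prints m[1][2] = 120, m[1][3] = 88 and m[1][4] = 158 for the dimensions above.)

#include <stdio.h>
#include <limits.h>

/* Bottom-up matrix-chain cost for dimensions p[0..n]: matrix A_i is
   p[i-1] x p[i].  m[i][j] = minimum number of scalar multiplications
   needed to compute A_i ... A_j. */
int main(void) {
    int p[] = {5, 4, 6, 2, 7};            /* A:5x4, B:4x6, C:6x2, D:2x7 */
    int n = 4;                            /* number of matrices */
    long m[5][5] = {0};

    for (int len = 2; len <= n; len++) {          /* chain length */
        for (int i = 1; i + len - 1 <= n; i++) {
            int j = i + len - 1;
            m[i][j] = LONG_MAX;
            for (int k = i; k < j; k++) {         /* split point */
                long cost = m[i][k] + m[k+1][j] + (long)p[i-1] * p[k] * p[j];
                if (cost < m[i][j]) m[i][j] = cost;
            }
        }
    }
    /* prints: m[1][2]=120 m[1][3]=88 m[1][4]=158 */
    printf("m[1][2]=%ld m[1][3]=%ld m[1][4]=%ld\n", m[1][2], m[1][3], m[1][4]);
    return 0;
}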

5.

int fun(int n) {
    int T[n+1];
    T[0] = T[1] = 2;
    T[2] = 2*T[0]*T[1];
    for (int i = 3; i < n; i++)
        T[i] = T[i-1] + 2*T[i-1]*T[i-2];
    return T[n];
}

Which of the following corresponds to the space and time complexity of the above code?
A. O(n) & O(n)
B. O(n) & O(n^2)
C. O(n^2) & O(n)

Answer: A

6.

int fun(int n) {
    int T[n+1];
    T[0] = T[1] = 2;
    T[2] = 2*T[0]*T[1];
    for (int i = 3; i < n; i++)
        T[i] = T[i-1] + 2*T[i-1]*T[i-2];
    return T[n];
}

If T[0] = T[1] = 2, then for n > 1 the recurrence for the above code can be given by:
A. T[n] = Σ_{i=1}^{n-1} 2·T[i]·T[i-1]
B. T[n] = Σ_{i=1}^{n-1} 2·T[i-1]·T[i-2]
C. T[n] = Σ_{i=1}^{n} 2·T[i]·T[i-1]

Answer: A
Explanation: Unrolling T[i] = T[i-1] + 2·T[i-1]·T[i-2] down to T[2] = 2·T[0]·T[1] gives T[n] = Σ_{i=1}^{n-1} 2·T[i]·T[i-1].

7. Given n types of coin denominations of values V_1 < V_2 < ... < V_n (integers), let V_1 = 1, so that for any amount of money M change can always be made. If T(j) denotes the minimum number of coins required to make change for the amount of money equal to j, and coin V_k was the last denomination added to the solution, then which recurrence best describes T(j)?
A. min_k { T(j - V_k) + 1 }
B. min_k { T(j - V_k) }
C. T(j - V_k) + 1

Answer: A
Explanation: If the coin denomination V_k was the last denomination added to the solution, then the optimal way to finish is to make change optimally for the amount j - V_k and then add one extra coin of value V_k.
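(Aside: a minimal bottom-up sketch of the recurrence from question 7, run here on the denominations of question 8; it is illustrative and not part of the original solution.)

#include <stdio.h>
#include <limits.h>

/* Coin-change DP: T[j] = min over k of { T[j - V[k]] + 1 }, T[0] = 0. */
int main(void) {
    int V[] = {1, 3, 4, 5};
    int n = 4;                      /* number of denominations   */
    int M = 7;                      /* amount to make change for */
    int T[8];                       /* T[j] = fewest coins for amount j */

    T[0] = 0;
    for (int j = 1; j <= M; j++) {
        T[j] = INT_MAX;
        for (int k = 0; k < n; k++) {
            if (V[k] <= j && T[j - V[k]] != INT_MAX && T[j - V[k]] + 1 < T[j])
                T[j] = T[j - V[k]] + 1;   /* use one coin of value V[k] */
        }
    }
    printf("%d\n", T[M]);           /* prints 2: the optimal 3 + 4 of question 8 */
    return 0;
}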

8. Consider the set of denominations {1, 3, 4, 5} in a currency. What is the number of coins returned by the greedy strategy to make change for 7 rupees? Also, what is the number of coins in an optimal solution?
A. 3, 2
B. 2, 3
C. 3, 3

Answer: A
Explanation: Using the greedy strategy we get one coin of denomination 5 and two coins of denomination 1. The best solution is to pick one coin of denomination 3 and one coin of denomination 4.

9. We use the dynamic programming approach when
A. The solution has optimal substructure
B. The problem can be divided into subproblems which are overlapping
C. The problem can be divided into subproblems and a global solution can be achieved by making locally optimal choices
D. A & C
E. A & B

Answer: E
Explanation: DP needs optimal substructure and overlap between subproblems.

10. We use the greedy approach when
A. The solution has optimal substructure
B. The problem can be divided into subproblems which are overlapping
C. The problem can be divided into subproblems and a global solution can be achieved by making locally optimal choices
D. A & C
E. A & B

Answer: D
Explanation: Greedy needs optimal substructure, and the global solution should be obtainable by making locally optimal choices.

11. Find the length of the shortest path from C to H.

A. 8
B. 5
C. 7
D. 4

Answer: B
Explanation: C → E → F → H

12. Given the assembly line above with 6 stations, find the fastest time to get through the entire factory.
A. 38
B. 36
C. 39

Answer: A

13. A naive way to calculate the nth Fibonacci number is to use the definition F(n) = F(n-1) + F(n-2), F(0) = F(1) = 1. We know that this algorithm is exponential, because it does a lot of repetitive computation. A DP solution to this problem gives a linear-time algorithm. How many calculations does the naive method make in computing F(4)?

Answer: 9
Explanation:
F(4) = F(3) + F(2)
F(3) = F(2) + F(1)
F(2) = F(1) + F(0)
Computing F(4) makes 8 recursive calls below it, plus the call to F(4) itself, for a total of 9.
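(Aside: the count of 9 can be verified mechanically. The sketch below is illustrative and not part of the original solution; the global counter "calls" is a name chosen here.)

#include <stdio.h>

/* Counts every invocation of the naive recursive Fibonacci from question 13
   (base case F(0) = F(1) = 1). */
static int calls = 0;

int fib_naive(int n) {
    calls++;                         /* one "calculation" per invocation */
    if (n <= 1) return 1;
    return fib_naive(n - 1) + fib_naive(n - 2);
}

int main(void) {
    fib_naive(4);
    printf("%d\n", calls);           /* prints 9: F(4) itself plus 8 calls below it */
    return 0;
}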

14. For the coin change problem, does the greedy algorithm design strategy always result in the right answer?
A. Yes
B. No

Answer: B
Explanation: Optimality of the greedy strategy is not guaranteed for arbitrary coin denominations. For example, if the denominations were (1, 3, 4) and we wanted to make change for 6, greedy would give (4, 1, 1) while the optimal solution is (3, 3).