ENS Lyon Camp. Day 2. Basic group. Cartesian Tree. 26 October

ENS Lyon Camp. Day 2. Basic group. Cartesian Tree. 26 October

Contents

1 Cartesian Tree. Definition.
2 Cartesian Tree. Construction
3 Cartesian Tree. Operations.
  3.1 Split
  3.2 Merge
  3.3 Insert
  3.4 Delete
4 Treap with implicit key
  4.1 Problem statement
  4.2 Definition and Construction
  4.3 Split and Merge
  4.4 Application

1 Cartesian Tree. Definition.

A Max Heap is a binary tree that satisfies the heap property: if v is a parent node of u, then the key of node v is greater than the key of u. It follows from the definition that the root has the maximal key. A Min Heap is defined similarly.

The Binary Search Tree (BST) property states that the key in a node must be greater than all keys stored in its left subtree and smaller than all keys stored in its right subtree.

A Cartesian Tree is a binary tree in which every node stores two values, x and y. The value x is called the key and satisfies the BST property; the value y is called the priority and satisfies the heap property.

Figure 1: Cartesian Tree.

2 Cartesian Tree. Construction

How do we construct a cartesian tree given the pairs (x_i, y_i)? First, the algorithm sorts all pairs by their keys: x_1 < x_2 < ... < x_n. Then it goes from left to right. The pair (x_k, y_k) added on the previous step is stored in the rightmost node of the tree (because the keys x satisfy the BST property). The algorithm tries to attach the new pair (x_{k+1}, y_{k+1}) as the right child of (x_k, y_k); this is possible when y_k > y_{k+1}. Otherwise, the algorithm goes up through the parents until the priority y_{k+1} is less than the priority of the current vertex. When the algorithm finds such a vertex v, the right child of v becomes the left child of the new node (x_{k+1}, y_{k+1}), and the new node becomes the new right child of v. The time complexity of this algorithm is O(n) (not counting the time for sorting).

The cartesian tree is stored in the following way:

struct Node {
    Node *left, *right;   // left and right subtrees
    int x, y;             // key and priority

    static Node* null;    // sentinel for the empty tree
};

typedef Node* pnode;
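A minimal sketch of the construction described in Section 2 (the notes give no code for it), assuming the Node and pnode declarations above, that Node::null is the empty-tree sentinel, that the input pairs are already sorted by key, and that all priorities are distinct; the rightmost path is kept on a stack:

#include <vector>
#include <utility>

pnode build(const std::vector< std::pair<int, int> > &a) {
    std::vector<pnode> rightPath;             // the rightmost path, root first
    for (size_t i = 0; i < a.size(); ++i) {
        pnode node = new Node();
        node->x = a[i].first;                 // key
        node->y = a[i].second;                // priority
        node->left = node->right = Node::null;

        pnode last = Node::null;              // last vertex removed from the path
        while (!rightPath.empty() && rightPath.back()->y < node->y) {
            last = rightPath.back();          // walk up while the priority is smaller
            rightPath.pop_back();
        }
        node->left = last;                    // the removed chain becomes the left child
        if (!rightPath.empty())
            rightPath.back()->right = node;   // attach as the new right child of v
        rightPath.push_back(node);
    }
    return rightPath.empty() ? Node::null : rightPath.front();
}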

3 Cartesian Tree. Operations.

3.1 Split.

This operation splits a cartesian tree T by a key k into two cartesian trees T1 and T2: T1 contains all keys less than or equal to k, and T2 contains all keys greater than k. For the sake of brevity, T.L and T.R denote the left and the right subtrees of T, respectively.

Consider the case when the key k is greater than or equal to the key stored in the root. The left subtree of T1 is equal to T.L. To find the right subtree of T1, the algorithm splits T.R recursively into T'1 and T'2; T'1 is the desired subtree, and T2 is equal to T'2. The opposite case is processed similarly.

void split(pnode T, pnode &T1, pnode &T2, int k) {
    // empty tree
    if (T == Node::null) {
        T1 = Node::null;
        T2 = Node::null;
        return;
    }

    if (T->x <= k) {
        split(T->right, T->right, T2, k);
        T1 = T;
    }
    else {
        split(T->left, T1, T->left, k);
        T2 = T;
    }
}

The time complexity of this operation is O(h), where h is the height of T.

3.2 Merge.

This operation merges two cartesian trees T1 and T2 into one, under the condition that the biggest key in T1 is less than or equal to the smallest key in T2. The resulting tree T contains all keys from T1 and T2. The root of the resulting tree T must have the maximal priority y among all vertices, so it is either the root of T1 or the root of T2, whichever has the larger y. To be specific, consider the case when the root of T1 has the greater priority. Then the left subtree of T is the left subtree of T1, and the right subtree of T is the merge of the right subtree of T1 with T2.

void merge(pnode T1, pnode T2, pnode &T) {
    // empty T1
    if (T1 == Node::null) {
        T = T2;
        return;
    }
    // empty T2
    if (T2 == Node::null) {
        T = T1;
        return;
    }

    if (T1->y > T2->y) {
        merge(T1->right, T2, T1->right);
        T = T1;
    }
    else {
        merge(T1, T2->left, T2->left);
        T = T2;
    }
}

The time complexity of this operation is O(h), where h is the maximum of the heights of T1 and T2.

3.3 Insert.

This operation inserts the element newNode into the tree T. The new element is itself a tree consisting of a single node, and the tree T with the new element inserted is a merge of T and this one-node tree. However, T may contain keys both less than and greater than the key newNode.x of the new element, so the algorithm splits the tree T by newNode.x before merging.

void insert(pnode &T, pnode newNode) {
    pnode T1, T2;
    split(T, T1, T2, newNode->x);
    merge(T1, newNode, T1);
    merge(T1, T2, T);
}

3.4 Delete.

This operation removes the element node from the tree T. The algorithm splits the tree T twice to cut out the required element and then merges the remaining trees.

// 'delete' is a reserved word in C++, so the operation is called erase here
void erase(pnode &T, pnode node) {
    pnode T1, T2, trash;
    split(T, T1, T2, node->x);           // T1: keys <= node->x, T2: keys > node->x
    split(T1, T1, trash, node->x - 1);   // trash holds the removed node (node->x - eps for real keys)
    merge(T1, T2, T);
}
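A minimal usage sketch of insert and erase. The helpers makeNode and find are not defined in these notes: makeNode assigns, for example, a random priority, and find is a standard BST descent by key, added here only so that erase has a node to remove.

#include <cstdlib>

pnode makeNode(int key) {                 // hypothetical helper, not from the notes
    pnode v = new Node();
    v->x = key;
    v->y = rand();                        // e.g. a random priority
    v->left = v->right = Node::null;
    return v;
}

pnode find(pnode T, int key) {            // standard BST search by key
    while (T != Node::null && T->x != key)
        T = (key < T->x) ? T->left : T->right;
    return T;
}

int main() {
    pnode T = Node::null;
    int keys[] = {5, 1, 9, 3, 7};
    for (int i = 0; i < 5; ++i)
        insert(T, makeNode(keys[i]));     // build the treap by insertions
    pnode v = find(T, 3);
    if (v != Node::null)
        erase(T, v);                      // remove the key 3
    return 0;
}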

4 Treap with implicit key

4.1 Problem statement.

Consider an array with n elements. The queries may be very different: for example, insert an element at an arbitrary position, remove the element at an arbitrary position, or even remove a whole range of elements. Other queries reverse or rearrange a range of the array. Of course, these queries should be processed efficiently. The data structure that solves this kind of problem is called a treap with implicit key.

4.2 Definition and Construction

The treap with implicit key is a cartesian tree in which each node conceptually stores its order number instead of x: if the nodes are arranged in BST order, x is the position of the node in this arrangement. This modification clearly maintains the binary search tree invariant. But there is one problem: insert and delete can change the order, so recalculating all the keys would take O(n) time in the worst case.

For this reason the key x is not actually stored in the tree. Instead, each node stores an auxiliary value size, equal to the size of the tree rooted at that node, i.e. the total number of vertices in the subtree of the node, including the node itself. The important observation is that the position of a node v can be recovered from the sizes: it equals the number of nodes in the left subtree of v plus, for every ancestor at which the path from the root goes into the right subtree, the size of that ancestor's left subtree plus one; this sum is exactly the number of elements that precede v in the arrangement.

struct Node {
    Node *left, *right;   // left and right subtrees
    int y;                // priority
    int size;             // size of the subtree

    static Node* null;    // sentinel for the empty tree (size = 0)

    void recalc() {
        size = 1 + left->size + right->size;
    }
};

typedef Node* pnode;

An array is implemented with a treap with implicit key: the index of an element plays the role of the key in the treap, and the value of the element plays the role of the priority. As in an array, the algorithm does not store the index, but the order of the elements is known.

4.3 Split and Merge.

Now the merge operation can merge two arbitrary trees; it corresponds to the concatenation of arrays. The implementation of merge for the cartesian tree does not use the key x, so the implementation in this case is exactly the same, except for the recalculation of the sizes.
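The notes do not spell this merge out; a minimal sketch, identical to the merge of Section 3.2 except that the sizes are recalculated on the way back (once the lazy flags of Section 4.4 are added, push(T1) and push(T2) should also be called at the beginning):

void merge(pnode T1, pnode T2, pnode &T) {
    if (T1 == Node::null) { T = T2; return; }
    if (T2 == Node::null) { T = T1; return; }

    if (T1->y > T2->y) {
        merge(T1->right, T2, T1->right);
        T = T1;
    } else {
        merge(T1, T2->left, T2->left);
        T = T2;
    }
    T->recalc();          // keep the subtree size up to date
}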

The split operation divides the tree T by a number k into two trees T1 and T2 in such a way that T1 contains exactly k nodes. It corresponds to dividing the array into two parts such that the first part contains exactly k elements.

void split(pnode T, pnode &T1, pnode &T2, int k) {
    // empty tree case
    if (T == Node::null) {
        T1 = Node::null;
        T2 = Node::null;
        return;
    }

    // the left subtree corresponds to the prefix of the array
    // whose length equals the size of this subtree
    int leftSize = T->left->size;

    // the required prefix is shorter or equal
    if (leftSize >= k) {
        split(T->left, T1, T->left, k);
        T->recalc();
        T2 = T;
    } else { // the required prefix is longer
        // the left subtree and the root are already cut off, so
        // the remaining number of elements to cut is k - leftSize - 1
        split(T->right, T->right, T2, k - leftSize - 1);
        T->recalc();
        T1 = T;
    }
}

To add an element at position i of the array, the algorithm splits the corresponding tree by i into T1 and T2 and merges three trees: T1, the new element and T2. To remove the element at position i, the algorithm finds the corresponding vertex v and replaces v with the merge of its children.
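A minimal sketch of these two array operations, assuming 1-based positions, the split above and the size-recalculating merge; the names insertAt and eraseAt are not from the notes, and eraseAt cuts the element out with two splits instead of descending to the vertex and merging its children:

void insertAt(pnode &T, int i, pnode newNode) {   // newNode has size 1 and null children
    pnode T1, T2;
    split(T, T1, T2, i - 1);      // the first i - 1 elements go to T1
    merge(T1, newNode, T1);       // newNode becomes element i
    merge(T1, T2, T);
}

void eraseAt(pnode &T, int i) {
    pnode T1, T2, v;
    split(T, T1, T2, i - 1);      // T1 = elements 1 .. i - 1
    split(T2, v, T2, 1);          // v  = the single element at position i
    merge(T1, T2, T);             // element i is dropped
}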

4.4 Application.

It is possible to use a treap with implicit key to solve the RMQ/RSQ problem with range updates. The algorithm keeps two additional fields in each node: one for the result of the operation and one for the inconsistency (the pending update). The operation is computed over all nodes in the subtree of the vertex, including the vertex itself, and this field is kept up to date in the same way as size, recomputed from the corresponding fields of the children. To get the result of the operation on some range, the algorithm cuts out the subtree corresponding to the given range with two splits, reads the value from its root, and merges all trees back. For range updates the algorithm uses the same invariant as a segment tree: it pushes the inconsistency to the children on each visit of a vertex and keeps the value of a vertex with an inconsistency up to date.

For example, this is the algorithm for the reverse problem. The main query is to reverse the elements in a given range. We keep a boolean field isReversed in each node for the inconsistency; the value true means that the corresponding segment should be reversed. The push operation swaps the left and right subtrees and flips the boolean flag of the children (xor with true).

// call this function at the beginning of split and merge calls
void push(pnode v) {
    if (!v->isReversed)
        return;

    swap(v->left, v->right);
    if (v->left != Node::null)            // if the left subtree exists
        v->left->isReversed ^= true;      // flip the inconsistency flag
    if (v->right != Node::null)           // if the right subtree exists
        v->right->isReversed ^= true;     // flip the inconsistency flag

    v->isReversed = false;
}

void reverse(pnode &T, int l, int r) {
    pnode T1, T2, R;                  // R is the range from l to r

    split(T, T1, T2, l - 1);
    split(T2, R, T2, r - l + 1);      // R holds exactly the r - l + 1 elements of the range

    R->isReversed = true;

    merge(T1, R, T1);
    merge(T1, T2, T);
}
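As a closing illustration, a sketch of the RSQ variant with range addition described at the start of this section, under a few assumptions that differ slightly from the notes: the array element is kept in a separate field value (the notes keep it in the priority y), sum caches the result over the subtree, add is the pending inconsistency, and split and merge are the implicit-key versions above, extended to call push on entry and recalc before returning.

struct Node {
    Node *left, *right;
    int y;                 // priority
    int size;              // subtree size
    long long value;       // the array element stored in this node (kept separately from y here)
    long long sum;         // sum of value over the whole subtree
    long long add;         // pending increment for the whole subtree

    static Node* null;     // sentinel empty tree with size = 0, sum = 0, add = 0

    void recalc() {
        size = 1 + left->size + right->size;
        sum  = value + left->sum + right->sum;
    }
};
typedef Node* pnode;

void apply(pnode v, long long d) {     // lazily add d to every element of the subtree
    if (v == Node::null) return;
    v->add   += d;
    v->value += d;
    v->sum   += d * v->size;
}

void push(pnode v) {                   // called at the beginning of split and merge
    if (v == Node::null || v->add == 0) return;
    apply(v->left,  v->add);
    apply(v->right, v->add);
    v->add = 0;
}

long long rangeSum(pnode &T, int l, int r) {   // sum of the elements on positions l..r (1-based)
    pnode T1, T2, R;
    split(T, T1, T2, l - 1);
    split(T2, R, T2, r - l + 1);
    long long result = R->sum;
    merge(T1, R, T1);
    merge(T1, T2, T);
    return result;
}

void rangeAdd(pnode &T, int l, int r, long long d) {  // add d to every element on positions l..r
    pnode T1, T2, R;
    split(T, T1, T2, l - 1);
    split(T2, R, T2, r - l + 1);
    apply(R, d);
    merge(T1, R, T1);
    merge(T1, T2, T);
}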