Outline
- Introduction to Bottom-Up Parsing
- Review of LL parsing
- Shift-reduce parsing
- The LR parsing algorithm
- Constructing LR parsing tables

Top-Down Parsing: Review
- Top-down parsing expands a parse tree from the start symbol to the leaves
- Always expand the leftmost non-terminal
- The leaves at any point form a string βAγ
  - β contains only terminals
  - The input string is βbδ
  - The prefix β matches
  - The next token is b
Predictive Parsing: Review
- A predictive parser is described by a table
  - For each non-terminal A and for each token b we specify a production A → α
  - When trying to expand A we use A → α if b is the token that follows next
- Once we have the table
  - The parsing algorithm is simple and fast
  - No backtracking is necessary

Constructing Predictive Parsing Tables
- Consider the state S →* βAγ
  - With b the next token
  - Trying to match βbδ
- There are two possibilities:
  1. Token b belongs to an expansion of A
     - Any A → α can be used if b can start a string derived from α
     - We say that b ∈ First(α)
  Or...
Constructing Predictive Parsing Tables (Cont.)
  2. Token b does not belong to an expansion of A
     - The expansion of A is empty and b belongs to an expansion of γ
     - Means that b can appear after A in a derivation of the form S →* βAbω
     - We say that b ∈ Follow(A) in this case
     - What productions can we use in this case?
       - Any A → α can be used if α can expand to ε
       - We say that ε ∈ First(A) in this case

Computing First Sets
Definition: First(X) = { b | X →* bα } ∪ { ε | X →* ε }
Algorithm sketch:
1. First(b) = { b }
2. ε ∈ First(X) if X → ε is a production
3. ε ∈ First(X) if X → A1 ... An and ε ∈ First(Ai) for 1 ≤ i ≤ n
4. First(α) ⊆ First(X) if X → A1 ... An α and ε ∈ First(Ai) for 1 ≤ i ≤ n

First Sets: Example
Recall the grammar
  E → T X          X → + E | ε
  T → ( E ) | int Y    Y → * T | ε
First sets
  First( ( ) = { ( }
  First( ) ) = { ) }
  First( int ) = { int }
  First( + ) = { + }
  First( * ) = { * }
  First( T ) = { int, ( }
  First( E ) = { int, ( }
  First( X ) = { +, ε }
  First( Y ) = { *, ε }

Computing Follow Sets
Definition: Follow(X) = { b | S →* β X b δ }
Intuition
- If X → A B then First(B) ⊆ Follow(A) and Follow(X) ⊆ Follow(B)
- Also if B →* ε then Follow(X) ⊆ Follow(A)
- If S is the start symbol then $ ∈ Follow(S)
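The algorithm sketch above is a fixpoint computation, which a small program makes concrete. A minimal Python sketch, not from the lecture (the grammar encoding and the names EPS, first_sets are mine), run on the example grammar:

```python
# First-set fixpoint for E -> T X, X -> + E | ε, T -> ( E ) | int Y, Y -> * T | ε

EPS = "ε"
GRAMMAR = {
    "E": [["T", "X"]],
    "X": [["+", "E"], [EPS]],
    "T": [["(", "E", ")"], ["int", "Y"]],
    "Y": [["*", "T"], [EPS]],
}
NONTERMINALS = set(GRAMMAR)

def first_sets(grammar):
    first = {n: set() for n in grammar}
    changed = True
    while changed:                              # iterate to a fixpoint
        changed = False
        for lhs, prods in grammar.items():
            for prod in prods:
                for sym in prod:
                    if sym == EPS:                  # rule 2: X -> ε
                        add = {EPS}
                    elif sym not in NONTERMINALS:   # rule 1: First(b) = {b}
                        add = {sym}
                    else:                           # rule 4: First(sym) - {ε}
                        add = first[sym] - {EPS}
                    if not add <= first[lhs]:
                        first[lhs] |= add
                        changed = True
                    # keep scanning only while sym can derive ε (rule 3)
                    if sym in NONTERMINALS and EPS in first[sym]:
                        continue
                    break
                else:
                    # every symbol in the production can derive ε
                    if EPS not in first[lhs]:
                        first[lhs].add(EPS)
                        changed = True
    return first
```

Running it reproduces the First sets on the slide, e.g. First(E) = { int, ( } and First(Y) = { *, ε }.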
Computing Follow Sets (Cont.)
Algorithm sketch:
1. $ ∈ Follow(S)
2. First(β) − {ε} ⊆ Follow(X)
   for each production A → α X β
3. Follow(A) ⊆ Follow(X)
   for each production A → α X β where ε ∈ First(β)

Follow Sets: Example
Recall the grammar
  E → T X          X → + E | ε
  T → ( E ) | int Y    Y → * T | ε
Follow sets
  Follow( + ) = { int, ( }
  Follow( * ) = { int, ( }
  Follow( ( ) = { int, ( }
  Follow( E ) = { ), $ }
  Follow( X ) = { $, ) }
  Follow( T ) = { +, ), $ }
  Follow( ) ) = { +, ), $ }
  Follow( Y ) = { +, ), $ }
  Follow( int ) = { *, +, ), $ }

Constructing LL(1) Parsing Tables
- Construct a parsing table T for CFG G
- For each production A → α in G do:
  - For each terminal b ∈ First(α) do T[A, b] = α
  - If ε ∈ First(α), for each b ∈ Follow(A) do T[A, b] = α
  - If ε ∈ First(α) and $ ∈ Follow(A) do T[A, $] = α

Constructing LL(1) Tables: Example
Recall the grammar
  E → T X          X → + E | ε
  T → ( E ) | int Y    Y → * T | ε
- Where in the line of Y do we put Y → * T?
  - In the columns of First( * T ) = { * }
- Where in the line of Y do we put Y → ε?
  - In the columns of Follow(Y) = { $, +, ) }
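The Follow fixpoint and the table-construction rules above can be sketched the same way. A Python sketch for the example grammar (helper names follow_sets, ll1_table, first_of_string are mine, and the First sets are taken precomputed from the earlier slide rather than recomputed):

```python
# Follow sets and LL(1) table for the example grammar from the slides.

EPS = "ε"
GRAMMAR = {
    "E": [["T", "X"]],
    "X": [["+", "E"], [EPS]],
    "T": [["(", "E", ")"], ["int", "Y"]],
    "Y": [["*", "T"], [EPS]],
}
NT = set(GRAMMAR)
FIRST = {"E": {"int", "("}, "X": {"+", EPS},     # as computed earlier
         "T": {"int", "("}, "Y": {"*", EPS}}

def first_of_string(alpha):
    """First(α) for a sequence of grammar symbols α."""
    out = set()
    for sym in alpha:
        f = FIRST[sym] if sym in NT else {sym}
        out |= f - {EPS}
        if EPS not in f:
            return out
    out.add(EPS)            # the whole string can derive ε
    return out

def follow_sets(start="E"):
    follow = {n: set() for n in NT}
    follow[start].add("$")                      # rule 1: $ ∈ Follow(S)
    changed = True
    while changed:
        changed = False
        for lhs, prods in GRAMMAR.items():
            for prod in prods:
                for i, sym in enumerate(prod):
                    if sym not in NT:
                        continue
                    beta = prod[i + 1:]
                    fb = first_of_string(beta) if beta else {EPS}
                    add = fb - {EPS}            # rule 2: First(β) − {ε}
                    if EPS in fb:
                        add |= follow[lhs]      # rule 3: Follow(A) ⊆ Follow(X)
                    if not add <= follow[sym]:
                        follow[sym] |= add
                        changed = True
    return follow

def ll1_table():
    table = {}
    follow = follow_sets()
    for lhs, prods in GRAMMAR.items():
        for prod in prods:
            f = first_of_string(prod)
            targets = f - {EPS}
            if EPS in f:                        # ε case: fill Follow(lhs), incl. $
                targets |= follow[lhs]
            for b in targets:
                assert (lhs, b) not in table, "not LL(1)"
                table[lhs, b] = prod
    return table
```

As on the slide, the row for Y gets Y → * T in column * and Y → ε in the columns { $, +, ) } = Follow(Y).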
Notes on LL(1) Parsing Tables
- If any entry is multiply defined then G is not LL(1)
  - If G is ambiguous
  - If G is left recursive
  - If G is not left-factored
  - And in other cases as well

Bottom-Up Parsing
- For some grammars there is a simple parsing strategy: predictive parsing
- Most programming language grammars are not LL(1)
- Thus, we need more powerful parsing strategies

Bottom-Up Parsing
- Bottom-up parsing is more general than top-down parsing
  - And just as efficient
  - Builds on ideas in top-down parsing
  - Preferred method in practice
- Also called LR parsing
  - L means that tokens are read left to right
  - R means that it constructs a rightmost derivation!

An Introductory Example
- LR parsers don't need left-factored grammars and can also handle left-recursive grammars
- Consider the following grammar:
    E → T + E | T
    T → int * T | int | ( E )
  - Why is this not LL(1)?
- Consider the string: ( int ) + ( int )
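Why the introductory grammar is not LL(1) can be checked directly: both alternatives for E begin with T, so their First sets overlap and no single table entry can choose between E → T + E and E → T. A tiny sketch (First(T) written out by hand; the variable names are mine):

```python
# First/First conflict for E -> T + E | T with T -> int * T | int | ( E ).

FIRST_T = {"int", "("}          # T is not nullable

first_of_T_plus_E = FIRST_T     # First(T + E) = First(T), since T is not nullable
first_of_T = FIRST_T            # First(T) for the alternative E -> T

# Every token in the intersection would need two entries in row E of the table.
conflict_tokens = first_of_T_plus_E & first_of_T
```

On seeing int or (, an LL(1) parser cannot decide which E-production to expand, so the table entry would be multiply defined.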
The Idea
LR parsing reduces a string to the start symbol by inverting productions:

  str ← input string of terminals
  repeat
    Identify β in str such that A → β is a production (i.e., str = α β γ)
    Replace β by A in str (i.e., str becomes α A γ)
  until str = S (the start symbol) OR all possibilities are exhausted

A Bottom-up Parse in Detail (1)–(3)
(The slides grow the parse tree one reduction at a time; only the sentential forms are reproduced here.)
  ( int ) + ( int )
  ( T ) + ( int )      after reducing T → int
  ( E ) + ( int )      after reducing E → T
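The reduction loop above can even be run brute-force: try every position where some production's right-hand side occurs, and backtrack when stuck. A Python sketch (exponential, for illustration only; the function name is mine, and it may apply reductions in a different order than the slides — real LR parsers choose the reduction deterministically):

```python
# Brute-force "invert a production somewhere" loop for
#   E -> T + E | T,  T -> int * T | int | ( E ).

PRODS = [("E", ["T", "+", "E"]), ("E", ["T"]),
         ("T", ["int", "*", "T"]), ("T", ["int"]), ("T", ["(", "E", ")"])]

def reduce_to_start(s, start="E"):
    """Return a list of sentential forms from s down to [start], or None."""
    if s == [start]:
        return [s]
    for lhs, rhs in PRODS:
        for i in range(len(s) - len(rhs) + 1):
            if s[i:i + len(rhs)] == rhs:
                # replace β (= rhs) by A (= lhs) and recurse; backtrack on failure
                rest = reduce_to_start(s[:i] + [lhs] + s[i + len(rhs):], start)
                if rest is not None:
                    return [s] + rest
    return None

steps = reduce_to_start(["(", "int", ")", "+", "(", "int", ")"])
```

The search succeeds on the example string, producing a chain of sentential forms that ends at E.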
A Bottom-up Parse in Detail (4)–(6)
(continuing the reductions; the parse trees in the slides are omitted)
  T + ( int )          after reducing T → ( E )
  T + ( T )            after reducing T → int
  T + ( E )            after reducing E → T
  T + T                after reducing T → ( E )
  T + E                after reducing E → T
  E                    after reducing E → T + E

Important Fact #1 about Bottom-up Parsing
An LR parser traces a rightmost derivation in reverse:
  E → T + E → T + T → T + (E) → T + (T) → T + (int)
    → (E) + (int) → (T) + (int) → (int) + (int)
Where Do Reductions Happen
Fact #1 has an interesting consequence:
- Let αβγ be a step of a bottom-up parse
- Assume the next reduction is by A → β
- Then γ is a string of terminals
Why? Because αAγ → αβγ is a step in a rightmost derivation

Notation
Idea: Split the string into two substrings
- The right substring is as yet unexamined by parsing (a string of terminals)
- The left substring has terminals and non-terminals
The dividing point is marked by a I
- The I is not part of the string
Initially, all input is unexamined: I x1 x2 ... xn

Shift-Reduce Parsing
Bottom-up parsing uses only two kinds of actions:
- Shift
- Reduce

Shift
Shift: Move I one place to the right
- Shifts a terminal to the left string
    ( I int )  ⇒  ( int I )
- In general: ABC I xyz  ⇒  ABCx I yz
Reduce
Reduce: Apply an inverse production at the right end of the left string
- If T → ( E ) is a production, then
    ( E + ( E ) I )  ⇒  ( E + T I )
- In general, given A → xy, then:
    Cbxy I ijk  ⇒  CbA I ijk

Shift-Reduce Example
  I (int) + (int)$      shift
  ( I int ) + (int)$    shift
  ( int I ) + (int)$    reduce T → int
Shift-Reduce Example (continued)
  ( T I ) + (int)$      reduce E → T
  ( E I ) + (int)$      shift
  ( E ) I + (int)$      reduce T → ( E )
  T I + (int)$          shift (×3)
  T + ( int I )$        reduce T → int
Shift-Reduce Example (concluded)
  T + ( T I )$          reduce E → T
  T + ( E I )$          shift
  T + ( E ) I $         reduce T → ( E )
  T + T I $             reduce E → T
  T + E I $             reduce E → T + E
  E I $                 accept
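The trace above can be replayed mechanically: the left string is a stack, shift pushes the next terminal, and reduce pops a production's right-hand side and pushes its left-hand side. A Python sketch, driven by a hard-coded action script since choosing between shift and reduce is the job of the DFA introduced next (the encoding and the name run are mine):

```python
# Shift and reduce as stack operations; I is the stack/input boundary.

PRODS = {"T->int": ("T", ["int"]), "T->(E)": ("T", ["(", "E", ")"]),
         "E->T": ("E", ["T"]), "E->T+E": ("E", ["T", "+", "E"])}

def run(tokens, script):
    stack, inp = [], list(tokens)
    for act in script:
        if act == "shift":
            stack.append(inp.pop(0))    # move I one place to the right
        else:                           # reduce: pop the RHS, push the LHS
            lhs, rhs = PRODS[act]
            assert stack[len(stack) - len(rhs):] == rhs, "RHS not on top"
            del stack[len(stack) - len(rhs):]
            stack.append(lhs)
    return stack, inp

# Exactly the action sequence of the example trace:
script = ["shift", "shift", "T->int", "E->T", "shift", "T->(E)",
          "shift", "shift", "shift", "T->int", "E->T", "shift",
          "T->(E)", "E->T", "E->T+E"]
stack, rest = run(["(", "int", ")", "+", "(", "int", ")"], script)
```

After the script runs, the stack holds just E and the input is exhausted, matching the "E I $, accept" configuration.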
The Stack
- The left string can be implemented by a stack
  - The top of the stack is the I
- Shift pushes a terminal on the stack
- Reduce pops 0 or more symbols off of the stack (the production RHS) and pushes a non-terminal on the stack (the production LHS)

Key Question: To Shift or to Reduce?
Idea: use a finite automaton (DFA) to decide when to shift or reduce
- The input is the stack
- The language consists of terminals and non-terminals
- We run the DFA on the stack and examine the resulting state X and the token tok after I
  - If X has a transition labeled tok then shift
  - If X is labeled with A → β on tok then reduce

LR(1) Parsing: An Example
[DFA diagram: states 0–11 with transitions on int, *, +, ( and ); one state is labeled "accept on $", and several states carry reduce labels such as "T → int on ), +", "T → ( E ) on $, +", and "E → T + E on $, )".]

  I (int) + (int)$      shift (×2)
  ( int I ) + (int)$    reduce T → int
  ( T I ) + (int)$      reduce E → T
  ( E I ) + (int)$      shift
  ( E ) I + (int)$      reduce T → ( E )
  T I + (int)$          shift (×3)
  T + ( int I )$        reduce T → int
  T + ( T I )$          reduce E → T
  T + ( E I )$          shift
  T + ( E ) I $         reduce T → ( E )
  T + T I $             reduce E → T
  T + E I $             reduce E → T + E
  E I $                 accept

Representing the DFA
- Parsers represent the DFA as a 2D table
  - (Recall table-driven lexical analysis)
- Lines correspond to DFA states
- Columns correspond to terminals and non-terminals
- Typically the columns are split into:
  - Those for terminals: the action table
  - Those for non-terminals: the goto table
Representing the DFA: Example
The table for a fragment of our DFA
(states 3 --(--> 4 --E--> 6 --)--> 7, with 4 --int--> 5):

         int      +          (     )          $          E
  3                          s4
  4      s5                                              g6
  5               rT→int           rT→int
  6               s8               s7
  7               rT→(E)                      rT→(E)

The LR Parsing Algorithm
- After a shift or reduce action we rerun the DFA on the entire stack
  - This is wasteful, since most of the work is repeated
- Idea: remember for each stack element the state to which it brings the DFA
- The LR parser maintains a stack
    ⟨sym1, state1⟩ ... ⟨symn, staten⟩
  where statek is the final state of the DFA on sym1 ... symk

The LR Parsing Algorithm
  let I = w$ be the initial input
  let j = 0
  let DFA state 0 be the start state
  let stack = ⟨dummy, 0⟩
  repeat
    case action[top_state(stack), I[j]] of
      shift k:      push ⟨I[j++], k⟩
      reduce X → A: pop |A| pairs,
                    push ⟨X, goto[top_state(stack), X]⟩
      accept:       halt normally
      error:        halt and report error

LR Parsers
- Can be used to parse more grammars than LL
- Most programming language grammars are LR
- LR parsers can be described by a simple table
- There are tools for building the table
- How is the table constructed?
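The parsing loop above is short enough to run directly. A Python sketch with an SLR-style action/goto table hand-built for the example grammar E → T + E | T, T → int * T | int | ( E ); the state numbering here is my own reconstruction, not the numbering in the lecture's diagram:

```python
# Table-driven LR driver: stack of <symbol, state> pairs, action + goto tables.

PRODS = {"E->T+E": ("E", 3), "E->T": ("E", 1),          # LHS and |RHS|
         "T->int*T": ("T", 3), "T->int": ("T", 1), "T->(E)": ("T", 3)}

ACTION = {
    (0, "int"): ("s", 3), (0, "("): ("s", 4),
    (1, "$"): ("acc", None),
    (2, "+"): ("s", 5), (2, ")"): ("r", "E->T"), (2, "$"): ("r", "E->T"),
    (3, "*"): ("s", 6), (3, "+"): ("r", "T->int"),
    (3, ")"): ("r", "T->int"), (3, "$"): ("r", "T->int"),
    (4, "int"): ("s", 3), (4, "("): ("s", 4),
    (5, "int"): ("s", 3), (5, "("): ("s", 4),
    (6, "int"): ("s", 3), (6, "("): ("s", 4),
    (7, ")"): ("s", 10),
    (8, ")"): ("r", "E->T+E"), (8, "$"): ("r", "E->T+E"),
    (9, "+"): ("r", "T->int*T"), (9, ")"): ("r", "T->int*T"),
    (9, "$"): ("r", "T->int*T"),
    (10, "+"): ("r", "T->(E)"), (10, ")"): ("r", "T->(E)"),
    (10, "$"): ("r", "T->(E)"),
}
GOTO = {(0, "E"): 1, (0, "T"): 2, (4, "E"): 7, (4, "T"): 2,
        (5, "E"): 8, (5, "T"): 2, (6, "T"): 9}

def lr_parse(tokens):
    """Return True iff the token string derives E."""
    inp = list(tokens) + ["$"]
    stack = [("dummy", 0)]              # <sym, state> pairs; state 0 is the start
    j = 0
    while True:
        act = ACTION.get((stack[-1][1], inp[j]))
        if act is None:
            return False                # error entry: reject
        kind, arg = act
        if kind == "s":                 # shift: push token and its DFA state
            stack.append((inp[j], arg))
            j += 1
        elif kind == "r":               # reduce X -> A: pop |A| pairs, goto on X
            lhs, n = PRODS[arg]
            del stack[-n:]
            stack.append((lhs, GOTO[(stack[-1][1], lhs)]))
        else:
            return True                 # accept
```

On the running example ( int ) + ( int ) the driver performs exactly the shift/reduce sequence of the earlier trace and accepts; malformed inputs hit a missing action entry and are rejected.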