Introduction to Lexical Analysis

Outline
- Informal sketch of lexical analysis: identifying tokens in an input string
- Issues in lexical analysis: lookahead, ambiguities
- Specifying lexical analyzers (lexers): regular expressions
- Examples of regular expressions

Lexical Analysis
What do we want to do? Example:
  if (i == j)
    then z = 0;
    else z = 1;
The input is just a string of characters:
  if (i == j)\nthen\n\tz = 0;\n\telse\n\t\tz = 1;
Goal: partition the input string into substrings, where the substrings are tokens, and classify them according to their role.

What's a Token?
A syntactic category.
- In English: noun, verb, adjective, ...
- In a programming language: Identifier, Integer, Keyword, Whitespace, ...

Tokens
Tokens correspond to sets of strings; these sets depend on the programming language.
- Identifier: strings of letters or digits, starting with a letter
- Integer: a non-empty string of digits
- Keyword: "else" or "if" or "begin" or ...
- Whitespace: a non-empty sequence of blanks, newlines, and tabs

What Are Tokens Used For?
To classify program substrings according to their role. The output of lexical analysis is a stream of tokens, which is the input to the parser. The parser relies on token distinctions: an identifier is treated differently than a keyword.

Designing a Lexical Analyzer: Step 1
Define a finite set of tokens. Tokens describe all items of interest; the choice of tokens depends on the language and on the design of the parser. Recall:
  if (i == j)\nthen\n\tz = 0;\n\telse\n\t\tz = 1;
Useful tokens for this expression: Integer, Keyword, Relation, Identifier, Whitespace, (, ), =, ;

Designing a Lexical Analyzer: Step 2
Describe which strings belong to each token. Recall:
- Identifier: strings of letters or digits, starting with a letter
- Integer: a non-empty string of digits
- Keyword: "else" or "if" or "begin" or ...
- Whitespace: a non-empty sequence of blanks, newlines, and tabs

Lexical Analyzer: Implementation
An implementation must do two things:
1. Recognize substrings corresponding to tokens
2.
Return the value, or lexeme, of the token. The lexeme is the matched substring itself.

Example
Recall: if (i == j)\nthen\n\tz = 0;\n\telse\n\t\tz = 1;
Token-lexeme groupings:
- Identifier: i, j, z
- Keyword: if, then, else
- Relation: ==
- Integer: 0, 1
- (, ), =, ; : each a single character of the same name

Why Do Lexical Analysis?
- It dramatically simplifies parsing. The lexer usually discards uninteresting tokens that don't contribute to parsing, e.g. Whitespace and Comments.
- It converts data early.
- It separates out the logic that reads source files, which is potentially an issue on multiple platforms; the reading code can be optimized independently of the parser.

True Crimes of Lexical Analysis
Is it as easy as it sounds? Not quite! Look at some programming language history...

Lexical Analysis in FORTRAN
FORTRAN rule: whitespace is insignificant. E.g., VAR1 is the same as VA R1. The FORTRAN whitespace rule was motivated by the inaccuracy of punch card operators. A terrible design!

Example
Consider:
  DO 5 I = 1,25
  DO 5 I = 1.25
The first is the loop header DO 5 I = 1, 25; the second is the assignment DO5I = 1.25. Reading left-to-right, the lexical analyzer cannot tell whether DO5I is a variable or a DO statement until the "," is reached.

Lexical Analysis in FORTRAN: Lookahead
Two important points:
1. The goal is to partition the string. This is implemented by reading left-to-right, recognizing one token at a time.
2. Lookahead may be required to decide where one token ends and the next token begins. Even our simple example has lookahead issues: i vs. if, and = vs.
==

Another Great Moment in Scanning History
PL/1: keywords can be used as identifiers:
  IF THEN THEN THEN = ELSE; ELSE ELSE = IF
It can be difficult to determine how to label the lexemes.

More Modern True Crimes in Scanning
Nested template declarations in C++:
  vector<vector<int>> myvector
can be misread as
  (vector < (vector < (int >> myvector)))

Review
The goal of lexical analysis is to:
- Partition the input string into lexemes (the smallest program units that are individually meaningful)
- Identify the token of each lexeme
The scan is left-to-right; lookahead is sometimes required.

Next
We still need:
- A way to describe the lexemes of each token
- A way to resolve ambiguities: Is "if" two variables i and f? Is "==" two equal signs "=" "="?

Regular Languages
There are several formalisms for specifying tokens. Regular languages are the most popular:
- Simple and useful theory
- Easy to understand
- Efficient implementations

Languages
Def. Let Σ be a set of characters. A language Λ over Σ is a set of strings of characters drawn from Σ (Σ is called the alphabet of Λ).

Examples of Languages
- Alphabet = English characters; Language = English sentences. Not every string of English characters is an English sentence.
- Alphabet = ASCII; Language = C programs. Note: the ASCII character set is different from the English character set.

Notation
Languages are sets of strings, so we need notation for specifying which sets of strings we want our language to contain. The standard notation for regular languages is regular expressions.

Atomic Regular Expressions
- Single character: 'c' = { "c" }
- Epsilon: ε = { "" }

Compound Regular Expressions
- Union: A + B = { s | s ∈ A or s ∈ B }
- Concatenation: AB = { ab | a ∈ A and b ∈ B }
- Iteration: A* = ∪_{i ≥ 0} A^i, where A^i = A A ... A (i times) and A^0 = { ε }

Regular Expressions
Def. The regular expressions over Σ are the smallest set of expressions including:
- ε
- 'c', where c ∈ Σ
- A + B, where A and B are regular expressions over Σ
- AB, where A and B are regular expressions over Σ
- A*, where A is a regular expression over Σ

Syntax vs.
Semantics
To be careful, we should distinguish the syntax and the semantics (meaning) of regular expressions:
  L(ε) = { "" }
  L('c') = { "c" }
  L(A + B) = L(A) ∪ L(B)
  L(AB) = { ab | a ∈ L(A) and b ∈ L(B) }
  L(A*) = ∪_{i ≥ 0} L(A)^i

Example: Keyword
Keyword: "else" or "if" or "begin" or ...
  'else' + 'if' + 'begin' + ...
Note: 'else' abbreviates 'e''l''s''e'.

Example: Integers
Integer: a non-empty string of digits.
  digit = '0' + '1' + '2' + '3' + '4' + '5' + '6' + '7' + '8' + '9'
  integer = digit digit*
Abbreviation: A+ = A A*

Example: Identifier
Identifier: strings of letters or digits, starting with a letter.
  letter = 'A' + ... + 'Z' + 'a' + ... + 'z'
  identifier = letter (letter + digit)*
Is (letter* + digit*) the same?

Example: Whitespace
Whitespace: a non-empty sequence of blanks, newlines, and tabs.
  (' ' + '\n' + '\t')+

Example 1: Phone Numbers
Regular expressions are all around you! Consider Σ = digits ∪ { +, -, (, ) }:
  country = digit digit
  city = digit digit digit
  univ = digit digit digit
  extension = digit digit digit digit
  phone_num = '+' country city univ extension

Example 2: Addresses
Consider Σ = letters ∪ { '.', '@' }:
  name = letter+
  address = name '@' name '.' name '.' name

Summary
Regular expressions describe many useful languages. Regular languages are a language specification; we still need an implementation. Next: given a string s and a regular expression R, is s ∈ L(R)? A yes/no answer is not enough! Instead we must partition the input into tokens. We will adapt regular expressions to this goal.

Implementation of Lexical Analysis

Outline
- Specifying lexical structure using regular expressions
- Finite automata: Deterministic Finite Automata (DFAs) and Non-deterministic Finite Automata (NFAs)
- Implementation of regular expressions: RegExp → NFA → DFA → Tables

Notation
For convenience, we will use a variation of regular expression notation (we will allow user-defined abbreviations):
- Union: A + B, also written A | B
- Option: A + ε, also written A?
- Range: a + b + ... + z, also written [a-z]
- Excluded range: the complement of [a-z], written [^a-z]

Regular Expressions → Lexical Specifications
1. Select a set of tokens: Integer, Keyword, Identifier, LeftPar, ...
2. Write a regular expression (pattern) for the lexemes of each token:
   Integer = digit+
   Keyword = 'if' + 'else' + ...
   Identifier = letter (letter + digit)*
   LeftPar = '('
3. Construct R, a regular expression matching all lexemes for all tokens:
   R = Keyword + Identifier + Integer + ... = R1 + R2 + R3 + ...
   Facts: if s ∈ L(R), then s is a lexeme; furthermore, s ∈ L(Ri) for some i, and this i determines the token that is reported.
4. Let the input be x1...xn (where x1...xn are characters in the language alphabet). For 1 ≤ i ≤ n, check whether x1...xi ∈ L(R).
5. It must be that x1...xi ∈ L(Rj) for some i and j (if there is a choice, pick the smallest such j).
6. Report token j, remove x1...xi from the input, and go to step 4.

How to Handle Spaces and Comments?
1. We could create a token Whitespace:
   Whitespace = (' ' + '\n' + '\t')+
   We could also add comments in there. An input "\t\n 555 " is then transformed into Whitespace Integer Whitespace.
2. The lexical analyzer skips spaces (preferred). Modify step 5 from before as follows: it must be that xk...xi ∈ L(Rj) for some j, such that x1...xk-1 ∈ L(Whitespace). The parser is then not bothered with spaces.

Ambiguities (1)
There are ambiguities in the algorithm. How much input is used? What if both x1...xi ∈ L(R) and x1...xK ∈ L(R)? The maximal munch rule: pick the longest possible substring that matches R.

Ambiguities (2)
Which token is used? What if both x1...xi ∈ L(Rj) and x1...xi ∈ L(Rk)? Rule: use the rule listed first (j if j < k). Example: R1 = Keyword and R2 = Identifier; "if" matches both. This treats "if" as a keyword, not an identifier.

Error Handling
What if no rule matches a prefix of the input?
Problem: the lexer can't just get stuck. Solution: write a rule matching all "bad" strings, and put it last. Lexical analysis tools allow the writing of:
  R = R1 + ... + Rn + Error
The token Error matches if nothing else matches.

Summary
Regular expressions provide a concise notation for string patterns. Their use in lexical analysis requires small extensions: to resolve ambiguities and to handle errors. Good algorithms are known (next); they require only a single pass over the input and few operations per character (a table lookup).

Regular Languages & Finite Automata
A basic formal language theory result: regular expressions and finite automata both define the class of regular languages. Thus, we are going to use:
- Regular expressions for specification
- Finite automata for implementation (automatic generation of lexical analyzers)

Finite Automata
A finite automaton is a recognizer for the strings of a regular language. A finite automaton consists of:
- A finite input alphabet Σ
- A set of states S
- A start state n
- A set of accepting states F ⊆ S
- A set of transitions: state →(input) state

Finite Automata: Transitions
A transition s1 →(a) s2 is read: "in state s1, on input a, go to state s2". At the end of the input, if the automaton is in an accepting state, it accepts; otherwise it rejects. If no transition is possible, it rejects.

Finite Automata: State Graphs
State graphs draw a circle for each state, an incoming arrow for the start state, a double circle for each accepting state, and an edge labeled with the input character for each transition.

A Simple Example
A finite automaton that accepts only "1". A finite automaton accepts a string if we can follow transitions labeled with the characters in the string from the start state to some accepting state.

Another Simple Example
A finite automaton accepting any number of 1's followed by a single 0. Alphabet: {0, 1}.

And Another Example
Alphabet: {0, 1}. What language does this recognize? (The diagram is not reproduced here.)
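Automata like these are straightforward to simulate directly. The sketch below encodes the "any number of 1's followed by a single 0" DFA as a transition dictionary; the state names and the dictionary encoding are illustrative choices, not from the slides.

```python
# DFA for the language 1*0: any number of 1's followed by a single 0.
# States: "start" (still reading 1's) and "done" (just read the final 0).
# A missing table entry means "no transition": the automaton gets stuck and rejects.
TRANSITIONS = {
    ("start", "1"): "start",
    ("start", "0"): "done",
}
ACCEPTING = {"done"}

def accepts(s):
    state = "start"
    for ch in s:
        state = TRANSITIONS.get((state, ch))  # None if no transition exists
        if state is None:
            return False                      # stuck: reject
    return state in ACCEPTING                 # accept iff we end in an accepting state

print(accepts("1110"))  # True
print(accepts("1101"))  # False: input continues past the 0
```

Note that acceptance is decided only at the end of the input: "11" is rejected not because it gets stuck, but because "start" is not an accepting state.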
And Another Example
Alphabet still {0, 1}. This automaton has two transitions labeled 1 out of the same state, so its operation is not completely defined by the input: on input "11" the automaton could be in either state.

Epsilon Moves
Another kind of transition: ε-moves. On an ε-move, the machine can move from state A to state B without reading input.

Deterministic and Non-Deterministic Automata
- Deterministic Finite Automata (DFA): one transition per input per state; no ε-moves.
- Non-deterministic Finite Automata (NFA): can have multiple transitions for one input in a given state; can have ε-moves.
Finite automata have finite memory: only enough to encode the current state.

Execution of Finite Automata
A DFA can take only one path through the state graph, completely determined by the input. NFAs can choose: whether to make ε-moves, and which of multiple transitions for a single input to take.

Acceptance of NFAs
An NFA can get into multiple states at once. Rule: an NFA accepts an input if it can get into a final state.

NFA vs. DFA (1)
NFAs and DFAs recognize the same set of languages (the regular languages). DFAs are easier to implement: there are no choices to consider.

NFA vs. DFA (2)
For a given language, the NFA can be simpler than the DFA. In the worst case, the DFA can be exponentially larger than the NFA.

Regular Expressions to Finite Automata
High-level sketch: Lexical Specification → Regular expressions → NFA → DFA → Table-driven implementation of the DFA.

Regular Expressions to NFA (1)
For each kind of regular expression, define an NFA. Notation: the NFA for a regular expression M is drawn with one start state and one accepting state. Base cases: an NFA for ε (a single ε-transition) and an NFA for input a (a single a-transition).

Regular Expressions to NFA (2)
For AB: connect the accepting state of A's NFA to the start state of B's NFA with an ε-transition. For A + B: add a new start state with ε-transitions into the NFAs for A and B, and ε-transitions from their accepting states into a new accepting state.

Regular Expressions to NFA (3)
For A*: add ε-transitions that allow skipping A entirely and looping from A's accepting state back to its start.

Example of Regular Expression → NFA Conversion
Consider the regular expression (1+0)*1. (The NFA diagram, with states A through J and a final 1-transition into the accepting state J, is not reproduced here.)

NFA to DFA.
The Trick
Simulate the NFA. Each state of the DFA is a non-empty subset of the states of the NFA. The start state is the set of NFA states reachable through ε-moves from the NFA start state. Add a transition S →(a) S' to the DFA iff S' is the set of NFA states reachable from any state in S after seeing the input a, considering ε-moves as well.

NFA to DFA: Remark
An NFA may be in many states at any time. How many different states? If there are N states, the NFA must be in some subset of those N states. How many non-empty subsets are there? 2^N − 1, i.e. finitely many.

NFA to DFA: Example
For the NFA of (1+0)*1, the construction yields the DFA states {A,B,C,D,H,I} (the start state), {F,G,A,B,C,D,H,I}, and {E,J,G,A,B,C,D,H,I}. (The diagram is not reproduced here.)

Implementation
A DFA can be implemented by a 2D table T: one dimension is the states, the other is the input symbols. For every transition Si →(a) Sk, define T[i, a] = k. DFA execution: if in state Si on input a, read T[i, a] = k and move to state Sk. Very efficient.

Table Implementation of a DFA
      0   1
  S   T   U
  T   T   U
  U   T   U

Implementation (Cont.)
NFA → DFA conversion is at the heart of tools such as lex, ML-Lex, flex, JLex, ... But DFAs can be huge. In practice, lex-like tools trade off speed for space in the choice of NFA to DFA conversion.

Theory vs. Practice
Two differences:
- DFAs recognize lexemes. A lexer must return a type of acceptance (a token type) rather than simply an accept/reject indication.
- DFAs consume the complete string and accept or reject it. A lexer must find the end of the lexeme in the input stream, and then find the next one, etc.
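The lexing discipline developed above (maximal munch, earlier-listed rules break ties, an Error rule listed last so the lexer never gets stuck) can be sketched in a few lines. This is only an illustrative sketch: it uses Python's re module in place of a generated table-driven DFA, and the rule list is an assumption based on the tokens discussed in this document.

```python
import re

# Rules in priority order: on equal-length matches, the first listed rule wins.
# Keyword precedes Identifier, so "if" lexes as a keyword; the Error rule is
# last and matches any single bad character, so the lexer never gets stuck.
RULES = [
    ("Whitespace", re.compile(r"[ \n\t]+")),
    ("Keyword",    re.compile(r"if|then|else|begin")),
    ("Identifier", re.compile(r"[A-Za-z][A-Za-z0-9]*")),
    ("Integer",    re.compile(r"[0-9]+")),
    ("Relation",   re.compile(r"==")),
    ("Punct",      re.compile(r"[();=]")),
    ("Error",      re.compile(r".")),
]

def lex(text):
    tokens, pos = [], 0
    while pos < len(text):
        # Maximal munch: take the longest match starting at pos.
        # max() returns the first maximal element, so ties go to the earlier rule.
        name, end = max(
            ((name, m.end()) for name, rx in RULES
             if (m := rx.match(text, pos))),
            key=lambda t: t[1],
        )
        if name != "Whitespace":          # skip spaces, as in the preferred scheme
            tokens.append((name, text[pos:end]))
        pos = end
    return tokens

print(lex("if (i == j) then z = 0;"))
```

Maximal munch is visible on input "ifx": Keyword matches only "if" (length 2) while Identifier matches "ifx" (length 3), so the longer Identifier match wins; on "if" alone the lengths tie and the earlier Keyword rule wins.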