1 Lexers
(require parser-tools/lex)  package: parser-tools-lib
1.1 Creating a Lexer
syntax
(lexer maybe-suppress-warnings [trigger action-expr] ...)
maybe-suppress-warnings =
                        | #:suppress-warnings

trigger = re
        | (eof)
        | (special)
        | (special-comment)

re = id
   | string
   | character
   | (repetition lo hi re)
   | (union re ...)
   | (intersection re ...)
   | (complement re)
   | (concatenation re ...)
   | (char-range char char)
   | (char-complement re)
   | (id datum ...)
The implementation of syntax-color/racket-lexer contains a lexer for the racket language. In addition, files in the "examples" sub-directory of the "parser-tools" collection contain simpler example lexers.
An re is matched as follows:
id — expands to the named lexer abbreviation; abbreviations are defined via define-lex-abbrev or supplied by modules like parser-tools/lex-sre.
string — matches the sequence of characters in string.
character — matches a literal character.
(repetition lo hi re) — matches re repeated between lo and hi times, inclusive; hi can be +inf.0 for unbounded repetition.
(union re ...) — matches if any of the sub-expressions match.
(intersection re ...) — matches if all of the res match.
(complement re) — matches anything that re does not.
(concatenation re ...) — matches each re in succession.
(char-range char char) — matches any character between the two (inclusive); a single-character string can be used as a char.
(char-complement re) — matches any character not matched by re; the sub-expression re must match only single characters.
(id datum ...) — expands the lexer macro named id; macros are defined via define-lex-trans.
Note that both (concatenation) and "" match the empty string, (union) matches nothing, (intersection) matches any string, and (char-complement (union)) matches any single character.
The regular expression language is not designed to be used directly, but rather as a basis for a user-friendly notation written with regular expression macros. For example, parser-tools/lex-sre supplies operators from Olin Shivers’s SREs, and parser-tools/lex-plt-v200 supplies (deprecated) operators from the previous version of this library. Since those libraries provide operators whose names match other Racket bindings, such as * and +, they normally must be imported using a prefix:
The suggested prefix is :, so that :* and :+ are imported. Of course, a prefix other than : (such as re-) will work too.
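For example, the prefixed import looks like this (the same form used in the examples later in this section):

```racket
#lang racket
(require parser-tools/lex
         (prefix-in : parser-tools/lex-sre))

;; The SRE operators are now available as :*, :+, :or, ::, and so on,
;; without shadowing Racket's own * and + bindings.
```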
Since negation is not a common operator on regular expressions, here are a few examples, using : prefixed SRE syntax:
(complement "1")
Matches all strings except the string "1", including "11", "111", "0", "01", "", and so on.
(complement (:* "1"))
Matches all strings that are not sequences of "1", including "0", "00", "11110", "0111", "11001010" and so on.
(:& (:: any-string "111" any-string)
    (complement (:or (:: any-string "01") (:+ "1"))))
Matches all strings that have 3 consecutive ones, but not those that end in "01" and not those that consist of ones only. These include "1110", "0001000111" and "0111", but not "", "11", "11101", "111" and "11111".
(:: "/*" (complement (:: any-string "*/" any-string)) "*/")
Matches Java/C block comments: "/**/", "/******/", "/*////*/", "/*asg4*/", and so on. It does not match "/**/*/", "/* */ */", and so on. (:: any-string "*/" any-string) matches any string that has a "*/" in it, so (complement (:: any-string "*/" any-string)) matches any string without a "*/" in it.
(:: "/*" (:* (complement "*/")) "*/")
Matches any string that starts with "/*" and ends with "*/", including "/* */ */ */". (complement "*/") matches any string except "*/". This includes "*" and "/" separately. Thus (:* (complement "*/")) matches "*/" by first matching "*" and then matching "/". Any other string is matched directly by (complement "*/"). In other words, (:* (complement "xx")) = any-string. It is usually not correct to place a :* around a complement.
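The correct block-comment pattern from above can be dropped straight into a lexer trigger. The sketch below (the name comment-lexer is illustrative) treats a block comment as whitespace; note that complement is part of the core re language, so it needs no : prefix:

```racket
#lang racket
(require parser-tools/lex
         (prefix-in : parser-tools/lex-sre))

;; A sketch: skips Java/C block comments and returns everything
;; else one character at a time.
(define comment-lexer
  (lexer
   [(eof) eof]
   ;; the block-comment pattern derived in the text above
   [(:: "/*" (complement (:: any-string "*/" any-string)) "*/")
    (comment-lexer input-port)]   ; treat the comment as whitespace
   [any-char lexeme]))

(define p (open-input-string "a/* comment */b"))
(comment-lexer p)  ; "a"
(comment-lexer p)  ; "b"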
The start-pos, end-pos, lexeme, input-port, and return-without-pos forms have special meaning inside of a lexer.
The lexer raises an exception (exn:read) if none of the regular expressions match the input. Hint: if (any-char custom-error-behavior) is the last rule, then there will always be a match, and custom-error-behavior is executed to handle the error situation as desired, consuming only the first character from the input buffer.
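The catch-all idiom from the hint can be sketched as follows (the lexer name and the 'unexpected result are illustrative choices, not part of the library):

```racket
#lang racket
(require parser-tools/lex)

;; The final any-char rule always matches, so the lexer never raises
;; exn:read; instead it consumes one character and reports it.
(define safe-lexer
  (lexer
   [(eof) eof]
   [numeric (string->number lexeme)]
   [whitespace (safe-lexer input-port)]
   [any-char (list 'unexpected lexeme)]))

(define p (open-input-string "1 ?2"))
(safe-lexer p)  ; 1
(safe-lexer p)  ; '(unexpected "?")
(safe-lexer p)  ; 2
```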
In addition to returning characters, input ports can return eof-objects. Custom input ports can also return a special-comment value to indicate a non-textual comment, or return another arbitrary value (a special). The non-re trigger forms handle these cases:
The (eof) rule is matched when the input port returns an eof-object value. If no (eof) rule is present, the lexer returns the symbol 'eof when the port returns an eof-object value.
The (special-comment) rule is matched when the input port returns a special-comment structure. If no special-comment rule is present, the lexer automatically tries to return the next token from the input port.
The (special) rule is matched when the input port returns a value other than a character, eof-object, or special-comment structure. If no (special) rule is present, the lexer returns (void).
End-of-files, specials, special-comments and special-errors cannot be parsed via a rule using an ordinary regular expression (but dropping down and manipulating the port to handle them is possible in some situations).
Since the lexer gets its source information from the port, use port-count-lines! to enable the tracking of line and column information. Otherwise, the line and column information will return #f.
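A minimal sketch of enabling location tracking (using lexer-src-pos, which is documented below; the lexer name is illustrative):

```racket
#lang racket
(require parser-tools/lex)

;; Without the port-count-lines! call, position-line and position-col
;; of the returned positions would be #f.
(define pos-lexer
  (lexer-src-pos
   [(eof) eof]
   [any-char lexeme]))

(define p (open-input-string "x"))
(port-count-lines! p)              ; enable line/column tracking
(define pt (pos-lexer p))
(position-line (position-token-start-pos pt))  ; 1
```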
If peeking from the input port raises an exception (e.g., because an embedded XML editor contains malformed syntax), the exception can be raised before all tokens preceding the exception point have been returned.
Each time the Racket code for a lexer is compiled (e.g., when a ".rkt" file containing a lexer form is loaded), the lexer generator is run. To avoid this overhead, place the lexer into a module and compile the module to a ".zo" bytecode file.
If the lexer can accept the empty string, a warning message is sent to current-logger. These warnings can be disabled by supplying the #:suppress-warnings flag.
> (define sample-input "( lambda (a ) (add_number a 42 ))")

; A function that partially tokenizes the sample input data
> (define (get-tokens a-lexer)
    (define p (open-input-string sample-input))
    (list (a-lexer p) (a-lexer p) (a-lexer p) (a-lexer p) (a-lexer p)))

; A lexer that uses primitive operations directly
> (define the-lexer/primitive
    (lexer
     [(eof) eof]
     ["(" 'left-paren]
     [")" 'right-paren]
     [(repetition 1 +inf.0 numeric) (string->number lexeme)]
     [(concatenation (union alphabetic #\_)
                     (repetition 0 +inf.0 (union alphabetic numeric #\_)))
      lexeme]
     ; invoke the lexer again to skip the current token
     [whitespace (the-lexer/primitive input-port)]))
> (get-tokens the-lexer/primitive)
'(left-paren "lambda" left-paren "a" right-paren)

; Another lexer that uses SRE operators but has the same functionality
> (require (prefix-in : parser-tools/lex-sre))
> (define the-lexer/SRE
    (lexer
     [(eof) eof]
     ["(" 'left-paren]
     [")" 'right-paren]
     [(:+ numeric) (string->number lexeme)]
     [(:: (:or alphabetic #\_) (:* (:or alphabetic numeric #\_))) lexeme]
     [whitespace (the-lexer/SRE input-port)]))
> (get-tokens the-lexer/SRE)
'(left-paren "lambda" left-paren "a" right-paren)
Changed in version 7.7.0.7 of package parser-tools-lib: Added the #:suppress-warnings flag.
syntax
(lexer-src-pos maybe-suppress-warnings [trigger action-expr] ...)
Like lexer, but the result of each action-expr is automatically wrapped in a position-token structure that records the token's start and end positions.
syntax
start-pos
syntax
end-pos
syntax
lexeme
syntax
input-port
syntax
return-without-pos
(define get-token
  (lexer-src-pos
   ...
   [(special-comment) (return-without-pos (get-token input-port))]
   ...))
struct
(struct position (offset line col)
        #:extra-constructor-name make-position)
  offset : exact-positive-integer?
  line : exact-positive-integer?
  col : exact-nonnegative-integer?
struct
(struct position-token (token start-pos end-pos)
        #:extra-constructor-name make-position-token)
  token : any/c
  start-pos : position?
  end-pos : position?
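A minimal sketch of a lexer-src-pos lexer producing position-token values (the lexer name is illustrative); return-without-pos keeps the recursive call's already-wrapped result from being wrapped a second time:

```racket
#lang racket
(require parser-tools/lex)

;; Each result is wrapped in a position-token with start/end positions.
(define word-lexer
  (lexer-src-pos
   [(eof) eof]
   [(repetition 1 +inf.0 alphabetic) lexeme]
   [whitespace (return-without-pos (word-lexer input-port))]))

(define p (open-input-string "ab cd"))
(define t (word-lexer p))
(position-token-token t)                        ; "ab"
(position-offset (position-token-start-pos t))  ; 1
(position-offset (position-token-end-pos t))    ; 3 (one past the token)
```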
1.2 Lexer Abbreviations and Macros
syntax
(char-set string)
A lexer macro that matches any character in string.
syntax
any-char
syntax
any-string
syntax
nothing
syntax
alphabetic
syntax
lower-case
syntax
upper-case
syntax
title-case
syntax
numeric
syntax
symbolic
syntax
punctuation
syntax
graphic
syntax
whitespace
syntax
blank
syntax
iso-control
syntax
(define-lex-abbrev id re)
Defines a lexer abbreviation by associating a regular expression to be used in place of id in other regular expressions.
syntax
(define-lex-abbrevs (id re) ...)
Like define-lex-abbrev, but defines several lexer abbreviations at once.
syntax
(define-lex-trans id trans-expr)
Defines a lexer macro, where trans-expr produces a transformer procedure that takes one argument. When (id datum ...) appears as a regular expression, it is replaced with the result of applying the transformer to the expression.
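A small sketch of define-lex-abbrev in use (the abbreviation and lexer names are illustrative):

```racket
#lang racket
(require parser-tools/lex
         (prefix-in : parser-tools/lex-sre))

;; Name the identifier pattern once, then reuse it in a lexer trigger.
(define-lex-abbrev identifier
  (:: (:or alphabetic #\_) (:* (:or alphabetic numeric #\_))))

(define id-lexer
  (lexer
   [(eof) eof]
   [identifier lexeme]
   [whitespace (id-lexer input-port)]))

(define p (open-input-string "foo _bar1"))
(id-lexer p)  ; "foo"
(id-lexer p)  ; "_bar1"
```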
1.3 Lexer SRE Operators
(require parser-tools/lex-sre)  package: parser-tools-lib
syntax
(* re ...)
Repetition of the re sequence zero or more times.
syntax
(+ re ...)
Repetition of the re sequence one or more times.
syntax
(? re ...)
Zero or one occurrence of the re sequence.
syntax
(= n re ...)
Exactly n occurrences of the re sequence.
syntax
(>= n re ...)
At least n occurrences of the re sequence.
syntax
(** n m re ...)
Between n and m (inclusive) occurrences of the re sequence, where m can be +inf.0 for unbounded repetition.
syntax
(or re ...)
The same as (union re ...).
syntax
(& re ...)
The same as (intersection re ...).
syntax
(- re ...)
The set difference of the res; each re must match exactly one character.
syntax
(~ re ...)
Character-set complement: matches any single character not matched by any re; each re must match exactly one character.
syntax
(/ char-or-string ...)
Character ranges, pairing up successive characters (including the characters within the strings) to form inclusive ranges.
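A sketch combining several of these operators under the usual : prefix (the lexer name and phone-number-like pattern are illustrative):

```racket
#lang racket
(require parser-tools/lex
         (prefix-in : parser-tools/lex-sre))

;; := for exact counts, :/ for character ranges, :? for optional parts.
(define sre-lexer
  (lexer
   [(eof) eof]
   [(:: (:= 3 (:/ "09")) (:? (:: "-" (:= 4 (:/ "09"))))) lexeme]
   [whitespace (sre-lexer input-port)]))

(define p (open-input-string "123 456-7890"))
(sre-lexer p)  ; "123"
(sre-lexer p)  ; "456-7890"
```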
1.4 Lexer Legacy Operators
(require parser-tools/lex-plt-v200)  package: parser-tools-lib
syntax
(epsilon)
A lexer macro that matches the empty string.
syntax
(~ re ...)
The same as (complement re ...).
1.5 Tokens
Each action-expr in a lexer form can produce any kind of value, but for many purposes, producing a token value is useful. Tokens are usually necessary for inter-operating with a parser generated by parser-tools/yacc or parser-tools/cfg-parser, but tokens may not be the right choice when using lexer in other situations.
> (define-tokens basic-tokens (NUM ID))
> (define-empty-tokens punct-tokens (LPAREN RPAREN EOF))

> (define the-lexer/tokens
    (lexer
     [(eof) (token-EOF)]
     ["(" (token-LPAREN)]
     [")" (token-RPAREN)]
     [(:+ numeric) (token-NUM (string->number lexeme))]
     [(:: (:or alphabetic #\_) (:* (:or alphabetic numeric #\_)))
      (token-ID (string->symbol lexeme))]
     [whitespace (the-lexer/tokens input-port)]))

; Use get-tokens defined in Creating a Lexer
> (get-tokens the-lexer/tokens)
(list 'LPAREN (token 'ID 'lambda) 'LPAREN (token 'ID 'a) 'RPAREN)
syntax
(define-tokens group-id (token-id ...))
Binds group-id to the group of tokens being defined. For each token-id, a constructor token-token-id is defined; it takes one value and produces a token carrying it.
A token cannot be named error, since error has a special use in the parser.
syntax
(define-empty-tokens group-id (token-id ...))
Like define-tokens, except that each token constructor token-token-id takes no arguments and returns (quote token-id).
procedure
(token-name t) → symbol?
  t : (or/c token? symbol?)
Returns the name of a token that is represented either by a symbol or a token structure.
procedure
(token-value t) → any/c
  t : (or/c token? symbol?)
Returns the value of a token that is represented either by a symbol or a token structure, returning #f for a symbol token.
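A short sketch of the two accessors; it relies on define-tokens and define-empty-tokens as described above (the group and token names are illustrative):

```racket
#lang racket
(require parser-tools/lex)

(define-tokens value-tokens (NUM))
(define-empty-tokens no-value-tokens (EOF))

;; A token with a value is a structure; an empty token is just a symbol.
(define t (token-NUM 42))
(token-name t)            ; 'NUM
(token-value t)           ; 42
(token-name (token-EOF))  ; 'EOF
(token-value (token-EOF)) ; #f
```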