lexertl: The Modular Lexical Analyser Generator

Introduction

lexertl is a modern, modular lexical analyser generator. See the Wikipedia article on lexical analysis at http://en.wikipedia.org/wiki/Lexical_analysis to learn what a lexical analyser is.

Traditionally, programs such as lex generate source code as their output and support only one programming language. With lexertl I am seeking to offer much more flexibility than that by exposing the state machine that is generated from a supplied lex specification. By doing this, the user has much more freedom in how they process the data.

lexertl compiles equally well with GCC and with Visual C++ 7.1 and above.

Design Goals

C++11

lexertl was originally built using Visual C++ 6 with the intention of keeping the coding style as simple as possible, to remain compatible with the pitiful compilers of the day. Although I have striven to retain that goal, the library's requirements have indeed moved beyond those limitations.

Just as Visual C++ gained reasonable C++03 conformance, C++11 has now slid into view. The new standard is being implemented rapidly and it surely can't be long before the usage of things like auto_ptr in lexertl will be more than a little frowned upon.

More seriously, C++11 specifies Unicode encodings for string literals and this will be invaluable when using lexertl in the future. When Visual C++ supports these literals then I will add a macro to enable new typedefs for the lexertl template classes.

What is a Lexical Analyser?

A lexical analyser is a program that takes text as input and outputs the substrings it recognises. Each substring is referred to as a 'token' and has a numeric identifier assigned to it to uniquely identify its type (string, int, keyword etc.). A rule is defined, using regular expressions, for each token that must be recognised.

Regular Expressions

A good way to learn how regular expressions really work is to use the lexertl::debug::dump() function (see a later section on this page for how). In general, people are so used to Perl-style expressions that they can be confused when constructs such as look-ahead assertions are not supported by a lexer generator. I will therefore show a couple of subtler-than-average regular expressions.

SQL Server Style Strings

'(''|[^'])*'
This matches SQL Server style strings. The state machine is shown below:

C Style Comment Recognition

\/\*([^*]|\*+[^/*])*\*+\/
Matches a C style comment:

Note that the same state machine can be achieved with the following shorter regex:

\/\*(.|\n)*?\*\/
However, if you take that approach, be careful to declare the pattern near the end of your lexer rules. The abstemious (lazy) repeat operator is very powerful and, if used early in your rules, will take precedence over later greedy repeats.

(Currently) Supported Regular Expression Syntax

Character Representations

Sequence    Meaning

\a          Alert (bell).
\b          Backspace.
\e          ESC character, x1B.
\n          Newline.
\r          Carriage return.
\f          Form feed, x0C.
\t          Horizontal tab, x09.
\v          Vertical tab, x0B.
\octal      Character specified by a three-digit octal code.
\xhex       Character specified by a hexadecimal code.
\cchar      Named control character.
"..."       All characters taken as literals between double quotes, except
            escape sequences.

Character Classes and class-like Constructs

Sequence    Meaning

[...]       A single character listed or contained within a listed range.
            Ranges can be combined with the {+} and {-} operators. For
            example, [a-z]{+}[0-9] is the same as [0-9a-z] and
            [a-z]{-}[aeiou] is the same as [b-df-hj-np-tv-z].
[^...]      A single character not listed and not contained within a listed
            range.
.           Any character (defaults to [^\n], but can be changed to match
            any character by omitting the dot_not_newline bit flag in the
            rules constructor).
\d          Digit character ([0-9]).
\D          Non-digit character ([^0-9]).
\s          Whitespace character ([ \t\n\r\f\v]).
\S          Non-whitespace character ([^ \t\n\r\f\v]).
\w          Word character ([a-zA-Z0-9_]).
\W          Non-word character ([^a-zA-Z0-9_]).

Unicode Character Classes

Sequence    Meaning

\p{C}       Other.
\p{Cc}      Other, Control.
\p{Cf}      Other, Format.
\p{Co}      Other, Private Use.
\p{Cs}      Other, Surrogate.
\p{L}       Letter.
\p{LC}      Letter, Cased.
\p{Ll}      Letter, Lowercase.
\p{Lm}      Letter, Modifier.
\p{Lo}      Letter, Other.
\p{Lt}      Letter, Titlecase.
\p{Lu}      Letter, Uppercase.
\p{M}       Mark.
\p{Mc}      Mark, Spacing Combining.
\p{Me}      Mark, Enclosing.
\p{Mn}      Mark, Nonspacing.
\p{N}       Number.
\p{Nd}      Number, Decimal Digit.
\p{Nl}      Number, Letter.
\p{No}      Number, Other.
\p{P}       Punctuation.
\p{Pc}      Punctuation, Connector.
\p{Pd}      Punctuation, Dash.
\p{Pe}      Punctuation, Close.
\p{Pf}      Punctuation, Final quote.
\p{Pi}      Punctuation, Initial quote.
\p{Po}      Punctuation, Other.
\p{Ps}      Punctuation, Open.
\p{S}       Symbol.
\p{Sc}      Symbol, Currency.
\p{Sk}      Symbol, Modifier.
\p{Sm}      Symbol, Math.
\p{So}      Symbol, Other.
\p{Z}       Separator.
\p{Zl}      Separator, Line.
\p{Zp}      Separator, Paragraph.
\p{Zs}      Separator, Space.

Alternation and Repetition

Sequence    Meaning

...|...     Try subpatterns in alternation.
*           Match 0 or more times (greedy).
+           Match 1 or more times (greedy).
?           Match 0 or 1 times (greedy).
{n}         Match exactly n times.
{n,}        Match at least n times (greedy).
{n,m}       Match at least n times but no more than m times (greedy).
*?          Match 0 or more times (abstemious).
+?          Match 1 or more times (abstemious).
??          Match 0 or 1 times (abstemious).
{n,}?       Match at least n times (abstemious).
{n,m}?      Match at least n times but no more than m times (abstemious).
{MACRO}     Include the regex MACRO in the current regex.

Anchors

Sequence    Meaning

^           Start of string or after a newline.
$           End of string or before a newline.

Grouping

Sequence        Meaning

(...)           Group a regular expression to override default operator
                precedence.
(?r-s:pattern)  Apply option r and omit option s while interpreting pattern.
                Options may be zero or more of the characters i, s, or x:

                i means case-insensitive; -i means case-sensitive.

                s alters the meaning of '.' to match any character
                whatsoever; -s alters the meaning of '.' to match any
                character except '\n'.

                x ignores comments and whitespace in patterns. Whitespace is
                ignored unless it is backslash-escaped, contained within ""s,
                or appears inside a character class.

                These options can be applied globally at the rules level by
                passing a combination of the bit flags icase, dot_not_newline
                and skip_ws to the rules constructor.
(?# comment )   Omit everything within the parentheses. The first ) character
                encountered ends the comment, so it is not possible for the
                comment to contain a ) character. The comment may span lines.

The following POSIX character sets are supported: [:alnum:], [:alpha:], [:blank:], [:cntrl:], [:digit:], [:graph:], [:lower:], [:print:], [:punct:], [:space:], [:upper:] and [:xdigit:].

Matching

As you will see from the examples below, match_results or recursive_match_results are the structs passed to lookup. As with flex, end of input returns an id of 0 (you can set this to another value if you like using the eoi() method on the rules class) and an unrecognised token matches a single character; the id returned in this case is npos. This value is available from the static function npos() on match_results.

match_results has the following members:

    id_type id;        // id of the matched rule (npos for an unrecognised token)
    id_type user_id;   // user-defined id supplied when the rule was added
    iter_type start;   // start of the matched token
    iter_type end;     // one past the end of the matched token
    iter_type eoi;     // end of the input
    bool bol;          // whether the token starts at the beginning of a line
    id_type state;     // current lexer state ('multi state' lexing)

recursive_match_results derives from match_results and adds the following:

    std::stack<id_type_pair> stack;

This struct is used when recursive rules have been defined.

Note that various typedefs are available for match_results and they are used below. The typedefs follow the convention used by std::regex.

Examples

Construct a Lexer and Tokenise Input

#include "lexertl/generator.hpp"
#include <iostream>
#include "lexertl/lookup.hpp"

int main()
{
    lexertl::rules rules;
    lexertl::state_machine sm;

    rules.push("[0-9]+", 1);
    rules.push("[a-z]+", 2);
    lexertl::generator::build(rules, sm);

    std::string input("abc012Ad3e4");
    lexertl::smatch results(input.begin(), input.end());

    // Read ahead
    lexertl::lookup(sm, results);

    while (results.id != 0)
    {
        std::cout << "Id: " << results.id << ", Token: '" <<
            results.str() << "'\n";
        lexertl::lookup(sm, results);
    }

    return 0;
}

Use lexertl::iterator

#include "lexertl/generator.hpp"
#include <iostream>
#include "lexertl/iterator.hpp"
#include "lexertl/lookup.hpp"

int main()
{
    lexertl::rules rules;
    lexertl::state_machine sm;

    rules.push("[0-9]+", 1);
    rules.push("[a-z]+", 2);
    lexertl::generator::build(rules, sm);

    std::string input("abc012Ad3e4");
    lexertl::siterator iter(input.begin(), input.end(), sm);
    lexertl::siterator end;

    for (; iter != end; ++iter)
    {
        std::cout << "Id: " << iter->id << ", Token: '" <<
            iter->str() << "'\n";
    }

    return 0;
}

Restartable Lexing

It is possible to modify the results object to (re)start at a specific position. Because there are multiple state variables inside the results object, care needs to be taken to set it up correctly when you do this.

First of all, the most important member to set is end. Although lookup automatically copies this to start when fetching the next token, I recommend setting start at the same time for completeness (you could set id to npos too if you really wanted to!). If you are using "beginning of line" matching, then you need to set the bol flag accordingly, so that the lexer knows whether the next token starts at the beginning of a line or not. Finally, if you are using 'multi state' lexing, be sure to set the state member correctly.

Here is an example:

#include "lexertl/generator.hpp"
#include <iostream>
#include "lexertl/lookup.hpp"

int main()
{
    lexertl::rules rules;
    lexertl::state_machine sm;
    std::string input("can\ncmd\na cmd\ncmd again\nanother cmd");
    // This time we set the iterator to end(), because we are going to reset
    // it anyway.
    lexertl::smatch results(input.end(), input.end());

    rules.push("can", 1);
    rules.push("^cmd$", 2);
    rules.push("^cmd", 3);
    rules.push("cmd$", 4);
    rules.push("[a-z]+", 50);
    rules.push("\\s+", 100);
    lexertl::generator::build(rules, sm);

    // Skip the first 4 characters
    results.start = results.end = input.begin() + 4;
    results.bol = true;
    // Redundant statement, but just for example:
    results.state = 0;
    // Look ahead
    lexertl::lookup(sm, results);

    while (results.id != 0)
    {
        std::cout << "Id: " << results.id << ", Token: '" <<
            results.str() << "'\n";
        lexertl::lookup(sm, results);
    }

    return 0;
}

Construct a Lexer and Dump the State Machine

#include "lexertl/debug.hpp"
#include "lexertl/generator.hpp"
#include <iostream>

int main ()
{
    lexertl::rules rules;
    lexertl::state_machine sm;

    rules.push("[0-9]+", 1);
    rules.push("[a-z]+", 2);
    lexertl::generator::build(rules, sm);
    sm.minimise();
    lexertl::debug::dump(sm, std::cout);
    return 0;
}

Generating a Char State Machine

If you wish to write your own code generator, you will find it useful to generate a char state machine, as it contains states with simple transitions from character ranges to other states. Even in Unicode mode this allows you to observe real character ranges. In contrast, state_machine has a two-phase lookup and always slices Unicode characters into bytes in order to compress the data.

#include "lexertl/debug.hpp"
#include "lexertl/generator.hpp"
#include <iostream>

int main ()
{
    lexertl::rules rules;
    lexertl::char_state_machine csm;

    rules.push("[0-9]+", 1);
    rules.push("[a-z]+", 2);
    lexertl::char_generator::build(rules, csm);
    lexertl::debug::dump(csm, std::cout);
    return 0;
}

See state_machine.hpp for the char_state_machine structure.

Lexing a file

Two classes are included with lexertl to support iterating through a file: memory_file and stream_shared_iterator:

#include "lexertl/stream_shared_iterator.hpp"
#include <fstream>
#include "lexertl/generator.hpp"
#include <iostream>
#include "lexertl/lookup.hpp"

int main ()
{
    lexertl::rules rules;
    lexertl::state_machine sm;
    std::ifstream ifi("C:\\Ben\\Dev\\lexertl\\policy.txt");
    lexertl::stream_shared_iterator iter(ifi);
    lexertl::stream_shared_iterator end;
    lexertl::match_results<lexertl::stream_shared_iterator>
        results(iter, end);

    rules.push("[0-9]+", 1);
    rules.push("[a-z]+", 2);
    lexertl::generator::build(rules, sm);
    // Look ahead
    lexertl::lookup(sm, results);

    while (results.id != 0)
    {
        std::cout << "Id: " << results.id << ", Token: '" <<
            results.str() << "'\n";
        lexertl::lookup(sm, results);
    }

    return 0;
}

Code Generation

Currently there is a single code generator, which generates C++ table-based lookup code:

#include "lexertl/generator.hpp"
#include "lexertl/generate_cpp.hpp"
#include <iostream>

int main ()
{
    lexertl::rules rules;
    lexertl::state_machine sm;

    rules.push("[0-9]+", 1);
    rules.push("[a-z]+", 2);
    lexertl::generator::build(rules, sm);
    sm.minimise();
    lexertl::table_based_cpp::generate_cpp("lookup", sm, false, std::cout);
    return 0;
}

If you want to generate code for another language, then you should easily be able to use this generator as a template.

Multiple Lexer 'Start States'

A lexer can have more than one state machine. This allows you to lex different tokens depending on context, thus allowing simple parsing to take place. To allow this, a 'start state' must additionally be specified at the beginning of the rules.push() call and an 'exit state' at the end. If '*' is used as the start state, then the rule is applied to all lexer states. If '.' is specified as the exit state, then the lexer state is unchanged when that rule matches.

To demonstrate, the classic example below shows matching multi-line C comments using start states:

#include "lexertl/generator.hpp"
#include "lexertl/lookup.hpp"

int main()
{
    lexertl::rules rules;
    lexertl::state_machine sm;

    rules.push_state("COMMENT");

    rules.push("INITIAL", "\"/*\"", "COMMENT");
    rules.push("COMMENT", "[^*]+|.", ".");
    rules.push("COMMENT", "\"*/\"", 1, "INITIAL");

    lexertl::generator::build(rules, sm);

    std::string input("/* test */");
    lexertl::smatch results(input.begin(), input.end());

    do
    {
        lexertl::lookup(sm, results);
    } while (results.id != 0);

    return 0;
}

Note that by omitting ids for all but the end of the sequence, we can get the lexer to continue even though it has changed state.

Recursive Rules

As I don't have the time to write a complementary parser generator, I have added recursive rule support directly to lexertl. To make this work, we need a couple of tweaks. To start with, we need to introduce the concept of pushing and popping states. An exit state with '>' before the name means push, whereas '<' for the exit state means pop.

It turns out we also need a means for continuing even when a particular rule has matched. We achieve this by omitting an id when adding a rule. Finally, note that even when we specify an id (as normal) for the exit conditions, the rule will only finish when all previous pushes (with no id specified) are popped.

See the example below where all of these concepts are put together in order to parse recursive C style comments.

#include "lexertl/generator.hpp"
#include <iostream>
#include "lexertl/lookup.hpp"

int main()
{
    enum {eEOI, eComment};
    lexertl::rules rules;
    lexertl::state_machine sm;
    std::string input("/* /* recursive */*/");

    rules.push_state("OPEN");

    rules.push("INITIAL,OPEN", "[/][*]", ">OPEN");
    rules.push("OPEN", ".", ".");
    rules.push("OPEN", "[*][/]", 1, "<");
    lexertl::generator::build(rules, sm);

    lexertl::srmatch results(input.begin(), input.end());

    // Read ahead
    lexertl::lookup(sm, results);

    while (results.id != eEOI && results.id != results.npos())
    {
        switch (results.id)
        {
            case eComment:
                std::cout << results.str() << std::endl;
                break;
            default:
                std::cout << "Error at '" << &*results.start << "'\n";
                break;
        }

        lexertl::lookup(sm, results);
    }

    return 0;
}

Full Blown Grammars

A full grammar for parsing JSON is shown below:

#include "lexertl/generator.hpp"
#include <iostream>
#include "lexertl/lookup.hpp"
#include "lexertl/memory_file.hpp"
#include "lexertl/utf_iterators.hpp"

int main()
{
    try
    {
        typedef lexertl::basic_state_machine<char32_t> usm;
        typedef lexertl::basic_utf8_in_iterator<const char *, char32_t>
            utf_in_iter;
        typedef lexertl::basic_utf8_out_iterator<utf_in_iter>
            utf_out_iter;
        lexertl::rules rules;
        usm sm;
        lexertl::memory_file file("C:\\json.txt");
        const char *begin = file.data();
        const char *end = begin + file.size();
        lexertl::recursive_match_results<utf_in_iter> results(utf_in_iter(begin, end),
            utf_in_iter(end, end));
        enum {eEOF, eString, eNumber, eBoolean, eOpenOb, eName, eCloseOb,
            eOpenArr, eCloseArr, eNull};

        // http://code.google.com/p/bsn-goldparser/wiki/JsonGrammar
        rules.insert_macro("STRING", "[\"]([ -\\x10ffff]{-}[\"\\\\]|"
            "\\\\([\"\\\\/bfnrt]|u[0-9a-fA-F]{4}))*[\"]");
        rules.insert_macro("NUMBER", "-?(0|[1-9]\\d*)([.]\\d+)?([eE][-+]?\\d+)?");
        rules.insert_macro("BOOL", "true|false");
        rules.insert_macro("NULL", "null");

        rules.push_state("END");

        rules.push_state("OBJECT");
        rules.push_state("NAME");
        rules.push_state("COLON");
        rules.push_state("OB_VALUE");
        rules.push_state("OB_COMMA");

        rules.push_state("ARRAY");
        rules.push_state("ARR_COMMA");
        rules.push_state("ARR_VALUE");

        rules.push("INITIAL", "[{]", eOpenOb, ">OBJECT:END");
        rules.push("INITIAL", "[[]", eOpenArr, ">ARRAY:END");

        rules.push("OBJECT,OB_COMMA", "[}]", eCloseOb, "<");
        rules.push("OBJECT,NAME", "{STRING}", eName, "COLON");
        rules.push("COLON", ":", rules.skip(), "OB_VALUE");

        rules.push("OB_VALUE", "{STRING}", eString, "OB_COMMA");
        rules.push("OB_VALUE", "{NUMBER}", eNumber, "OB_COMMA");
        rules.push("OB_VALUE", "{BOOL}", eBoolean, "OB_COMMA");
        rules.push("OB_VALUE", "{NULL}", eNull, "OB_COMMA");
        rules.push("OB_VALUE", "[{]", eOpenOb, ">OBJECT:OB_COMMA");
        rules.push("OB_VALUE", "[[]", eOpenArr, ">ARRAY:OB_COMMA");

        rules.push("OB_COMMA", ",", rules.skip(), "NAME");

        rules.push("ARRAY,ARR_COMMA", "\\]", eCloseArr, "<");
        rules.push("ARRAY,ARR_VALUE", "{STRING}", eString, "ARR_COMMA");
        rules.push("ARRAY,ARR_VALUE", "{NUMBER}", eNumber, "ARR_COMMA");
        rules.push("ARRAY,ARR_VALUE", "{BOOL}", eBoolean, "ARR_COMMA");
        rules.push("ARRAY,ARR_VALUE", "{NULL}", eNull, "ARR_COMMA");
        rules.push("ARRAY,ARR_VALUE", "[{]", eOpenOb, ">OBJECT:ARR_COMMA");
        rules.push("ARRAY,ARR_VALUE", "[[]", eOpenArr, ">ARRAY:ARR_COMMA");

        rules.push("ARR_COMMA", ",", rules.skip(), "ARR_VALUE");

        rules.push("*", "[ \t\r\n]+", rules.skip(), ".");

        lexertl::basic_generator<lexertl::rules, usm>::build(rules, sm);
        // Read-ahead
        lexertl::lookup(sm, results);

        while (results.id != eEOF)
        {
            std::cout << "Id: " << results.id << " token: " <<
                std::string(utf_out_iter(results.start, results.end),
                utf_out_iter(results.end, results.end)) <<
                " state = " << results.state << '\n';
            lexertl::lookup(sm, results);
        }

        std::cout << "Stack has " << results.stack.size() << " values on it.\n";
    }
    catch (const std::exception &e)
    {
        std::cout << e.what () << std::endl;
    }

    return 0;
}

Acknowledgements

Thanks to Dave Handley for all his constructive criticism, performance testing, STL and general C++ tips.
Thanks to Régis Vaquette for all his testing and enthusiastic encouragement!
Thanks to Hartmut Kaiser for all his performance testing, feature requests and advice on compatibility with different compilers and, of course, boost.
Thanks to Hari Rangarajan for his bug reports and performance improvement suggestions.
