This file describes the flex test suite.

* WHO SHOULD USE THE TEST SUITE?

The test suite is intended to be used by flex developers, i.e., anyone
hacking the flex distribution. If you are simply installing flex, then
you can ignore this directory and its contents.

* STRUCTURE OF THE TEST SUITE

The test suite consists of a large number of tests. In a "simple test",
the check is simply that a scanner consumes all the tokens fed to it
from a text (.txt) file in a matching grammar, not allowing stray
characters to echo through to stdout. This is how we avoid needing
explicit check files. Each test is centered around a scanner known to
work with the most recent version of flex.

In general, after you modify your copy of the flex distribution, you
should re-run the test suite. Some of the tests may require certain
tools to be available (e.g., bison, diff). If any test returns an error
or generates an error message, then your modifications *may* have
broken a feature of flex. At a minimum, you'll want to investigate the
failure and determine whether it's truly significant.

* HOW TO RUN THE TEST SUITE

To build and execute all tests from the top level of the flex source
tree:

$ make check

To build and execute a single test:

$ cd tests/   # from the top level of the flex tree.
$ make testname.log

where "testname" is the name of the test. This is an automake-ism that
will create (or re-create, if need be) a log of the particular test
run that you're working on.

* HOW TO ADD A NEW TEST TO THE TEST SUITE

** List your test in the TESTS variable in Makefile.am in this
directory. Note that due to the large number of tests, we use variables
to group similar tests together. This also helps with handling the
automake test suite requirements. Hopefully your test can be listed in
SIMPLE_TESTS. You'll need to add the appropriate automake _SOURCES
variable as well, and .gitignore lines for the binary and generated
code.
If you're unsure, then consult the automake manual, paying attention to
the parallel test harness section.

** On success, your test should return zero.

** On error, your test should return 1 (one) and print a message to
stderr, which will have been redirected to the log file created by the
automake test suite harness.

** If your test is skipped (e.g., because bison was not found), then
the test should return 77 (seventy-seven). This is the exit status
recognized by automake's "test-driver" as _skipped_.

** Once your work is done, submit a patch via the flex development
mailing list, the GitHub pull request mechanism, or some other suitable
means.

* NAMING CONVENTIONS

A test with an _nr suffix exercises a non-reentrant scanner built with
the default cpp back end.

A test with an _r suffix exercises a reentrant scanner built with the
default cpp back end.

A test with a _c99 suffix exercises the c99 back end. All C99 scanners
are reentrant.

A test with a _cpp suffix exercises the default cpp back end on a
specification where the reentrant/non-reentrant distinction is not
interesting.

Most tests occur in groups with a common stem in the names, like
alloc_extra_ or ccl_. These exercise the same token grammar under
different back ends. As new target languages are added, these groups of
parallel tests will grow. Tests that are not part of one of these
series are usually of features supported on the default cpp back end
only.

* WHY SOME TESTS ARE MISSING

The "top" test is backend-independent; what it's really testing is
flex's ability to accumulate and ship preamble code sections. The c99
back end is missing tests for the Bison bridge, header generation, and
loadable tables because it omits those features in order to be a
simpler starting point for writing new back ends.