Diffstat (limited to 'doc/user.xml')
-rw-r--r--  doc/user.xml  2568
1 file changed, 0 insertions(+), 2568 deletions(-)
diff --git a/doc/user.xml b/doc/user.xml
deleted file mode 100644
index b9c8d51..0000000
--- a/doc/user.xml
+++ /dev/null
@@ -1,2568 +0,0 @@
-
- <sect1 id="runningtests">
- <title>Running Tests</title>
-
- <para>There are two ways to execute a testsuite. The most
- common way is when there is existing support in the
- <filename>Makefile</filename> of the tool being tested. This
- usually consists of a
- <emphasis>check</emphasis> target. The other way is to execute the
- <command>runtest</command> program directly. To run
- <command>runtest</command> directly from the command line requires
- either specifying all of the correct command line options, or
- having a <xref linkend="local"/> set up correctly.</para>
-
- <sect2 id="makecheck" xreflabel="Make Check">
- <title>Running 'make check'</title>
-
- <para>To run tests from an existing collection, first use
- <command>configure</command> as usual to set up the build
- directory. Then type:</para>
-
- <screen>
- make check
- </screen>
-
- <para>If the <emphasis>check</emphasis> target exists, it
- usually saves you some trouble. For instance, it can set up any
- auxiliary programs or other files needed by the tests. The most
- common file the <emphasis>check</emphasis> target depends on is
- the
- <filename>site.exp</filename> file. The
- <filename>site.exp</filename> file contains various variables that
- &dj; uses to determine the configuration of the program being
- tested. This is mostly for supporting remote testing.</para>
-
- <para>The <emphasis>check</emphasis> target is supported by GNU
- <productname>Automake</productname>. To have &dj; support added to your
- generated <filename>Makefile.in</filename>, just add the keyword
- <command>dejagnu</command> to the AUTOMAKE_OPTIONS variable in
- your <filename>Makefile.am</filename> file.</para>
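As a sketch, the relevant <filename>Makefile.am</filename> lines might look like this (`AUTOMAKE_OPTIONS` and `DEJATOOL` are real Automake variables; the tool name `frob` is hypothetical):

```makefile
# Enable Automake's DejaGnu support; this generates a check-DEJAGNU
# rule that the check target depends on.
AUTOMAKE_OPTIONS = dejagnu

# Name of the tool under test (hypothetical); runtest is invoked
# with --tool frob.
DEJATOOL = frob
```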
-
- <para>Once you have run <emphasis>make check</emphasis> to build
- any auxiliary files, you can invoke the test driver
- <command>runtest</command> directly to repeat the tests.
- You will also have to execute <command>runtest</command>
- directly for test collections with no
- <emphasis>check</emphasis> target in the
- <filename>Makefile</filename>.</para>
-
- </sect2>
-
- <sect2 id="runtest" xreflabel="Runtest">
- <title>Running runtest</title>
-
- <para><command>runtest</command> is the test driver for
- &dj;. You can specify two kinds of things on the
- <command>runtest</command> command line: command line options,
- and Tcl variables that are passed to the test scripts. The
- options are listed alphabetically below.</para>
-
- <para><command>runtest</command> returns an exit code of
- <emphasis>1</emphasis> if any test has an unexpected result. If
- all tests pass or fail as expected, <command>runtest</command>
- returns <emphasis>0</emphasis> as the exit code.</para>
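This convention makes it easy to gate a build script on the testsuite result. A minimal sketch, using a shell function with `false` as a stand-in for a runtest invocation that produced unexpected results:

```shell
# run_tests stands in for a real invocation such as
# `runtest --tool gdb`; `false` simulates exit code 1
# (at least one unexpected result).
run_tests() { false; }

if run_tests; then
    echo "all tests behaved as expected"
else
    echo "unexpected results; inspect the .sum file" >&2
fi
```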
-
- <sect3 id="outputs" xreflabel="Output States">
- <title>Output States</title>
-
- <para><command>runtest</command> flags the outcome of each
- test as one of these cases. See <xref linkend="posix"/> for a
- discussion of how POSIX specifies the meanings of these
- cases.</para>
-
- <variablelist>
- <varlistentry>
- <term>PASS</term>
- <listitem><para>The most desirable outcome: the test was
- expected to succeed and did succeed.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term>XPASS</term>
- <listitem><para>A pleasant kind of failure: a test was expected to
- fail, but succeeded. This may indicate progress; inspect the test
- case to determine whether you should amend it to stop expecting
- failure.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term>FAIL</term>
- <listitem><para>A test failed, although it was expected to succeed.
- This may indicate a regression; inspect the test case and the
- failing software to locate the bug.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term>XFAIL</term>
- <listitem><para>A test failed, but it was expected to fail. This
- result indicates no change in a known bug. If a test fails because
- the operating system where the test runs lacks some facility required
- by the test, the outcome is <emphasis>UNSUPPORTED</emphasis>
- instead.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term>UNRESOLVED</term>
- <listitem><para>Output from a test requires manual inspection; the
- testsuite could not automatically determine the outcome. For
- example, your tests can report this outcome when a test does not
- complete as expected.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term>UNTESTED</term>
- <listitem><para>A test case is not yet complete, and in particular
- cannot yet produce a <emphasis>PASS</emphasis> or
- <emphasis>FAIL</emphasis>. You can also use this outcome in dummy
- ``tests'' that note explicitly the absence of a real test case for a
- particular property.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term>UNSUPPORTED</term>
- <listitem><para>A test depends on a conditionally available feature
- that does not exist (in the configured testing environment). For
- example, you can use this outcome to report on a test case that does
- not work on a particular target because its operating system support
- does not include a required subroutine.</para></listitem>
- </varlistentry>
- </variablelist>
-
- <para><command>runtest</command> may also display the following
- messages:</para>
-
- <variablelist>
- <varlistentry>
- <term>ERROR</term>
- <listitem><para>Indicates a major problem (detected by the test case
- itself) in running the test. This is usually an unrecoverable error,
- such as a missing file or loss of communication to the target. (POSIX
- testsuites should not emit this message; use
- <emphasis>UNSUPPORTED</emphasis>, <emphasis>UNTESTED</emphasis>, or
- <emphasis>UNRESOLVED</emphasis> instead, as
- appropriate.)</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term>WARNING</term>
- <listitem><para>Indicates a possible problem in running the
- test. Usually warnings correspond to recoverable errors, or display
- an important message about the following tests.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term>NOTE</term>
- <listitem><para>An informational message about the test
- case.</para></listitem>
- </varlistentry>
- </variablelist>
-
- </sect3>
-
- <sect3 id="invoking" xreflabel="Invoking runtest">
- <title>Invoking runtest</title>
-
- <para>This is the full set of command line options that
- <command>runtest</command> recognizes. Option names may be
- abbreviated to the shortest unique string.</para>
-
- <variablelist>
- <varlistentry>
- <term><option>-a</option>, <option>--all</option></term>
- <listitem><para>Display all test output. By default,
- <emphasis>runtest</emphasis> shows only the output of tests that
- produce unexpected results; that is, tests with status
- <emphasis>FAIL</emphasis> (unexpected failure),
- <emphasis>XPASS</emphasis> (unexpected success), or
- <emphasis>ERROR</emphasis> (a severe error in the test case
- itself). Specify <option>--all</option> to see output for tests
- with status <emphasis>PASS</emphasis> (success, as expected)
- <emphasis>XFAIL</emphasis> (failure, as expected), or
- <emphasis>WARNING</emphasis> (minor error in the test case
- itself).</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--build [triplet]</option></term>
- <listitem><para><emphasis>triplet</emphasis> is a
- configuration triplet as used
- by <command>configure</command>. This is the type of machine
- &dj; and the tools to be tested are built on. For a normal
- cross this is the same as the host, but for a Canadian
- cross, they are separate.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--host [triplet]</option></term>
- <listitem><para><symbol>triplet</symbol> is a configuration
- triplet as used by <command>configure</command>. Use this
- option to override the default string recorded by your
- configuration's choice of host. This choice does not change
- how anything is actually configured unless <option>--build</option>
- is also specified; it affects <emphasis>only</emphasis> &dj;
- procedures that compare the host string with particular
- values. The procedures
- <emphasis>ishost</emphasis>, <emphasis>istarget</emphasis>,
- <emphasis>isnative</emphasis>, and <emphasis>setup_xfail</emphasis>
- are affected by <option>--host</option>. In this usage,
- <emphasis>host</emphasis> refers to the machine that the tests are to
- be run on, which may not be the same as the
- <emphasis>build</emphasis> machine. If <option>--build</option>
- is also specified, then <option>--host</option> refers to the
- machine that the tests will be run on, not the machine &dj; is run
- on.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--host_board [name]</option></term>
- <listitem><para>The host board to use.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--target [triplet]</option></term>
- <listitem><para>Use this option to override the default
- setting (running native tests). <emphasis>triplet</emphasis>
- is a configuration triplet of the form
- <emphasis>cpu-vendor-os</emphasis> as used by
- <command>configure</command>. This option changes the
- configuration <command>runtest</command> uses for the
- default tool names, and other setup
- information.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--debug</option></term>
- <listitem><para>Turns on
- the <productname>Expect</productname> internal debugging
- output. Debugging output is displayed as part of the
- <emphasis>runtest</emphasis> output, and logged to a file called
- <filename>dbg.log</filename>. The extra debugging output does
- <emphasis>not</emphasis> appear on standard output, unless the
- verbose level is greater than 2 (for instance, to see debug output
- immediately, specify <option>--debug -v -v</option>). The
- debugging output shows all attempts at matching the test output of
- the tool with the scripted patterns describing expected output. The
- output generated with <option>--strace</option> also goes into
- <filename>dbg.log</filename>.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--help</option></term>
- <listitem><para>Prints out a short summary of the
- <emphasis>runtest</emphasis> options, then exits (even if you also
- specify other options).</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--ignore [name(s)] </option></term>
- <listitem><para>The name(s) of specific tests to
- ignore.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--objdir [path]</option></term>
- <listitem><para>Use <emphasis>path</emphasis> as the top
- directory containing any auxiliary compiled test code. The
- default is '.'. Use this option to locate pre-compiled
- test code. You can normally prepare any auxiliary files
- needed with
- <emphasis>make</emphasis>.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--outdir [path]</option></term>
- <listitem><para>Write log files in directory
- <filename>path</filename>. The default is '.', the
- directory where you start <emphasis>runtest</emphasis>. This
- option affects only the summary (<filename>.sum</filename>)
- and the detailed log files (<filename>.log</filename>). The
- &dj; debug log <filename>dbg.log</filename> always appears
- (when requested) in the local directory.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--log_dialog</option></term>
- <listitem><para>Emit Expect output to stdout. The
- <productname>Expect</productname> output is usually only written
- to <filename>tool.log</filename>. By enabling this option, it is
- also printed to the stdout of the <emphasis>runtest</emphasis>
- invocation.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--reboot [name]</option></term>
- <listitem><para>Reboot the target board when
- <command>runtest</command> starts. When running tests on a
- separate target board, it is safer to reboot the target to
- be certain of its state. However, when developing test
- scripts, rebooting can take a lot of time.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--srcdir [path]</option></term>
- <listitem><para>Use <filename>path</filename> as the top directory
- for test scripts to run. <emphasis>runtest</emphasis> looks in this
- directory for any subdirectory whose name begins with the toolname
- (specified with <option>--tool</option>). For instance, with
- <option>--tool gdb</option>, <emphasis>runtest</emphasis> uses
- tests in subdirectories <filename>gdb.*</filename> (with the usual
- shell-like filename expansion). If you do not use
- <option>--srcdir</option>, <emphasis>runtest</emphasis> looks for
- test directories under the current working
- directory.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--strace [number]</option></term>
- <listitem><para>Turn on internal tracing for
- <emphasis>expect</emphasis>, to <emphasis>number</emphasis> levels deep. By adjusting the
- level, you can control the extent to which your output expands
- multi-level Tcl statements. This allows you to ignore some levels of
- <emphasis>case</emphasis> or <emphasis>if</emphasis> statements.
- Each procedure call or control structure counts as one ``level''. The
- output is recorded in the same file, <filename>dbg.log</filename>,
- used for output from <option>--debug</option>.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--target_board [name(s)]</option></term>
- <listitem><para>The list of target boards to run tests
- on.</para></listitem>
- </varlistentry>
-
- <varlistentry id="tool-opt">
- <term><option>--tool [name(s)]</option></term>
- <listitem><para>Specifies which testsuite to run, and what
- initialization module to use. <option>--tool</option> is used
- <emphasis>only</emphasis> for these two purposes. It is
- <emphasis>not</emphasis> used to name the executable program to
- test. Executable tool names (and paths) are recorded in
- <filename>site.exp</filename> and you can override them by specifying
- Tcl variables on the command line.</para>
-
- <para>For example, including "<option>--tool</option> gcc" on the
- <emphasis>runtest</emphasis> command line runs tests from all test
- subdirectories whose names match <filename>gcc.*</filename>, and uses
- one of the initialization modules named
- <filename>config/*-gcc.exp</filename>. To specify the name of the
- compiler (perhaps as an alternative path to what
- <emphasis>runtest</emphasis> would use by default), use
- <emphasis>GCC=binname</emphasis> on the <emphasis>runtest</emphasis>
- command line.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--tool_exec [name]</option></term>
- <listitem><para>The path to the tool executable to
- test.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--tool_opts [options]</option></term>
- <listitem><para>A list of additional options to pass to the
- tool.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>-v</option>, <option>--verbose</option></term>
- <listitem><para>Turns on more output. Repeating this option increases
- the amount of output displayed. Level one (<emphasis>-v</emphasis>)
- is simply test output. Level two (<emphasis>-v -v</emphasis>) shows
- messages on options, configuration, and process control. Verbose
- messages appear in the detailed (<filename>*.log</filename>) log
- file, but not in the summary (<filename>*.sum</filename>) log
- file.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>-V</option>, <option>--version</option></term>
- <listitem><para>Prints out the version numbers of &dj;,
- Expect, and Tcl.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><option>--D0</option>, <option>--D1</option></term>
- <listitem><para>Start the internal Tcl debugger. The Tcl debugger
- supports breakpoints, single stepping, and other common debugging
- activities. See the document "Debugger for Tcl Applications" by Don
- Libes. (Distributed in PostScript form with
- <emphasis>expect</emphasis> as the file
- <filename>expect/tcl-debug.ps</filename>.) If you specify
- <emphasis>-D1</emphasis>, the <emphasis>expect</emphasis> shell stops
- at a breakpoint as soon as &dj; invokes it. If you specify
- <emphasis>-D0</emphasis>, &dj; starts as usual, but you can enter
- the debugger by sending an interrupt (e.g. by typing
- <keycombo><keycap>Control</keycap><keycap>c</keycap></keycombo>).
- </para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><filename>testfile</filename>.exp[=arg(s)]</term>
- <listitem><para>Specify the names of testsuites to run. By default,
- <emphasis>runtest</emphasis> runs all tests for the tool, but you can
- restrict it to particular testsuites by giving the names of the
- <emphasis>.exp expect</emphasis> scripts that control
- them. <emphasis>testfile</emphasis>.exp may not include path
- information; use plain filenames.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><filename>testfile</filename>.exp="testfile1 ..."</term>
- <listitem><para>Specify a subset of tests in a suite to run. For
- compiler or assembler tests, which often use a single
- <emphasis>.exp</emphasis> script covering many different source
- files, this option allows you to further restrict the tests by
- listing particular source files to compile. Some tools even support
- wildcards here. The wildcards supported depend upon the tool, but
- typically they are <emphasis>?</emphasis>, <emphasis>*</emphasis>,
- and <emphasis>[chars]</emphasis>.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term><symbol>tclvar</symbol>=value</term>
- <listitem><para>You can define Tcl variables for use by your test
- scripts in the same style used with <emphasis>make</emphasis> for
- environment variables. For example, <emphasis>runtest
- GDB=gdb.old</emphasis> defines a variable called
- <command>GDB</command>; when your scripts refer to
- <symbol>$GDB</symbol> in this run, they use the value
- <emphasis>gdb.old</emphasis>.</para>
-
- <para>The default Tcl variables used for most tools are defined in
- the main &dj; <emphasis>Makefile</emphasis>; their values are
- captured in the <filename>site.exp</filename> file.</para></listitem>
- </varlistentry>
- </variablelist>
- </sect3>
-
- <sect3 id="common" xreflabel="Common Operations">
- <title>Common Options</title>
-
- <para>Typically, you don't need to use any command line
- options. The <option>--tool</option> option is only required
- when there is more than one testsuite in the same
- directory. The default options are in the
- local <filename>site.exp</filename> file, created
- by <command>make site.exp</command>.</para>
-
- <para>For example, if the directory <filename>gdb/testsuite</filename>
- contains a collection of &dj; tests for GDB, you can run them like
- this:</para>
-
- <screen>
- $ cd gdb/testsuite
- $ runtest --tool gdb
- </screen>
-
- <para>The test output follows, then ends with:</para>
-
- <screen>
- === gdb Summary ===
-
- # of expected passes 508
- # of expected failures 103
- /usr/latest/bin/gdb version 4.14.4 -nx
- </screen>
-
- <para>You can use the option <option>--srcdir</option> to point to
- some other directory containing a collection of tests:</para>
-
- <screen>
- $ runtest --srcdir /devo/gdb/testsuite
- </screen>
-
- <para>By default, <command>runtest</command> prints only the
- names of the tests it runs, output from any tests that have unexpected
- results, and a summary showing how many tests passed and how many
- failed. To display output from all tests (whether or not they behave
- as expected), use the <option>--all</option> option. For more
- verbose output about processes being run, communication, and so on, use
- <option>--verbose</option>. To see even more output, use multiple
- <option>--verbose</option> options.
- See the <option>--help</option> output for a more detailed
- explanation of each <command>runtest</command> option.</para>
-
- </sect3>
- </sect2>
-
- <sect2 id="outputfiles" xreflabel="Output Files">
- <title>Output files</title>
-
- <para>&dj; always writes two kinds of output files. Summary
- output is written to the <filename>.sum</filename> file, and
- detailed output is written to the <filename>.log</filename> file.
- The tool name determines the prefix for these files. For example,
- after running with
- <option>--tool gdb</option>, the output files will be called
- <filename>gdb.sum</filename> and
- <filename>gdb.log</filename>. For troubleshooting, a debug log
- file that logs the operation
- of <productname>Expect</productname> is available. Each of
- these will be described in turn.</para>
-
- <sect3 id="sum" xreflabel="Summary log file">
- <title>Summary log file</title>
-
- <para>&dj; always produces a summary (<filename>.sum</filename>)
- output file. This summary lists the names of all test files run;
- for each test file, one line of output from
- each <command>pass</command> command (showing status
- <emphasis>PASS</emphasis> or <emphasis>XPASS</emphasis>) or
- <command>fail</command> command (status
- <emphasis>FAIL</emphasis> or <emphasis>XFAIL</emphasis>),
- trailing summary statistics that count passing and failing tests
- (expected and unexpected), the full pathname of the tool tested,
- and the version number of the tool. All possible outcomes, and
- all errors, are always reflected in the summary output file,
- regardless of whether or not you specify
- <option>--all</option>.</para>
-
- <para>If any of your tests use the procedures
- <command>unresolved</command>, <command>unsupported</command>,
- or <command>untested</command>, the summary output also
- tabulates the corresponding outcomes.</para>
-
- <para>For example, after running <command>runtest --tool
- binutils</command> a summary log file will be written to
- <filename>binutils.sum</filename>. Normally, &dj; writes this
- file in your current working directory. Use the
- <option>--outdir</option> option to select a different output
- directory.</para>
-
- <example>
- <title>Sample summary log</title>
-
- <screen>
- Test Run By bje on Sat Nov 14 21:04:30 AEDT 2015
-
- === gdb tests ===
-
- Running ./gdb.t00/echo.exp ...
- PASS: Echo test
- Running ./gdb.all/help.exp ...
- PASS: help add-symbol-file
- PASS: help aliases
- PASS: help breakpoint "bre" abbreviation
- FAIL: help run "r" abbreviation
- Running ./gdb.t10/crossload.exp ...
- PASS: m68k-elf (elf-big) explicit format; loaded
- XFAIL: mips-ecoff (ecoff-bigmips) "ptype v_signed_char" signed C types
-
- === gdb Summary ===
-
- # of expected passes 5
- # of expected failures 1
- # of unexpected failures 1
- /usr/latest/bin/gdb version 4.6.5 -q
- </screen>
- </example>
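Because each outcome appears at the start of a line in the summary file, simple text tools can tally results. A sketch (the `sample.sum` contents here are fabricated for illustration; with a real run you would point grep at, say, `gdb.sum`):

```shell
# Create a small sample summary file inline, then count outcomes.
cat > sample.sum <<'EOF'
PASS: Echo test
PASS: help aliases
FAIL: help run "r" abbreviation
XFAIL: mips-ecoff (ecoff-bigmips) "ptype v_signed_char" signed C types
EOF

grep -c '^PASS:' sample.sum    # expected passes
grep -c '^FAIL:' sample.sum    # unexpected failures
```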
-
- </sect3>
-
- <sect3 id="log" xreflabel="Detailed log file">
- <title>Detailed log file</title>
-
- <para>&dj; also saves a detailed log file
- (<filename>.log</filename>), showing any output generated by
- test cases as well as the summary output. For example, after
- running
- <command>runtest --tool binutils</command>, a detailed log file
- will be written to <filename>binutils.log</filename>. Normally,
- &dj; writes this file in your current working directory. Use the
- <option>--outdir</option> option to select a different output
- directory.</para>
-
- <example>
- <title>Sample detailed log for <productname>g++</productname> tests</title>
-
- <screen>
- Test Run By bje on Sat Nov 14 21:07:23 AEDT 2015
-
- === g++ tests ===
-
- Running ./g++.other/t01-1.exp ...
- PASS: operate delete
-
- Running ./g++.other/t01-2.exp ...
- FAIL: i960 bug EOF
- p0000646.C: In function `int warn_return_1 ()':
- p0000646.C:109: warning: control reaches end of non-void function
- p0000646.C: In function `int warn_return_arg (int)':
- p0000646.C:117: warning: control reaches end of non-void function
- p0000646.C: In function `int warn_return_sum (int, int)':
- p0000646.C:125: warning: control reaches end of non-void function
- p0000646.C: In function `struct foo warn_return_foo ()':
- p0000646.C:132: warning: control reaches end of non-void function
- Running ./g++.other/t01-4.exp ...
- FAIL: abort
- 900403_04.C:8: zero width for bit-field `foo'
- Running ./g++.other/t01-3.exp ...
- FAIL: segment violation
- 900519_12.C:9: parse error before `;'
- 900519_12.C:12: Segmentation violation
- /usr/latest/bin/gcc: Internal compiler error: program cc1plus got fatal signal
-
- === g++ Summary ===
-
- # of expected passes 1
- # of expected failures 3
- /usr/latest/bin/g++ version cygnus-2.0.1
- </screen>
- </example>
-
- </sect3>
-
- <sect3 id="debugfile" xreflabel="Debug log file">
- <title>Debug log file</title>
-
- <para>The <command>runtest</command>
- option <option>--debug</option> creates a file showing the
- output from
- <productname>Expect</productname> in debugging mode. The
- <filename>dbg.log</filename> file is created in the directory
- where you start <command>runtest</command>. The log file shows
- the string sent to the tool under test by
- each <command>send</command> command and the pattern it compares
- with the tool output by each <command>expect</command>
- command.</para>
-
- <para>The log messages begin with a message of the form:
-
- <screen>
- expect: does {<symbol>tool output</symbol>} (spawn_id <symbol>n</symbol>)
- match pattern {<emphasis>expected pattern</emphasis>}?
- </screen>
- </para>
-
- <para>For every unsuccessful match,
- <productname>Expect</productname> issues a
- <emphasis>no</emphasis> after this message. If other patterns
- are specified for the same <productname>Expect</productname>
- command, they are reflected also, but without the first part of
- the message (<emphasis>expect... match
- pattern</emphasis>).</para>
-
- <para>When <productname>Expect</productname> finds a match, the
- log for the successful match ends with <emphasis>yes</emphasis>,
- followed by a record of the <productname>Expect</productname>
- variables set to describe a successful match.</para>
-
- <example>
- <title>Debug log excerpt for a
- <productname>GDB</productname> test:</title>
-
- <screen>
- send: sent {break gdbme.c:34\n} to spawn id 6
- expect: does {} (spawn_id 6) match pattern {Breakpoint.*at.* file
- gdbme.c, line 34.*\(gdb\) $}? no
- {.*\(gdb\) $}? no
- expect: does {} (spawn_id 0) match pattern {return} ? no
- {\(y or n\) }? no
- {buffer_full}? no
- {virtual}? no
- {memory}? no
- {exhausted}? no
- {Undefined}? no
- {command}? no
- break gdbme.c:34
- Breakpoint 8 at 0x23d8: file gdbme.c, line 34.
- (gdb) expect: does {break gdbme.c:34\r\nBreakpoint 8 at 0x23d8:
- file gdbme.c, line 34.\r\n(gdb) } (spawn_id 6) match pattern
- {Breakpoint.*at.* file gdbme.c, line 34.*\(gdb\) $}? yes
- expect: set expect_out(0,start) {18}
- expect: set expect_out(0,end) {71}
- expect: set expect_out(0,string) {Breakpoint 8 at 0x23d8: file
- gdbme.c, line 34.\r\n(gdb) }
- expect: set expect_out(spawn_id) {6}
- expect: set expect_out(buffer) {break gdbme.c:34\r\nBreakpoint 8
- at 0x23d8: file gdbme.c, line 34.\r\n(gdb) }
- PASS: 70 0 breakpoint line number in file
- </screen>
- </example>
-
- <para>This example exhibits three properties of
- <productname>Expect</productname> and
- <productname>&dj;</productname> that might be surprising at
- first glance:</para>
-
- <itemizedlist mark="bullet">
- <listitem><para>Empty output for the first attempted match. The
- first set of attempted matches shown ran against the output
- <emphasis>{}</emphasis> --- that is, no
- output. <productname>Expect</productname> begins
- attempting to match the patterns supplied immediately; often,
- the first pass is against incomplete output (or, as in this
- case, against no output at all).</para></listitem>
-
- <listitem><para>Interspersed tool output. The beginning of
- the log entry for the second attempted match may be hard to
- spot: this is because the prompt <emphasis>{(gdb) }</emphasis>
- appears on the same line, just before the
- <emphasis>expect:</emphasis> that marks the beginning of the
- log entry.</para></listitem>
-
- <listitem><para>Fail-safe patterns. Many of the patterns
- tested are fail-safe patterns provided by
- <productname>GDB</productname> testing utilities, to reduce
- possible indeterminacy. It is useful to anticipate potential
- variations caused by extreme system conditions
- (<productname>GDB</productname> might issue the message
- <emphasis>virtual memory exhausted</emphasis> in rare
- circumstances), or by changes in the tested program
- (<emphasis>Undefined command</emphasis> is the likeliest
- outcome if the name of a tested command changes).</para>
-
- <para>The pattern <emphasis>{return}</emphasis> is a
- particularly interesting fail-safe to notice; it checks for an
- unexpected <keycap>RET</keycap> prompt. This may happen,
- for example, if the tested tool can filter output through a
- pager.</para>
-
- <para>These fail-safe patterns (like the debugging log itself)
- are primarily useful while developing test scripts. Use the
- <command>error</command> procedure to make the actions for
- fail-safe patterns produce messages starting with
- <emphasis>ERROR</emphasis> on standard output, and in the
- detailed log file.</para></listitem>
- </itemizedlist>
- </sect3>
- </sect2>
- </sect1>
-
- <sect1 id="Customizing" xreflabel="Customizing DejaGnu">
- <title>Customizing &dj;</title>
-
- <para>The site configuration file, <filename>site.exp</filename>,
- captures configuration-dependent values and propagates them to the
- &dj; test environment using Tcl variables. This ties the
- &dj; test scripts into the <command>configure</command> and
- <command>make</command> programs. If this file is set up correctly,
- it is possible to execute a testsuite merely by typing
- <command>runtest</command>.</para>
-
- <para>&dj; supports two <filename>site.exp</filename>
- files. The multiple instances of <filename>site.exp</filename> are
- loaded in a fixed order. The first file loaded is the optional
- global <filename>site.exp</filename> file, as pointed to by the
- <symbol>DEJAGNU</symbol> environment variable, followed by the
- local <filename>site.exp</filename> file, so that local values
- override global ones.</para>
-
- <para>There is an optional <emphasis>master</emphasis>
- <filename>site.exp</filename>, capturing configuration values that
- apply to &dj; across the board, in each configuration-specific
- subdirectory of the &dj; library directory.
- <command>runtest</command> loads these values first. The master
- <filename>site.exp</filename> contains the default values for all
- targets and hosts supported by &dj;. This master file is
- identified by setting the environment variable
- <symbol>DEJAGNU</symbol> to the name of the file. This is also
- referred to as the ``global'' config file.</para>
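For example, selecting a global config file is just a matter of exporting the variable before running the tests (the path shown here is hypothetical and depends on where DejaGnu is installed):

```shell
# Point runtest at a global ("master") site.exp; the path is
# hypothetical.
export DEJAGNU=/usr/local/share/dejagnu/site.exp
```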
-
- <para>Any directory containing a configured testsuite also has a
- local <filename>site.exp</filename>, capturing configuration values
- specific to the tool under test. Since <command>runtest</command>
- loads these values last, the individual test configuration can
- either rely on and use, or override, any of the global values from
- the global <filename>site.exp</filename> file.</para>
-
- <para>You can usually generate or update the testsuite's local
- <filename>site.exp</filename> by typing <command>make
- site.exp</command> in the testsuite directory, after the test
- suite is configured.</para>
-
- <para>You can also have a file in your home directory called
- <filename>.dejagnurc</filename>. This gets loaded after the other
- config files. Usually it is used for personal preferences, such as
- setting the <symbol>all_flag</symbol> variable so that all output
- is printed, or setting your own verbosity levels. This file is
- usually restricted to setting command line options.</para>
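A hypothetical `~/.dejagnurc` might look like this (`all_flag` and `verbose` are standard DejaGnu global variables; the values are personal preference):

```tcl
# Personal defaults, loaded after the other config files.
set all_flag 1     ;# print all test output, as with --all
set verbose 2      ;# as with -v -v
```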
-
- <para>You can further override the default values in a
- user-editable section of any <filename>site.exp</filename>, or by
- setting variables on the <command>runtest</command> command
- line.</para>
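-
- <para>For instance, assuming a compiler testsuite whose local
- <filename>site.exp</filename> sets <symbol>CFLAGS</symbol>, a
- value can be overridden for a single run by passing a
- <emphasis>variable=value</emphasis> pair on the
- <command>runtest</command> command line (the variable name here
- is only illustrative):</para>
-
- <example>
- <title>Overriding a config variable on the command line</title>
-
- <screen>
- runtest --tool gcc CFLAGS="-O2"
- </screen>
- </example>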
-
- <sect2 id="local" xreflabel="Local Config File">
- <title>Local Config File</title>
-
- <para>It is usually more convenient to keep these <emphasis>manual
- overrides</emphasis> in the <filename>site.exp</filename>
- local to each test directory, rather than in the global
- <filename>site.exp</filename> in the installed &dj;
- library. This file is mostly for supplying tool-specific
- information that is required by the testsuite.</para>
-
- <para>All local <filename>site.exp</filename> files have
- two sections, separated by comment text. The first section is
- the part that is generated by <command>make</command>. It is
- essentially a collection of Tcl variable definitions based on
- <filename>Makefile</filename> environment variables. Since they
- are generated by <command>make</command>, they contain the
- values as specified by <command>configure</command>. (You can
- also customize these values by using the <option>--site</option>
- option to <command>configure</command>.) In particular, this
- section contains the <filename>Makefile</filename>
- variables for host and target configuration data. Do not edit
- this first section; if you do, your changes are replaced next
- time you run <command>make</command>.</para>
-
- <example>
- <title>The first section starts with</title>
-
- <programlisting>
- ## these variables are automatically generated by make ##
- # Do not edit here. If you wish to override these values
- # add them to the last section
- </programlisting>
- </example>
-
- <para>In the second section, you can override any default values
- (locally to &dj;) for all the variables. The second section
- can also contain your preferred defaults for all the command
- line options to <command>runtest</command>. This allows you to
- easily customize <command>runtest</command> for your preferences
- in each configured test-suite tree, so that you need not type
- options repeatedly on the command line. (The second section may
- also be empty, if you do not wish to override any defaults.)</para>
-
- <example>
- <title>The first section ends with this line</title>
-
- <programlisting>
- ## All variables above are generated by configure. Do Not Edit ##
- </programlisting>
- </example>
-
- <para>You can make any changes under this line. If you wish to
- redefine a variable in the top section, then just put a
- duplicate value in this second section. Usually the values
- defined in this config file are related to the configuration of
- the test run. This is the ideal place to set the variables
- <symbol>host_triplet</symbol>, <symbol>build_triplet</symbol>,
- and <symbol>target_triplet</symbol>. All other variables are
- tool dependent; e.g., when testing a compiler, the value for
- <symbol>CC</symbol> might be set to a freshly built binary, as
- opposed to one in the user's path.</para>
-
- <para>Here's an example local <filename>site.exp</filename> file, as used for
- <productname>GCC/G++</productname> testing.</para>
-
- <example>
- <title>Local Config File</title>
-
- <programlisting>
- ## these variables are automatically generated by make ##
- # Do not edit here. If you wish to override these values
- # add them to the last section
- set rootme "/build/devo-builds/i586-pc-linux-gnulibc1/gcc"
- set host_triplet i586-pc-linux-gnulibc1
- set build_triplet i586-pc-linux-gnulibc1
- set target_triplet i586-pc-linux-gnulibc1
- set target_alias i586-pc-linux-gnulibc1
- set CFLAGS ""
- set CXXFLAGS "-isystem /build/devo-builds/i586-pc-linux-gnulibc1/gcc/../libio -isystem $srcdir/../libg++/src -isystem $srcdir/../libio -isystem $srcdir/../libstdc++ -isystem $srcdir/../libstdc++/stl -L/build/devo-builds/i586-pc-linux-gnulibc1/gcc/../libg++ -L/build/devo-builds/i586-pc-linux-gnulibc1/gcc/../libstdc++"
- append LDFLAGS " -L/build/devo-builds/i586-pc-linux-gnulibc1/gcc/../ld"
- set tmpdir /build/devo-builds/i586-pc-linux-gnulibc1/gcc/testsuite
- set srcdir "${srcdir}/testsuite"
- ## All variables above are generated by configure. Do Not Edit ##
-
- </programlisting>
- </example>
-
- <para>This file defines the required fields for a local config
- file, namely the three config triplets, and the srcdir. It also
- defines several other Tcl variables that are used exclusively by
- the GCC testsuite. For most test cases, the CXXFLAGS and LDFLAGS
- are supplied by &dj; itself for cross testing, but to test a
- compiler, GCC needs to manipulate these itself.</para>
-
- <para>The local <filename>site.exp</filename> may also set Tcl
- variables such as <symbol>test_timeout</symbol> which can control
- the amount of time (in seconds) to wait for a remote test to
- complete. If not specified, <symbol>test_timeout</symbol> defaults
- to 300 seconds.</para>
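-
- <para>As a sketch, such an override would go in the user-editable
- second section of the local <filename>site.exp</filename> (the
- value 600 is only an example):</para>
-
- <example>
- <title>Raising the remote test timeout</title>
-
- <programlisting>
- # Allow slow remote targets more time to complete each test.
- set test_timeout 600
- </programlisting>
- </example>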
-
- </sect2>
- <sect2 id="global" xreflabel="Global Config File">
- <title>Global Config File</title>
-
- <para>The master config file is where all the target-specific
- config variables for a whole site get set. It is intended for a
- centralized testing lab where a target must be shared among
- multiple developers. There are settings for both
- remote targets and remote hosts. Here's an example of a master
- config file (also called the global config file) for a
- <emphasis>Canadian cross</emphasis>. A Canadian cross is when
- you build and test a cross compiler on a machine other than the
- one it's to be hosted on.</para>
-
- <para>Here we have the config settings for our California
- office. Note that all config values are site dependent. Here we
- have two sets of values that we use for testing m68k-aout cross
- compilers. As each of these target boards uses a different
- debugging protocol, we test on both of them in sequence.</para>
-
- <example>
- <title>Global Config file</title>
-
- <programlisting>
-
- # Make sure we look in the right place for the board description files.
- if ![info exists boards_dir] {
- set boards_dir {}
- }
- lappend boards_dir "/nfs/cygint/s1/cygnus/dejagnu/boards"
-
- verbose "Global Config File: target_triplet is $target_triplet" 2
- global target_list
-
- case "$target_triplet" in {
- { "native" } {
- set target_list "unix"
- }
- { "sparc64-*elf" } {
- set target_list "sparc64-sim"
- }
- { "mips-*elf" } {
- set target_list "mips-sim wilma barney"
- }
- { "mips-lsi-elf" } {
- set target_list "mips-lsi-sim{,soft-float,el}"
- }
- { "sh-*hms" } {
- set target_list { "sh-hms-sim" "bloozy" }
- }
- }
- </programlisting>
- </example>
-
- <para>In this case, we have support for several cross compilers
- that all run on this host. For testing on operating systems that
- don't support Expect, &dj; can be run on the local build
- machine, and it can connect to the remote host and run all the
- tests for this cross compiler on that host. All the remote OS
- requires is a working Telnet server.</para>
-
- <para>As you can see, all one does is set the variable
- <symbol>target_list</symbol> to the list of targets and options to
- test. The simple settings, like the one for
- <emphasis>sparc64-elf</emphasis>, only require setting the name of
- the single board config file. The <emphasis>mips-elf</emphasis>
- target is more complicated. Here it sets the list to three target
- boards. One is the default mips target, and both
- <emphasis>wilma</emphasis> and <emphasis>barney</emphasis> are
- symbolic names for other mips boards. Symbolic names are covered
- in the <xref linkend="addboard"/> chapter. The more complicated
- example is the one for <emphasis>mips-lsi-elf</emphasis>. This one
- runs the tests with multiple iterations using all possible
- combinations of the <option>--soft-float</option> and the
- <option>--el</option> (little endian) option. Needless to say,
- this last feature is mostly compiler specific.</para>
-
- </sect2>
-
- <sect2 id="boardconfig" xreflabel="Board Config File">
- <title>Board Configuration File</title>
-
- <para>The board config file is where board specific config data
- is stored. A board config file contains all the higher-level
- configuration settings. There is a rough inheritance scheme, where it is
- possible to base a new board description file on an existing one. There
- are also collections of custom procedures for common environments. For
- more information on adding a new board config file, go to the <xref
- linkend="addboard"/> chapter. </para>
-
- <para>An example board config file for a GNU simulator is as
- follows. <function>set_board_info</function> is a procedure that sets the
- field name to the specified value. The procedures in square brackets
- <emphasis>[]</emphasis> are <emphasis>helper procedures</emphasis>. These
- are used to find parts of a tool chain required to build an executable
- image that may reside in various locations. This is mostly useful
- when the startup code, the standard C libraries, or the tool chain itself
- is part of your build tree.</para>
-
- <example>
- <title>Board Configuration File</title>
-
- <programlisting>
- # This is a list of toolchains that are supported on this board.
- set_board_info target_install {sparc64-elf}
-
- # Load the generic configuration for this board. This will define any
- # routines needed by the tool to communicate with the board.
- load_generic_config "sim"
-
- # We need this for find_gcc and *_include_flags/*_link_flags.
- load_base_board_description "basic-sim"
-
- # Use long64 by default.
- process_multilib_options "long64"
-
- setup_sim sparc64
-
- # We only support newlib on this target. We assume that all multilib
- # options have been specified before we get here.
- set_board_info compiler "[find_gcc]"
- set_board_info cflags "[libgloss_include_flags] [newlib_include_flags]"
- set_board_info ldflags "[libgloss_link_flags] [newlib_link_flags]"
- # No linker script.
- set_board_info ldscript "";
-
- # Used by a few gcc.c-torture testcases to delimit how large the
- # stack can be.
- set_board_info gcc,stack_size 16384
- # The simulator doesn't return exit statuses, so we need to indicate
- # that the standard GCC status wrapper should be used with this target.
- set_board_info needs_status_wrapper 1
- # We can't pass arguments to programs.
- set_board_info noargs 1
- </programlisting>
- </example>
-
- <para>There are five helper procedures used in this example. The first
- one, <function>find_gcc</function>, looks for a copy of the GNU compiler in
- your build tree, or else uses the one in your path. It also returns
- the proper transformed name for a cross compiler if your whole build tree
- is configured for one. The next helper procedures are
- <function>libgloss_include_flags</function> &amp;
- <function>libgloss_link_flags</function>. These return the proper flags to
- compile and link an executable image using <xref
- linkend="libgloss"/>, the GNU BSP (Board Support Package). The final
- procedures are <function>newlib_include_flags</function> &amp;
- <function>newlib_link_flags</function>. These find the Newlib C
- library, which is a reentrant standard C library for embedded systems
- composed of non-GPL'd code.</para>
-
- </sect2>
-
- <sect2 id="releng" xreflabel="Remote Host Testing">
- <title>Remote Host Testing</title>
-
- <note><para>Thanks to DJ Delorie for the original paper that
- this section is based on.</para></note>
-
- <para>&dj; also supports running the tests on a remote
- host. To set this up, the remote host needs an FTP server and a
- telnet server. Currently the foreign operating systems supported as
- remote hosts are VxWorks, VRTX, DOS/Windows 3.1, MacOS and Windows.</para>
-
- <para>The recommended source for a Windows-based FTP
- server is to get IIS (either IIS 1 or Personal Web Server) from
- <ulink
- url="http://www.microsoft.com">http://www.microsoft.com</ulink>.
- When you install it, make sure you install the FTP server - it's
- not selected by default. Go into the IIS manager and change the
- FTP server so that it does not allow anonymous FTP. Set the home
- directory to the root directory (i.e. c:\) of a suitable
- drive. Allow writing via FTP.</para>
-
- <para>The installation will create an account like IUSR_FOOBAR, where
- FOOBAR is the name of your machine. Go into the user editor and give that
- account a password that you don't mind hanging around in the
- clear (i.e. not the same as your admin or personal
- passwords). Also, add it to all the various permission groups.</para>
-
- <para>You'll also need a telnet server. For Windows, go
- to the <ulink url="http://ataman.com">Ataman</ulink> web site,
- pick up the Ataman Remote Logon Services for Windows, and
- install it; the evaluation period is enough to get started. Add
- IUSR_FOOBAR to the list of allowed users, set the HOME directory
- to be the same as the FTP default directory. Change the Mode
- prompt to simple.</para>
-
- <para>Ok, now you need to pick a directory name to do all the
- testing in. For the sake of this example, we'll call it piggy
- (i.e. c:\piggy). Create this directory.</para>
-
- <para>You'll need a unix machine. Create a directory for the
- scripts you'll need. For this example, we'll use
- /usr/local/swamp/testing. You'll need to have a source tree
- somewhere, say /usr/src/devo. Now, copy some files from
- releng's area in SV to your machine:</para>
-
- <example>
- <title>Remote host setup</title>
-
- <screen>
- cd /usr/local/swamp/testing
- mkdir boards
- scp darkstar.welcomehome.org:/dejagnu/cst/bin/MkTestDir .
- scp darkstar.welcomehome.org:/dejagnu/site.exp .
- scp darkstar.welcomehome.org:/dejagnu/boards/useless98r2.exp boards/foobar.exp
- export DEJAGNU=/usr/local/swamp/testing/site.exp
-
- </screen>
- </example>
-
- <para>You must edit the boards/foobar.exp file to reflect your
- machine; change the hostname (foobar.com), username
- (iusr_foobar), password, and ftp_directory (c:/piggy) to match
- what you selected.</para>
-
- <para>Edit the global <filename>site.exp</filename> to reflect your
- boards directory:</para>
-
- <example>
- <title>Add The Board Directory</title>
-
- <programlisting>
- lappend boards_dir "/usr/local/swamp/testing/boards"
- </programlisting>
- </example>
-
- <para>Now run MkTestDir, which is in the contrib
- directory. The first parameter is the toolchain prefix, the
- second is the location of your devo tree. If you are testing a
- cross compiler (ex: you have sh-hms-gcc.exe in your PATH on
- the PC), do something like this:</para>
-
- <example>
- <title>Setup Cross Remote Testing</title>
-
- <programlisting>
- ./MkTestDir sh-hms /usr/dejagnu/src/devo
- </programlisting>
- </example>
-
- <para>If you are testing a native PC compiler (ex: you have
- gcc.exe in your PATH on the PC), do this:</para>
-
- <example>
- <title>Setup Native Remote Testing</title>
-
- <programlisting>
- ./MkTestDir '' /usr/dejagnu/src/devo
- </programlisting>
- </example>
-
- <para>To test the setup, <command>ftp</command> to your PC
- using the username (iusr_foobar) and password you selected. CD
- to the test directory. Upload a file to the PC. Now telnet to
- your PC using the same username and password. CD to the test
- directory. Make sure the file is there. Type "set" and/or "gcc
- -v" (or sh-hms-gcc -v) and make sure the default PATH contains
- the installation you want to test.</para>
-
- <example>
- <title>Run Test Remotely</title>
-
- <programlisting>
- cd /usr/local/swamp/testing
- make -k -w check RUNTESTFLAGS="--host_board foobar --target_board foobar -v -v" > check.out 2>&amp;1
- </programlisting>
- </example>
-
- <para>To run a specific test, use a command like this (for
- this example, you'd run this from the gcc directory that
- MkTestDir created):</para>
-
- <example>
- <title>Run a Test Remotely</title>
-
- <programlisting>
- make check RUNTESTFLAGS="--host_board sloth --target_board sloth -v compile.exp=921202-1.c"
- </programlisting>
- </example>
-
- <para>Note: if you are testing a cross-compiler, put in the
- correct target board. You'll also have to download more .exp
- files and modify them for your local configuration. The -v's
- are optional.</para>
-
- </sect2>
-
- <sect2 id="configfile" xreflabel="Config File Values">
- <title>Config File Values</title>
-
- <para>&dj; uses a named array in Tcl to hold all the info for
- each machine. In the case of a Canadian cross, this means host
- information as well as target information. The named array is
- called <symbol>target_info</symbol>, and it has two indices. The
- following fields are part of the array.</para>
-
- <sect3 id="optiondefs" xreflabel="Option Variables">
- <title>Command Line Option Variables</title>
-
- <para>In the user-editable second section of the <xref
- linkend="personal"/> you can not only override the configuration
- variables captured in the first section, but also specify
- default values for all of the <command>runtest</command>
- command line options. Except for <option>--debug</option>,
- <option>--help</option>, and <option>--version</option>, each
- command line option has an associated Tcl variable. Use the
- Tcl <command>set</command> command to specify a new default
- value (as for the configuration variables). The following
- table describes the correspondence between command line
- options and variables you can set in
- <filename>site.exp</filename>. See <xref linkend="invoking"/>
- for explanations of the command line options.</para>
-
- <table frame="all" rowsep="0" colsep="0">
- <title>Tcl Variables For Command Line Options</title>
-
- <tgroup cols="3" align="char" rowsep="1" colsep="0">
- <thead><row>
- <entry>runtest option</entry>
- <entry>Tcl variable</entry>
- <entry>description</entry>
- </row></thead>
- <tbody>
-
- <row>
- <entry>--all</entry>
- <entry>all_flag</entry>
- <entry>display all test results if set</entry>
- </row>
-
- <row>
- <entry>--baud</entry>
- <entry>baud</entry>
- <entry>set the default baud rate to something other than
- 9600.</entry>
- </row>
-
- <row>
- <entry>--connect</entry>
- <entry>connectmode</entry>
- <entry><command>rlogin</command>,
- <command>telnet</command>, <command>rsh</command>,
- <command>kermit</command>, <command>tip</command>, or
- <command>mondfe</command></entry>
- </row>
-
- <row>
- <entry>--outdir</entry>
- <entry>outdir</entry>
- <entry>directory for <filename>tool.sum</filename> and
- <filename>tool.log</filename></entry>
- </row>
-
- <row>
- <entry>--objdir</entry>
- <entry>objdir</entry>
- <entry>directory for pre-compiled binaries</entry>
- </row>
-
- <row>
- <entry>--reboot</entry>
- <entry>reboot</entry>
- <entry>reboot the target if set to
- <emphasis>"1"</emphasis>; do not reboot if set to
- <emphasis>"0"</emphasis> (the default).</entry>
- </row>
-
- <row>
- <entry>--srcdir</entry>
- <entry>srcdir</entry>
- <entry>directory of test subdirectories</entry>
- </row>
-
- <row>
- <entry>--strace</entry>
- <entry>tracelevel</entry>
- <entry>a number: Tcl trace depth</entry>
- </row>
-
- <row>
- <entry>--tool</entry>
- <entry>tool</entry>
- <entry>name of tool to test; identifies init, test subdir</entry>
- </row>
-
- <row>
- <entry>--verbose</entry>
- <entry>verbose</entry>
- <entry>verbosity level. As an option, use it multiple times; as
- a variable, set a number, 0 or greater.</entry>
- </row>
-
- <row>
- <entry>--target</entry>
- <entry>target_triplet</entry>
- <entry>The canonical configuration string for the target.</entry>
- </row>
-
- <row>
- <entry>--host</entry>
- <entry>host_triplet</entry>
- <entry>The canonical configuration string for the host.</entry>
- </row>
-
- <row>
- <entry>--build</entry>
- <entry>build_triplet</entry>
- <entry>The canonical configuration string for the build
- host.</entry>
- </row>
-
- <row>
- <entry>--mail</entry>
- <entry>address</entry>
- <entry>Email the output log to the specified address.</entry>
- </row>
-
- </tbody>
- </tgroup>
- </table>
-
- </sect3>
-
- <sect3 id="personal" xreflabel="Personal Config File">
- <title>Personal Config File</title>
-
- <para>The personal config file is used to customize the
- behaviour of <command>runtest</command> for each user. It is
- typically used to set the preferred verbosity level and to
- define any experimental Tcl procedures. My personal
- <filename>~/.dejagnurc</filename> file looks like:</para>
-
- <example>
- <title>Personal Config File</title>
-
- <programlisting>
- set all_flag 1
- set RLOGIN /usr/ucb/rlogin
- set RSH /usr/local/sbin/ssh
- </programlisting>
- </example>
-
- <para>Here I set <symbol>all_flag</symbol> so I see all the test
- cases that PASS along with the ones that FAIL. I also set
- <symbol>RLOGIN</symbol> to the BSD version. I have
- <productname>Kerberos</productname> installed, and when I rlogin
- to a target board, it usually isn't supported. So I use the
- non-secure version rather than the default that's in my path. I also
- set <symbol>RSH</symbol> to the <productname>SSH</productname>
- secure shell, as rsh is mostly used to test unix
- machines within a local network here.</para>
-
- </sect3>
- </sect2>
-
- </sect1>
-
- <sect1 id="Extending" xreflabel="Extending DejaGnu">
- <title>Extending &dj;</title>
-
- <sect2 id="addsuite" xreflabel="Adding a new testsuite">
- <title>Adding a new testsuite</title>
-
- <para>The testsuite for a new tool should always be located in that
- tool's source directory. &dj; requires the directory to be named
- <filename>testsuite</filename>. Under this directory, the test
- cases go in a subdirectory whose name begins with the tool
- name. For example, for a tool named <emphasis>myprog</emphasis>,
- each subdirectory containing testsuites must start
- with <emphasis>"myprog."</emphasis>.</para>
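-
- <para>For example, a tool named <emphasis>myprog</emphasis>
- might lay out its testsuite as follows (the subdirectory names
- after the <emphasis>myprog.</emphasis> prefix are only
- illustrative):</para>
-
- <example>
- <title>Testsuite Directory Layout</title>
-
- <screen>
- myprog/
-   testsuite/
-     config/unix.exp
-     myprog.basic/first-try.exp
-     myprog.advanced/options.exp
- </screen>
- </example>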
-
- </sect2>
-
- <sect2 id="addtool" xreflabel="Adding A New Tool">
- <title>Adding a new tool</title>
-
- <para>In general, the best way to learn how to write code, or
- even prose, is to read something similar. This principle
- applies to test cases and to testsuites. Unfortunately,
- well-established testsuites have a way of developing their own
- conventions: as test writers become more experienced with &dj;
- and with Tcl, they accumulate more utilities, and take advantage
- of more and more features of
- <productname>Expect</productname>
- and <productname>Tcl</productname> in general. Inspecting such
- established testsuites may make the prospect of creating an
- entirely new testsuite appear overwhelming. Nevertheless, it is
- straightforward to start a new testsuite.</para>
-
- <para>To help orient you further in this task, here is an outline of the
- steps to begin building a testsuite for a program example.</para>
-
- <itemizedlist mark="bullet">
-
- <listitem><para>Create or select a directory to contain your new
- collection of tests. Change into that directory (shown here as
- <filename>testsuite</filename>):</para>
-
- <para>Create a <filename>configure.in</filename> file in this directory,
- to control configuration-dependent choices for your tests. So far as
- &dj; is concerned, the important thing is to set a value for the
- variable <symbol>target_abbrev</symbol>; this value is the link to the
- init file you will write soon. (For simplicity, we assume the
- environment is Unix, and use <emphasis>unix</emphasis> as the
- value.)</para>
-
- <para>What else is needed in <filename>configure.in</filename> depends on
- the requirements of your tool, your intended test environments, and which
- configure system you use. For <productname>GNU
- Autoconf</productname>, a minimal <filename>configure.in</filename>
- suffices. </para></listitem>
-
- <listitem><para>Create <filename>Makefile.in</filename> (if using
- Autoconf), or <filename>Makefile.am</filename> (if using
- Automake), the source file used by configure to build your
- <filename>Makefile</filename>. If you are using GNU Automake, just add the
- keyword <emphasis>dejagnu</emphasis> to the
- <emphasis>AUTOMAKE_OPTIONS</emphasis> variable in your
- <filename>Makefile.am</filename> file. This will add all
- the <filename>Makefile</filename> support needed to run &dj;,
- and support the <xref linkend="makecheck"/> target.</para>
-
- <para>You also need to include two targets important to &dj;:
- <emphasis>check</emphasis>, to run the tests, and
- <emphasis>site.exp</emphasis>, to set up the Tcl copies of
- configuration-dependent values. This is called the
- <xref linkend="local"/>. The <emphasis>check</emphasis> target
- must invoke the <command>runtest</command> program to run the
- tests.</para>
-
- <para>The <emphasis>site.exp</emphasis> target should usually
- set up (among other things) the <emphasis>$tool</emphasis>
- variable for the name of your program. If the
- local <filename>site.exp</filename> file is set up correctly, it
- is possible to execute the tests by merely
- typing <command>runtest</command> on the command line.</para>
-
- <example>
- <title>Sample Makefile.in Fragment</title>
-
- <programlisting>
- # Look for a local version of &dj;, otherwise use one in the path
- RUNTEST = `if test -f $(top_srcdir)/../dejagnu/runtest; then \
- echo $(top_srcdir)/../dejagnu/runtest; \
- else \
- echo runtest; \
- fi`
-
- # Flags to pass to runtest
- RUNTESTFLAGS =
-
- # Execute the tests
- check: site.exp all
- $(RUNTEST) $(RUNTESTFLAGS) \
- --tool <symbol>${example}</symbol> --srcdir $(srcdir)
-
- # Make the local config file
- site.exp: ./config.status Makefile
- @echo "Making a new config file..."
- -@rm -f ./tmp?
- @touch site.exp
-
- -@mv site.exp site.bak
- @echo "## these variables are automatically\
- generated by make ##" > ./tmp0
- @echo "# Do not edit here. If you wish to\
- override these values" >> ./tmp0
- @echo "# add them to the last section" >> ./tmp0
- @echo "set host_os ${host_os}" >> ./tmp0
- @echo "set host_alias ${host_alias}" >> ./tmp0
- @echo "set host_cpu ${host_cpu}" >> ./tmp0
- @echo "set host_vendor ${host_vendor}" >> ./tmp0
- @echo "set target_os ${target_os}" >> ./tmp0
- @echo "set target_alias ${target_alias}" >> ./tmp0
- @echo "set target_cpu ${target_cpu}" >> ./tmp0
- @echo "set target_vendor ${target_vendor}" >> ./tmp0
- @echo "set host_triplet ${host_canonical}" >> ./tmp0
- @echo "set target_triplet ${target_canonical}">>./tmp0
- @echo "set tool binutils" >> ./tmp0
- @echo "set srcdir ${srcdir}" >> ./tmp0
- @echo "set objdir `pwd`" >> ./tmp0
- @echo "set <symbol>${examplename}</symbol> <symbol>${example}</symbol>" >> ./tmp0
- @echo "## All variables above are generated by\
- configure. Do Not Edit ##" >> ./tmp0
- @cat ./tmp0 > site.exp
- @sed &lt; site.bak \
- -e '1,/^## All variables above are.*##/ d' \
- >> site.exp
- -@rm -f ./tmp?
-
- </programlisting>
- </example>
- </listitem>
-
- <listitem><para>Create a directory (in <filename>testsuite</filename>)
- called <filename>config</filename>. Make a <emphasis>Tool Init
- File</emphasis> in this directory. Its name must start with the
- <symbol>target_abbrev</symbol> value, or be named
- <filename>default.exp</filename>; call it
- <filename>config/unix.exp</filename> for our Unix-based
- example. This is the file that contains the target-dependent
- procedures. Fortunately, on a native Unix system, most of
- them do not have to do very much in order
- for <command>runtest</command> to run. If the program being
- tested is not interactive, you can get away with this
- minimal <filename>unix.exp</filename> to begin with:</para>
-
- <example>
- <title>Simple tool init file for batch programs</title>
-
- <programlisting>
- proc myprog_exit {} {}
- proc myprog_version {} {}
- </programlisting>
- </example>
-
- <para>If the program being tested is interactive, however, you might
- as well define a <emphasis>start</emphasis> routine and invoke it by
- using a tool init file like this:</para>
-
- <example>
- <title>Simple tool init file for interactive programs</title>
-
- <programlisting>
- proc myprog_exit {} {}
- proc myprog_version {} {}
-
- proc myprog_start {} {
- global ${examplename}
- spawn ${examplename}
- expect {
- -re "" {}
- }
- }
-
- # Start the program we want to test
- myprog_start
- </programlisting>
- </example>
- </listitem>
-
- <listitem><para>Create a directory whose name begins with your tool's
- name, to contain tests. For example, if your tool's name is
- <emphasis>myprog</emphasis>, then the directories all need to start with
- <emphasis>"myprog."</emphasis>.</para></listitem>
-
- <listitem><para>Create a sample test file. Its name must end with
- <filename>.exp</filename>. You can use
- <filename>first-try.exp</filename>. To begin with, just write there a
- line of Tcl code to issue a message.</para>
-
- <example>
- <title>Testing A New Tool Config</title>
-
- <programlisting>
-
- send_user "Testing: one, two...\n"
-
- </programlisting>
- </example>
- </listitem>
-
- <listitem><para>Back in the <filename>testsuite</filename> (top
- level) directory, run <command>configure</command>. Typically you do
- this while in the build directory. You may have to specify more of a
- path, if a suitable configure is not available in your execution
- path.</para></listitem>
-
- <listitem><para>You are now ready to type <command>make
- check</command> or <command>runtest</command>. You should
- see something like this:</para>
-
- <example>
- <title>Example Test Case Run</title>
-
- <screen>
- Test Run By bje on Sat Nov 14 15:08:54 AEDT 2015
-
- === example tests ===
-
- Running ./example.0/first-try.exp ...
- Testing: one, two...
-
- === example Summary ===
-
- </screen>
- </example>
-
- <para>There is no output in the summary, because so far the
- example does not call any of the procedures that report a
- test outcome.</para></listitem>
-
- <listitem><para>Write some real tests. For an interactive tool, you
- should probably write a real exit routine in fairly short order. In
- any case, you should also write a real version routine
- soon. </para></listitem>
-
- </itemizedlist>
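-
- <para>As a sketch, the minimal
- <filename>configure.in</filename> mentioned in the first step
- above might look like this for an older-style
- <productname>GNU Autoconf</productname> (the exact macros vary
- with the Autoconf version in use):</para>
-
- <example>
- <title>Minimal configure.in for a testsuite</title>
-
- <programlisting>
- dnl Process this file with autoconf to produce a configure script.
- AC_INIT(first-try.exp)
- CC=${CC-cc}
- AC_SUBST(CC)
- AC_CANONICAL_SYSTEM
- target_abbrev=unix
- AC_OUTPUT(Makefile)
- </programlisting>
- </example>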
-
- </sect2>
-
- <sect2 id="addtarget" xreflabel="Adding A New Target">
- <title>Adding A New Target</title>
-
- <para>&dj; has some additional requirements for target support, beyond
- the general-purpose provisions of configure. &dj; must actively
- communicate with the target, rather than simply generating or managing
- code for the target architecture. Therefore, each tool requires an
- initialization module for each target. For new targets, you must supply
- a few Tcl procedures to adapt &dj; to the target. This permits
- &dj; itself to remain target independent.</para>
-
- <para>Usually the best way to write a new initialization module is to
- edit an existing initialization module; some trial and error will be
- required. If necessary, you can use the <option>--debug</option> option to see what
- is really going on.</para>
-
- <para>When you code an initialization module, be generous in
- printing information controlled by
- the <function>verbose</function> procedure. In
- cross-development environments, most of the work is in getting
- the communications right. Code for communicating via TCP/IP
- networks or serial lines is available in a &dj; library files
- such as <filename>lib/telnet.exp</filename>.</para>
-
- <para>If you suspect a communication problem, try running the connection
- interactively from <productname>Expect</productname>. (There are three
- ways of running <productname>Expect</productname> as an interactive
- interpreter. You can run <productname>Expect</productname> with no
- arguments, and control it completely interactively; or you can use
- <command>expect -i</command> together with other command-line options and
- arguments; or you can run the command <command>interpreter</command> from
- any <productname>Expect</productname> procedure. Use
- <command>return</command> to get back to the calling procedure (if any),
- or <command>return -tcl</command> to make the calling procedure itself
- return to its caller; use <command>exit</command> or end-of-file to leave
- Expect altogether.) Run the program whose name is recorded in
- <symbol>$connectmode</symbol>, with the arguments in
- <symbol>$targetname</symbol>, to establish a connection. You should at
- least be able to get a prompt from any target that is physically
- connected.</para>
-
- </sect2>
-
- <sect2 id="addboard" xreflabel="Adding a new board">
- <title>Adding a new board</title>
-
- <para>Adding a new board consists of creating a new board
- configuration file. Examples are in
- <filename>dejagnu/baseboards</filename>. The easiest way to make a
- new board file is usually to copy an existing one. You can also
- base your file on a <emphasis>baseboard</emphasis> file, with only
- one or two changes needed; typically this is as simple as changing
- the linker script. Once the new board file is done, add it to the
- <symbol>boards_DATA</symbol> list in
- <filename>dejagnu/baseboards/Makefile.am</filename>, regenerate the
- <filename>Makefile.in</filename> using automake, then rebuild and
- install &dj;. You can test the new board file as shown below.</para>
-
- <para>Board files use a crude inheritance scheme, so one board
- file can include another. The two main procedures used to do this
- are <function>load_generic_config</function> and
- <function>load_base_board_description</function>. The generic
- config file contains procedures shared by a whole class of
- targets, while the board description file holds the board-specific
- settings. Commonly there are similar target environments that
- differ only in the processor used.</para>
-
- <example>
- <title>Testing a New Board Configuration File</title>
-
- <screen>
- make check RUNTESTFLAGS="--target_board=<emphasis>newboardfile</emphasis>"
- </screen>
- </example>
-
- <para>Here's an example of a board configuration file. There are
- several <emphasis>helper procedures</emphasis> used in this
- example. A helper procedure is one that looks for tools or files
- in commonly installed locations. These are mostly used when
- testing in the build tree, because the executables to be tested
- are in the same tree as the new &dj; files. The helper
- procedures are the ones in square brackets
- <emphasis>[]</emphasis>, which is Tcl's command substitution syntax.</para>
-
- <example>
- <title>Example Board Configuration File</title>
-
- <programlisting>
-
- # Load the generic configuration for this board. This will define a basic
- # set of routines needed by the tool to communicate with the board.
- load_generic_config "sim"
-
- # basic-sim.exp is a basic description for the standard Cygnus simulator.
- load_base_board_description "basic-sim"
-
- # The compiler used to build for this board. This has *nothing* to do
- # with what compiler is tested if we're testing gcc.
- set_board_info compiler "[find_gcc]"
-
- # We only support newlib on this target.
- # However, we include libgloss so we can find the linker scripts.
- set_board_info cflags "[newlib_include_flags] [libgloss_include_flags]"
- set_board_info ldflags "[newlib_link_flags]"
-
- # Use a custom linker script for this board.
- set_board_info ldscript "-Tsim.ld"
-
- # The simulator doesn't return exit statuses and we need to indicate this.
- set_board_info needs_status_wrapper 1
-
- # Can't pass arguments to this target.
- set_board_info noargs 1
-
- # No signals.
- set_board_info gdb,nosignals 1
-
- # And it can't call functions.
- set_board_info gdb,cannot_call_functions 1
-
- </programlisting>
- </example>
-
- </sect2>
-
- <sect2 id="boarddefs" xreflabel="Board File Values">
- <title>Board Configuration File Values</title>
-
- <para>These fields are all in the <symbol>board_info</symbol> array
- and are set with the <function>set_board_info</function>
- and <function>add_board_info</function> procedures as required. Both
- procedures take a field name followed by a value;
- <function>set_board_info</function> sets the field to the value,
- while <function>add_board_info</function> appends the value to the
- field.</para>
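A hypothetical fragment illustrating the difference between the two procedures (the field names are taken from the tables below; the values are only examples):

```tcl
# set_board_info sets the field outright, replacing any prior value.
set_board_info compiler "[find_gcc]"
set_board_info cflags   "-O2"

# add_board_info appends to whatever the field already holds.
add_board_info ldflags  "-lm"
```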
-
- <table frame="all" rowsep="0" colsep="0">
- <title>Common Board Info Fields</title>
-
- <tgroup cols="3" align="char" rowsep="1" colsep="0">
- <thead><row>
- <entry>Field</entry>
- <entry>Sample Value</entry>
- <entry>Description</entry>
- </row></thead>
- <tbody>
-
- <row>
- <entry>compiler</entry>
- <entry>"[find_gcc]"</entry>
- <entry>The path to the compiler to use.</entry>
- </row>
-
- <row>
- <entry>cflags</entry>
- <entry>"-mca"</entry>
- <entry>Compilation flags for the compiler.</entry>
- </row>
-
- <row>
- <entry>ldflags</entry>
- <entry>"[libgloss_link_flags] [newlib_link_flags]"</entry>
- <entry>Linking flags for the compiler.</entry>
- </row>
-
- <row>
- <entry>ldscript</entry>
- <entry>"-Wl,-Tidt.ld"</entry>
- <entry>The linker script to use when cross compiling.</entry>
- </row>
-
- <row>
- <entry>libs</entry>
- <entry>"-lgcc"</entry>
- <entry>Any additional libraries to link in.</entry>
- </row>
-
- <row>
- <entry>shell_prompt</entry>
- <entry>"cygmon>"</entry>
- <entry>The command prompt of the remote shell.</entry>
- </row>
-
- <row>
- <entry>hex_startaddr</entry>
- <entry>"0xa0020000"</entry>
- <entry>The starting address as a hexadecimal string.</entry>
- </row>
-
- <row>
- <entry>start_addr</entry>
- <entry>0xa0008000</entry>
- <entry>The starting address as a value.</entry>
- </row>
-
- <row>
- <entry>startaddr</entry>
- <entry>"a0020000"</entry>
- <entry>The starting address as a string, without the
- <emphasis>0x</emphasis> prefix.</entry>
- </row>
-
- <row>
- <entry>exit_statuses_bad</entry>
- <entry>1</entry>
- <entry>Whether exit statuses from the board are unreliable.</entry>
- </row>
-
- <row>
- <entry>reboot_delay</entry>
- <entry>10</entry>
- <entry>The delay between power off and power on.</entry>
- </row>
-
- <row>
- <entry>unreliable</entry>
- <entry>1</entry>
- <entry>Whether communication with the board is unreliable.</entry>
- </row>
-
- <row>
- <entry>sim</entry>
- <entry>[find_sim]</entry>
- <entry>The path to the simulator to use.</entry>
- </row>
-
- <row>
- <entry>objcopy</entry>
- <entry>$tempfil</entry>
- <entry>The path to the <command>objcopy</command> program.</entry>
- </row>
-
- <row>
- <entry>support_libs</entry>
- <entry>"${prefix_dir}/i386-coff/"</entry>
- <entry>Support libraries needed for cross compiling.</entry>
- </row>
-
- <row>
- <entry>addl_link_flags</entry>
- <entry>"-N"</entry>
- <entry>Additional link flags, rarely used.</entry>
- </row>
-
- </tbody>
- </tgroup>
- </table>
-
- <para>These fields are used by the GCC and GDB tests, and are mostly
- useful to someone trying to debug a new board file for one of
- these tools. Many of these are used only by a few testcases, and
- their purpose is esoteric. They are listed with sample values as a
- guide to better guessing if you need to change any of them.</para>
-
- <table frame="all" rowsep="0" colsep="0">
- <title>Board Info Fields For GCC &amp; GDB</title>
-
- <tgroup cols="3" align="char" rowsep="1" colsep="0">
- <thead><row>
- <entry>Field</entry>
- <entry>Sample Value</entry>
- <entry>Description</entry>
- </row></thead>
- <tbody>
-
- <row>
- <entry>strip</entry>
- <entry>$tempfile</entry>
- <entry>Strip the executable of symbols.</entry>
- </row>
-
- <row>
- <entry>gdb_load_offset</entry>
- <entry>"0x40050000"</entry>
- <entry>The offset to use when loading the executable.</entry>
- </row>
-
- <row>
- <entry>gdb_protocol</entry>
- <entry>"remote"</entry>
- <entry>The GDB debugging protocol to use.</entry>
- </row>
-
- <row>
- <entry>gdb_sect_offset</entry>
- <entry>"0x41000000"</entry>
- </row>
-
- <row>
- <entry>gdb_stub_ldscript</entry>
- <entry>"-Wl,-Teva-stub.ld"</entry>
- <entry>The linker script to use with a GDB stub.</entry>
- </row>
-
- <row>
- <entry>gdb,cannot_call_functions</entry>
- <entry>1</entry>
- <entry>Whether GDB can call functions on the target.</entry>
- </row>
-
- <row>
- <entry>gdb,noargs</entry>
- <entry>1</entry>
- <entry>Set if the target cannot take command line arguments.</entry>
- </row>
-
- <row>
- <entry>gdb,nosignals</entry>
- <entry>1</entry>
- <entry>Set if the target does not support signals.</entry>
- </row>
-
- <row>
- <entry>gdb,short_int</entry>
- <entry>1</entry>
- </row>
-
- <row>
- <entry>gdb,start_symbol</entry>
- <entry>"_start"</entry>
- <entry>The starting symbol in the executable.</entry>
- </row>
-
- <row>
- <entry>gdb,target_sim_options</entry>
- <entry>"-sparclite"</entry>
- <entry>Special options to pass to the simulator.</entry>
- </row>
-
- <row>
- <entry>gdb,timeout</entry>
- <entry>540</entry>
- <entry>Timeout value to use for remote communication.</entry>
- </row>
-
- <row>
- <entry>gdb_init_command</entry>
- <entry>"set mipsfpu none"</entry>
- <entry>A single command to send to GDB before the program being
- debugged is started.</entry>
- </row>
-
- <row>
- <entry>gdb_init_commands</entry>
- <entry>"print/x \$fsr = 0x0"</entry>
- <entry>Same as <emphasis>gdb_init_command</emphasis>, except
- that this is a list, so multiple commands can be supplied.</entry>
- </row>
-
- <row>
- <entry>gdb_opts</entry>
- <entry>"--command gdbinit"</entry>
- </row>
-
- <row>
- <entry>gdb_prompt</entry>
- <entry>"\\(gdb960\\)"</entry>
- <entry>The prompt GDB is using.</entry>
- </row>
-
- <row>
- <entry>gdb_run_command</entry>
- <entry>"jump start"</entry>
- </row>
-
- <row>
- <entry>gdb_stub_offset</entry>
- <entry>"0x12010000"</entry>
- </row>
-
- <row>
- <entry>use_gdb_stub</entry>
- <entry>1</entry>
- <entry>Whether to use a GDB stub.</entry>
- </row>
-
- <row>
- <entry>use_vma_offset</entry>
- <entry>1</entry>
- </row>
-
- <row>
- <entry>wrap_m68k_aout</entry>
- <entry>1</entry>
- </row>
-
- <row>
- <entry>gcc,no_label_values</entry>
- <entry>1</entry>
- </row>
-
- <row>
- <entry>gcc,no_trampolines</entry>
- <entry>1</entry>
- </row>
-
- <row>
- <entry>gcc,no_varargs</entry>
- <entry>1</entry>
- </row>
-
- <row>
- <entry>gcc,stack_size</entry>
- <entry>16384</entry>
- <entry>Stack size to use with some GCC testcases.</entry>
- </row>
-
- <row>
- <entry>ieee_multilib_flags</entry>
- <entry>"-mieee"</entry>
- </row>
-
- <row>
- <entry>is_simulator</entry>
- <entry>1</entry>
- <entry>Whether the target is a simulator.</entry>
- </row>
-
- <row>
- <entry>needs_status_wrapper</entry>
- <entry>1</entry>
- <entry>Whether a wrapper is needed to report the program's exit
- status.</entry>
- </row>
-
- <row>
- <entry>no_double</entry>
- <entry>1</entry>
- </row>
-
- <row>
- <entry>no_long_long</entry>
- <entry>1</entry>
- </row>
-
- <row>
- <entry>noargs</entry>
- <entry>1</entry>
- <entry>Set if the target cannot take command line arguments.</entry>
- </row>
-
- <row>
- <entry>nullstone,lib</entry>
- <entry>"mips-clock.c"</entry>
- </row>
-
- <row>
- <entry>nullstone,ticks_per_sec</entry>
- <entry>3782018</entry>
- </row>
-
- <row>
- <entry>sys_speed_value</entry>
- <entry>200</entry>
- </row>
-
- <row>
- <entry>target_install</entry>
- <entry>{sh-hms}</entry>
- </row>
-
- </tbody>
- </tgroup>
- </table>
-
- </sect2>
-
- <sect2 id="writing" xreflabel="Writing A Test Case">
- <title>Writing A Test Case</title>
-
- <para>The easiest way to prepare a new test case is to base it
- on an existing one for a similar situation. There are two major
- categories of tests: batch and interactive. Batch oriented tests
- are usually easier to write.</para>
-
- <para>The GCC tests are a good example of batch oriented tests.
- All GCC tests consist primarily of a call to a single common
- procedure, since all the tests either have no output, or only
- have a few warning messages when successfully compiled. Any
- non-warning output is a test failure. All the C code needed is
- kept in the test directory. The test driver, written in Tcl,
- need only get a listing of all the C files in the directory, and
- compile them all using a generic procedure. This procedure and a
- few others supporting these tests are kept in the library
- module <filename>lib/c-torture.exp</filename> in the GCC test
- suite. Most tests of this kind use very few
- <productname>Expect</productname> features, and are coded almost
- purely in Tcl.</para>
-
- <para>Writing the complete suite of C tests, then, consisted of
- these steps:</para>
-
- <itemizedlist mark="bullet">
- <listitem><para>Copying all the C code into the test directory.
- These tests were based on the C-torture test created by Torbjorn
- Granlund (on behalf of the Free Software Foundation) for GCC
- development.</para></listitem>
-
- <listitem><para>Writing (and debugging) the generic Tcl procedures for
- compilation.</para></listitem>
-
- <listitem><para>Writing the simple test driver: its main task is to
- search the directory (using the Tcl procedure
- <emphasis>glob</emphasis> for filename expansion with wildcards)
- and call a Tcl procedure with each filename. It also checks for
- a few errors from the testing procedure.</para></listitem>
- </itemizedlist>
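The steps above can be sketched in a few lines of Tcl. This is an illustrative sketch, not the actual GCC driver: the procedure name <function>c-torture-execute</function> and the directory variables are assumptions standing in for the real library code.

```tcl
# Illustrative sketch of a batch-oriented test driver: find every
# C file in the test directory and hand each one to a generic
# compile-and-check procedure from the library module.
foreach src [lsort [glob -nocomplain $srcdir/$subdir/*.c]] {
    verbose "Testing $src" 1
    c-torture-execute $src    ;# hypothetical generic procedure
}
```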
-
- <para>Testing interactive programs is intrinsically more
- complex. Tests for most interactive programs require some trial
- and error before they are complete.</para>
-
- <para>However, some interactive programs can be tested in a
- simple fashion reminiscent of batch tests. For example, prior
- to the creation of &dj;, the GDB distribution already
- included a wide-ranging testing procedure. This procedure was
- very robust, and had already undergone much more debugging and
- error checking than many recent &dj; test cases.
- Accordingly, the best approach was simply to encapsulate the
- existing GDB tests, for reporting purposes. Thereafter, new GDB
- tests built up a family of Tcl procedures specialized for GDB
- testing.</para>
-
- </sect2>
-
- <sect2 id="debugging" xreflabel="Debugging A Test Case">
- <title>Debugging A Test Case</title>
-
- <para>These are the kinds of debugging information available
- from &dj;:</para>
-
- <itemizedlist mark="bullet">
-
- <listitem><para>Output controlled by test scripts themselves,
- explicitly allowed for by the test author. This kind of
- debugging output appears in the detailed output recorded in the
- &dj; log file. To do the same for new tests, use the
- <command>verbose</command> procedure (which in turn uses the
- variable also called <emphasis>verbose</emphasis>) to control
- how much output to generate. This will make it easier for other
- people running the test to debug it if necessary. Whenever
- possible, if <emphasis>$verbose</emphasis> is
- <emphasis>0</emphasis>, there should be no output other than the
- output from <emphasis>pass</emphasis>,
- <emphasis>fail</emphasis>, <emphasis>error</emphasis>, and
- <emphasis>warning</emphasis>. Then, to whatever extent is
- appropriate for the particular test, allow successively higher
- values of <emphasis>$verbose</emphasis> to generate more
- information. Be kind to other programmers who use your tests:
- provide for a lot of debugging information.</para></listitem>
-
- <listitem><para>Output from the internal debugging functions of
- Tcl and <productname>Expect</productname>. There is a command
- line option for each; both forms of debugging output are
- recorded in the file <filename>dbg.log</filename> in the current
- directory.</para>
-
- <para>Use <option>--debug</option> for information from the
- Expect level; it displays Expect's attempts to match the tool
- output against the specified patterns. This output
- can be very helpful while developing test scripts, since it
- shows precisely the characters received. Iterating between the
- latest attempt at a new test script and the corresponding
- <filename>dbg.log</filename> can allow you to create the final
- patterns by ``cut and paste''. This is sometimes the best way
- to write a test case.</para></listitem>
-
- <listitem><para>Use <option>--strace</option> to see more
- detail at the Tcl level; this shows how Tcl procedure
- definitions expand, as they execute. The associated number
- controls the depth of definitions expanded.</para></listitem>
-
- <listitem><para>Finally, if the value of
- <emphasis>verbose</emphasis> is 3 or greater, &dj; turns on
- the expect command <command>log_user</command>. This command
- prints all expect actions to the expect standard output, to the
- detailed log file, and (if <option>--debug</option> is on) to
- <filename>dbg.log</filename>.</para></listitem>
- </itemizedlist>
-
- </sect2>
-
- <sect2 id="adding" xreflabel="Adding a test case to a testsuite">
- <title>Adding a test case to a testsuite</title>
-
- <para>There are two slightly different ways to add a test
- case. One is to add the test case to an existing directory. The
- other is to create a new directory to hold your test. The
- existing test directories represent several styles of testing,
- all of which are slightly different; examine the directories for
- the tool of interest to see which (if any) is most suitable.</para>
-
- <para>Adding a GCC test can be very simple: just add the C code
- to any directory beginning with <filename>gcc</filename> and it
- will run on the next invocation of:</para>
- <programlisting>runtest --tool gcc</programlisting>
-
- <para>To add a test to GDB, first add any source code you will
- need to the test directory. Then you can either create a new
- expect file, or add your test to an existing one (any
- file with a <emphasis>.exp</emphasis> suffix). Creating a new
- .exp file is probably a better idea if the test is significantly
- different from existing tests. Adding it as a separate file also
- makes upgrading easier. If the C code has to be already compiled
- before the test will run, then you'll have to add it to the
- <filename>Makefile.in</filename> file for that test directory,
- then run <command>configure</command> and
- <command>make</command>.</para>
-
- <para>Adding a test by creating a new directory is very
- similar:</para>
-
- <itemizedlist mark="bullet">
-
- <listitem><para>Create the new directory. All subdirectory names
- begin with the name of the tool to test; e.g. G++ tests might be
- in a directory called <filename>g++.other</filename>. There can
- be multiple test directories that start with the same tool name
- (such as <emphasis>g++</emphasis>).</para></listitem>
-
- <listitem><para>Add the new directory name to the
- <symbol>configdirs</symbol> definition in the
- <filename>configure.in</filename> file for the testsuite
- directory. This way when <command>make</command> and
- <command>configure</command> next run, they include the new
- directory.</para></listitem>
-
- <listitem><para>Add the new test case to the directory, as
- above. </para></listitem>
-
- <listitem><para>To add support in the new directory for
- configure and make, you must also create a
- <filename>Makefile.in</filename> and a
- <filename>configure.in</filename>.</para></listitem>
- </itemizedlist>
-
- </sect2>
-
- <sect2 id="hints" xreflabel="Hints On Writing A Test Case">
- <title>Hints On Writing A Test Case</title>
-
- <para>It is safest to write patterns that match all the output
- generated by the tested program; this is called closure.
- If a pattern does not match the entire output, any output that
- remains will be examined by the next <command>expect</command>
- command. In this situation, the precise boundary that determines
- which <command>expect</command> command sees what is very
- sensitive to timing between the Expect task and the task running
- the tested tool. As a result, the test may sometimes appear to
- work, but is likely to have unpredictable results. (This problem
- is particularly likely for interactive tools, but can also
- affect batch tools---especially for tests that take a long time
- to finish.) The best way to ensure closure is to use the
- <option>-re</option> option for the <command>expect</command>
- command to write the pattern as a full regular expression; then
- you can match the end of output using a <emphasis>$</emphasis>.
- It is also a good idea to write patterns that match all
- available output by using <emphasis>.*</emphasis> after the
- text of interest; this will also match any intervening blank
- lines. Sometimes an alternative is to match the end of a line
- using <emphasis>\r</emphasis> or <emphasis>\n</emphasis>, but
- this is usually too dependent on terminal settings.</para>
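As an illustration, an <command>expect</command> command that achieves closure against a hypothetical GDB interaction might look like this (the prompt and messages are examples, not taken from a real test):

```tcl
# Match the interesting text plus everything up to the prompt, so
# no stray output is left over for the next expect command.  The
# trailing $ anchors the pattern at the end of the output buffer.
expect {
    -re "Breakpoint 1 at.*\\(gdb\\) $" { pass "set breakpoint" }
    timeout                            { fail "set breakpoint (timeout)" }
}
```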
-
- <para>Always escape punctuation, such as <emphasis>(</emphasis>
- or <emphasis>&quot;</emphasis>, in your patterns; for example, write
- <emphasis>\(</emphasis>. If you forget to escape punctuation,
- you will usually see an error message like:</para>
- <programlisting>extra characters after close-quote</programlisting>
-
- <para>If you have trouble understanding why a pattern does not
- match the program output, try using the <option>--debug</option>
- option to <command>runtest</command>, and examine the debug log
- carefully.</para>
-
- <para>Be careful not to neglect output generated by setup rather
- than by the interesting parts of a test case. For example,
- while testing GDB, I send a <emphasis>set height
- 0\n</emphasis> command. The purpose is simply to make sure GDB
- never calls a paging program. The <emphasis>set
- height</emphasis> command in GDB does not generate any
- output; but running any command makes GDB issue a new
- <emphasis>(gdb) </emphasis> prompt. If there were no
- <command>expect</command> command to match this prompt, the
- output <emphasis>(gdb) </emphasis> begins the text seen by the
- next <command>expect</command> command---which might make that
- pattern fail to match.</para>
-
- <para>To preserve basic sanity, I also recommend that no test
- ever pass if there was any kind of problem in the test case. To
- take an extreme case, tests that pass even when the tool will
- not spawn are misleading. Ideally, a test in this sort of
- situation should not fail either. Instead, print an error
- message by calling one of the &dj; procedures
- <command>error</command> or <command>warning</command>.</para>
-
- </sect2>
-
- <sect2 id="tvariables" xreflabel="Test case variables">
- <title>Test case special variables</title>
-
- <para>There are special variables that contain other information
- from &dj;. Your test cases can inspect these variables, as well
- as the variables saved in
- <filename>site.exp</filename>. These variables should never be
- changed.</para>
-
- <variablelist>
- <varlistentry>
- <term>$prms_id</term>
- <listitem><para>The bug tracking system (e.g. PRMS/GNATS)
- number identifying a corresponding bug report
- (<emphasis>0</emphasis> if you do not specify
- it).</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term>$bug_id</term>
- <listitem><para>An optional bug ID, perhaps a bug
- identification number from another organization
- (<emphasis>0</emphasis> if you do not specify
- it).</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term>$subdir</term>
- <listitem><para>The subdirectory for the current test
- case.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term>$exec_output</term>
- <listitem><para>This is the output from a
- <function>${tool}_load</function> command. This only applies to
- tools like GCC and GAS which produce an object file that must in
- turn be executed to complete a test.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term>$comp_output</term>
- <listitem><para>This is the output from a
- <function>${tool}_start</function> command. This is conventionally
- used for batch oriented programs, like GCC and GAS, that may
- produce interesting output (warnings, errors) without further
- interaction.</para></listitem>
- </varlistentry>
-
- <varlistentry>
- <term>$expect_out(buffer)</term>
- <listitem><para>The output from the last command. This is an
- internal variable set by Expect. More information can be found in
- the Expect manual.</para></listitem>
- </varlistentry>
-
- </variablelist>
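For example, a batch-oriented test script might inspect <symbol>$comp_output</symbol> after starting a compile; the variable <emphasis>testname</emphasis> and the message text here are illustrative:

```tcl
# After ${tool}_start has run, inspect the captured compiler
# output.  An empty $comp_output means a clean compile.
if { $comp_output == "" } {
    pass "$subdir/$testname compiled cleanly"
} else {
    fail "$subdir/$testname produced unexpected output"
}
```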
-
- </sect2>
-
-</sect1>
-
- <sect1 id="unit" xreflabel="Unit Testing">
- <title>Unit Testing</title>
-
- <sect2 id="unittest" xreflabel="What Is Unit Testing?">
- <title>What Is Unit Testing?</title>
-
- <para>Most regression testing as done by &dj; is system testing:
- the complete application is tested all at once. Unit testing is
- for testing single files, or small libraries. In this case, each
- file is linked with a test case in C or C++, and each function
- or class and method is tested in series, with the test case
- having to check private data or global variables to see if the
- function or method worked.</para>
-
- <para>This works particularly well for testing APIs, since
- problems can be debugged at a level where it is easier to isolate
- them than by tracing through the entire application. Also, if
- there is a specification for the API being tested, the testcase
- can double as a compliance test.</para>
-
- </sect2>
-
- <sect2 id="djh" xreflabel="The dejagnu.h header file">
- <title>The dejagnu.h header file</title>
-
- <para>&dj; uses a single header
- file, <filename>dejagnu.h</filename>, to assist in unit
- testing. Because this file produces its own test state output,
- it can be run stand-alone, which is very useful for testing on
- embedded systems. This header file has a C and a C++ API for the
- test states, with simple totals and standardized
- output. Because the output is standardized, &dj; can be
- made to work with such a test case with almost no new
- Tcl. The library module, <filename>dejagnu.exp</filename>, will
- look for the output messages, and then merge them into &dj;'s
- own results.</para>
-
- </sect2>
-
- <sect2 id="cunit" xreflabel="C Unit Testing API">
- <title>C Unit Testing API</title>
-
- <para>All of the functions that take a
- <parameter>msg</parameter> parameter expect a C char * containing
- the message to be displayed. There is currently no support for
- variable length arguments.</para>
-
- <sect3 id="passfunc" xreflabel="pass function">
- <title>Pass Function</title>
-
- <para>This prints a message for a successful test
- completion.</para>
-
- <funcsynopsis role="C">
- <funcprototype>
- <funcdef><function>pass</function></funcdef>
- <paramdef><parameter>msg</parameter></paramdef>
- </funcprototype>
- </funcsynopsis>
-
- </sect3>
-
- <sect3 id="failfunc" xreflabel="fail function">
- <title>Fail Function</title>
-
- <para>This prints a message for an unsuccessful test
- completion.</para>
-
- <funcsynopsis role="C">
- <funcprototype>
- <funcdef><function>fail</function></funcdef>
- <paramdef><parameter>msg</parameter></paramdef>
- </funcprototype>
- </funcsynopsis>
-
- </sect3>
-
- <sect3 id="untestedfunc" xreflabel="untested function">
- <title>Untested Function</title>
-
- <para>This prints a message for a test case that is not run
- for some technical reason.</para>
-
- <funcsynopsis role="C">
- <funcprototype>
- <funcdef><function>untested</function></funcdef>
- <paramdef><parameter>msg</parameter></paramdef>
- </funcprototype>
- </funcsynopsis>
- </sect3>
-
- <sect3 id="unresolvedfunc" xreflabel="unresolved function">
- <title>Unresolved Function</title>
-
- <para>This prints a message for a test case that is run,
- but there is no clear result. These output states require a
- human to look over the results to determine what happened.
- </para>
-
- <funcsynopsis role="C">
- <funcprototype>
- <funcdef><function>unresolved</function></funcdef>
- <paramdef><parameter>msg</parameter></paramdef>
- </funcprototype>
- </funcsynopsis>
- </sect3>
-
- <sect3 id="totalsfunc" xreflabel="totals function">
- <title>Totals Function</title>
-
- <para>This prints the accumulated totals for all of the test
- state outputs.</para>
-
- <funcsynopsis role="C">
- <funcprototype>
- <funcdef><function>totals</function></funcdef>
- <paramdef><parameter></parameter></paramdef>
- </funcprototype>
- </funcsynopsis>
- </sect3>
-
- </sect2>
-
- <sect2 id="cppunit" xreflabel="C++ Unit Testing API">
- <title>C++ Unit Testing API</title>
-
- <para>All of the methods that take a
- <parameter>msg</parameter> parameter accept either a C char *
- or an STL string containing the message to be
- displayed. There is currently no support for variable
- length arguments.</para>
-
- <sect3 id="passmeth" xreflabel="pass method">
- <title>Pass Method</title>
-
- <para>This prints a message for a successful test
- completion.</para>
-
- <funcsynopsis role="C++">
- <funcprototype>
- <funcdef><function>TestState::pass</function></funcdef>
- <paramdef><parameter>msg</parameter></paramdef>
- </funcprototype>
- </funcsynopsis>
- </sect3>
-
- <sect3 id="failmeth" xreflabel="fail method">
- <title>Fail Method</title>
-
- <para>This prints a message for an unsuccessful test
- completion.</para>
-
- <funcsynopsis role="C++">
- <funcprototype>
- <funcdef><function>TestState::fail</function></funcdef>
- <paramdef><parameter>msg</parameter></paramdef>
- </funcprototype>
- </funcsynopsis>
- </sect3>
-
- <sect3 id="untestedmeth" xreflabel="untested method">
- <title>Untested Method</title>
-
- <para>This prints a message for a test case that is not run
- for some technical reason.</para>
-
- <funcsynopsis role="C++">
- <funcprototype>
- <funcdef><function>TestState::untested</function></funcdef>
- <paramdef><parameter>msg</parameter></paramdef>
- </funcprototype>
- </funcsynopsis>
- </sect3>
-
- <sect3 id="unresolvedmeth" xreflabel="unresolved method">
- <title>Unresolved Method</title>
-
- <para>This prints a message for a test case that is run,
- but there is no clear result. These output states require a
- human to look over the results to determine what happened.
- </para>
-
- <funcsynopsis role="C++">
- <funcprototype>
- <funcdef><function>TestState::unresolved</function></funcdef>
- <paramdef><parameter>msg</parameter></paramdef>
- </funcprototype>
- </funcsynopsis>
- </sect3>
-
- <sect3 id="totalsmeth" xreflabel="totals method">
- <title>Totals Method</title>
-
- <para>This prints the accumulated totals for all of the test
- state outputs.</para>
-
- <funcsynopsis role="C++">
- <funcprototype>
- <funcdef><function>TestState::totals</function></funcdef>
- <paramdef><parameter></parameter></paramdef>
- </funcprototype>
- </funcsynopsis>
- </sect3>
- </sect2>
-</sect1>
-
-<!-- Keep this comment at the end of the file
-Local variables:
-mode: sgml
-sgml-omittag:t
-sgml-shorttag:t
-sgml-namecase-general:t
-sgml-general-insert-case:lower
-sgml-minimize-attributes:nil
-sgml-always-quote-attributes:t
-sgml-indent-step:1
-sgml-indent-data:nil
-sgml-parent-document:nil
-sgml-exposed-tags:nil
-sgml-local-catalogs:nil
-sgml-local-ecat-files:nil
-End:
--->