T2C Code Generator Reference

The T2C code generator is one of the main components of T2C Framework. It creates the C or C++ source code of TET-compliant tests based on the data presented in T2C files. It also generates necessary makefiles for individual tests and the test suite as a whole.

Part I. Using T2C Code Generator

1. Overview

The code generator creates the C or C++ source code of TET-compliant tests based on the data presented in T2C files. A T2C file contains the source code of the tests with special placeholders for the test parameters; this is, in fact, a template for the test case code. The sets of test parameters are also specified in the T2C file. The code generator creates a C function for each test purpose by substituting the actual parameter values for these placeholders.

[Figure: Test code generation]

2. Building the code generator

Normally the code generator is built along with the other parts of the T2C Framework (see the "T2C: Getting Started" tutorial for instructions). We assume below that the T2C_ROOT environment variable contains the absolute path to the main directory of the T2C Framework.

The code generator is located in the $T2C_ROOT/t2c/bin/ directory. To rebuild its executable and support libraries, execute $T2C_ROOT/t2c/build.sh.

3. Directory structure of a test suite

To be able to use the code generator, we need to place the T2C files, the requirement catalogue files and other necessary files in the directory structure described below.

The root of this directory structure is assumed to be specified in the T2C_SUITE_ROOT environment variable.

The subdirectories of src (denoted below as <s1> and <s2>) are just used to group the T2C files. Often there is only one subdirectory in src that contains all the T2C files.

T2C_SUITE_ROOT
|   
+-- <main_suite_name>-t2c/  // the test suite
    |
    +-- <subsystem>-t2c/    // a "subsuite" containing tests for a given library (like glib, fontconfig, …)
        |
        +-- include/        // common include files for the tests for this library
        +-- reqs/           // requirement catalogue
        |   |
        |   +-- <group1>.xml
        |   +-- <group2>.xml
        |   +-- <group3>.xml
        |   …   …
        +-- src/            // T2C files
        |   |
        |   +-- <s1>/ // This directory contains a group of T2C files
        |   |   |
        |   |   +-- <group1>.t2c
        |   |   +-- <group2>.t2c
        |   |
        |   +-- <s2>/
        |   |   |
        |   …   …
        +-- scenarios/      // TET scenarios
        |   |
        |   +-- func_scen
        |
        +-- tests/          // Generated C-files are placed here
        |   |
        |   +-- <group1>/
        |   |   |
        |   |   +-- <group1>.c
        |   |   
        |   +-- <group2>/
        |   |   |
        |   |   +-- <group2>.c    
        |   …
        +-- testdata/       // Data used by the tests should be stored here
        |   |
        |   +-- testdata_src/   // Contains a makefile and source code 
        |   |                   // of the test data that needs to be built (modules, etc.)
        |   +-- <group1>/
        |   |   |
        |   |   …
        |   +-- <group2>/
        |   |   |
        |   |   …
        |   …
        …

4. Requirement catalogue

4.1. Overview

A requirement catalogue is created from the HTML file in which the requirements have been marked up. This catalogue is used by the tests in failure reporting. It contains the ID and the text (original or reformulated, if specified) of each requirement. The file containing the default requirement catalogue for the tests from a T2C file has the same name as the latter, but its extension is xml.

The catalogue is created with the ReqTools JavaScript in a web browser. See the "T2C: Getting Started" tutorial for instructions on how to do this.

The currently used catalogue file format is rather simple. The catalogue file is an XML file that contains an ordinary header ("<?xml version="1.0"?>"), a root element ("requirements") and one or more "req" elements within the root element. The requirement ID is specified in the "id" attribute of this tag, while its body is the requirement's text. (See an example below.)

<?xml version="1.0"?>
<requirements>
<req id="atk_streamable_content_get_mime_type.01">
Returns a gchar* representing the specified mime type.
</req>
<req id="app.atk_streamable_content_get_mime_type.02">
streamable: a GObject instance that implements AtkStreamableContentIface.
</req>
<req id="app.atk_streamable_content_get_mime_type.03">
i: a gint representing the position of the mime type starting from 0.
</req>
<req id="atk_streamable_content_get_n_mime_types.01">
Returns a gint which is the number of mime types supported by the object.
</req>
<req id="app.atk_streamable_content_get_n_mime_types.02">
streamable: a GObject instance that implements AtkStreamableContentIface.
</req>
<req id="atk_streamable_content_get_stream.01">
Returns a GIOChannel that contains the content of the specified mime type.
</req>
<req id="app.atk_streamable_content_get_stream.02">
streamable: a GObject instance that implements AtkStreamableContentIface.
</req>
<req id="app.atk_streamable_content_get_stream.03">
mime_type: a gchar* representing the mime type.
</req>
</requirements> 

By default "<<< REQ NOT FOUND >>>" is used as a requirement text for the requirements missing from the catalogues.

Sometimes the requirements for a group of interfaces are specified in several HTML files rather than one. In this case the complete requirement catalogue for the group should be assembled manually from the parts (XML files) generated by the ReqTools script for each of the HTML files with the documentation. (That is, these XML files should be merged into a single XML file.)

4.2. Using several requirement catalogues in a test

It is possible to use more than one requirement catalogue in a T2C file. It can be useful, for example, if there are some requirements that are checked in the tests for different functional groups of interfaces. These "common" requirements can be listed in a separate catalogue (xml file) and then used in any T2C file that needs them.

To specify additional requirement catalogues required by the tests in a T2C file, list their names in the "#additional_req_catalogues" directive, separated by spaces (' '), commas (',') or semicolons (';').

Example:
#library    libMine-1.0
#libsection MyLibSection
#additional_req_catalogues AtkAction; AtkRole

Note that only the names of the XML files containing the requirement catalogues are specified.

If "#additional_req_catalogues" directive is missing, only the default requirement catalogue (the one with the same name as the T2C file) will be used.

By default, if neither the default nor the additional requirement catalogues exist, an error message will be displayed by the startup function and the test purposes will not be executed. This can be circumvented by defining (#define) the "T2C_IGNORE_RCAT_ERRORS" symbol in the code of the tests before compiling them.
To do this, you can specify
-D"T2C_IGNORE_RCAT_ERRORS"
in the COMPILER_FLAGS parameter in a .cfg file for this test suite (this parameter is described below).
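
For example, the relevant line of the generator configuration file might look as follows (a hypothetical snippet; other flags may be listed there as well):
COMPILER_FLAGS=-D"T2C_IGNORE_RCAT_ERRORS"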

5. Executing the code generator from the command line

5.1. Command line syntax

Synopsis:
    t2c <main_dir> <test_dir> [cfg_path]

The paths are relative to the directory specified in $T2C_SUITE_ROOT environment variable.

main_dir -
the main directory of the test suites (the <main_suite_name>-t2c directory in the structure shown above). The subdirectories of <main_dir> contain the test "subsuites" for the particular subsystems (libraries) to be tested.
It is often the same directory as specified in the T2C_SUITE_ROOT environment variable, so "." is often used as <main_dir>.

The main TET scenario file (tet_scen) resides in this directory. If this file does not exist when the code generator is invoked, it will be created; otherwise an appropriate record will be added to the existing file.

tetexec.cfg should also reside in this directory. (This file is not generated automatically in this version of T2C.)

test_dir -
path to the directory of a test suite to be generated. Example: sample-01-t2c.

The code generator looks for the T2C files in the src subdirectory of this directory. The generated C files will be placed in the tests subdirectory, and the local scenario file(s) in scenarios. For the above example these directories are the following (assuming $T2C_SUITE_ROOT is samples/): sample-01-t2c/src, sample-01-t2c/tests, sample-01-t2c/scenarios.

cfg_path -
(optional) path to the configuration file of the generator. If this path is not specified, default configuration will be used.

Example: (this command is executed from the samples directory)
    $T2C_ROOT/t2c/bin/t2c . sample-01-t2c sample-01-t2c/sample-01.cfg

5.2. Configuration parameters

The parameters can be specified in a configuration file of the code generator as follows:

<NAME>=<value>

Example:
COMPILER=lsbcc
COMPILER_FLAGS=`pkg-config --cflags glib-2.0` -DCHECK_EXT_REQS
LINKER_FLAGS=`pkg-config --libs glib-2.0`
TET_SCEN_RECORD=no
WAIT_TIME=180

The parameters can be listed in any order, one per line. Empty lines are ignored.


COMPILER -
the compiler to be used to build the generated tests. The value of this parameter (as well as COMPILER_FLAGS and LINKER_FLAGS) goes to the common test makefile, common.mk.

Default value: "gcc".


COMPILER_FLAGS and LINKER_FLAGS -
additional compiler and linker options required to build the tests. These flags will be copied to the makefiles of these tests.

Default value for both parameters: an empty string.

The contents of $T2C_SUITE_ROOT/cc_special_flags (if this file exists) will also be loaded as additional compiler flags in the generated makefiles. The file usually contains version-specific compiler flags (e.g. -fno-stack-protector for gcc 4.1.0 and newer). This file is often absent or empty; you should only use it when you really have to.


TET_SCEN_RECORD -
if the value of this parameter is "yes" (or "YES"), the generator will add a proper record in the main TET scenario file (tet_scen) after the code of the test is created. This record may look like this:

    :include:/sample-t2c/scenarios/func_scen

If this parameter has any other value (e.g. "no"), no record is added in tet_scen.

Default value: "yes".


WAIT_TIME -
if positive, specifies how long (in seconds) any single test purpose is allowed to run. If this amount of time expires, the test purpose is terminated and its result code is set to "TIME EXPIRED" (numeric value: 65).
If this parameter has zero or negative value, there will be no restrictions on test purpose execution time.

Default value: 30.

It is highly recommended to specify a reasonable positive value for WAIT_TIME. This ensures that the test suite execution will finish even if some of the tests hang.

WAIT_TIME is ignored in the standalone version of a test (as if it were set to 0). Otherwise a test being debugged would be interrupted after the specified time had expired, which is probably not what you want.


LANGUAGE -
if the value is "CPP" or "cpp", the code and makefiles are generated for a C++ compiler, otherwise for C. Generated C++ source files have the "cpp" extension, while the "c" extension is used for C source files.

Default value: "C".


SINGLE_PROCESS -
if "YES" or "yes", all tests are executed in the same process, otherwise - in separate process each. Ignored in standalone mode where all tests as well as the startup and cleanup code are always executed in the same process.

Default value: "no".


MAKEFILE_TEMPLATE -
Specifies the path to a custom makefile template for a given subsuite (relative to the subsuite directory, $T2C_SUITE_ROOT/<subsystem>-t2c/, see the directory structure above).

The usage of custom makefile templates is described in the "Makefiles for the generated tests" section of this reference.

Example (from $T2C_SUITE_ROOT/mytests-t2c/mytests.cfg):
...
MAKEFILE_TEMPLATE=src/my_make.tmk
...

In this example $T2C_SUITE_ROOT/mytests-t2c/src/my_make.tmk is used as a template makefile for a given test subsuite.

Default value: an empty string.


6. Makefiles for the generated tests

The T2C code generator creates a makefile for each test (an "individual makefile") as well as a common makefile (common.mk) where some common build options are defined. By default, individual makefiles are generated from a template provided with the T2C Framework, which can be found at $T2C_ROOT/t2c/src/templates/default.tmk (the "tmk" extension means "template makefile"). It is not recommended to edit this file as it may change in future releases of the T2C Framework.

To create an individual makefile for a test, the code generator substitutes the special strings ("placeholders") in the template makefile with appropriate values. Currently there are only the following placeholder strings:

Sometimes it could be convenient to use some custom makefile template instead of the default one. This can be useful, for example, if you want to perform some pre- and post-build steps while building the tests as well as in many other situations.

T2C allows you to specify custom makefile templates for a test subsuite and/or for individual tests. To set a makefile template for the subsuite as a whole, use the MAKEFILE_TEMPLATE configuration parameter of the code generator; the template makefile may have any name in this case. To specify a makefile template for a particular test, give the makefile template the same name as the corresponding T2C file but with the "tmk" extension instead of "t2c", and place this file in the same directory as the T2C file.
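
For example (the file names are hypothetical):

    src/<s1>/glib_arrays.t2c    - a T2C file
    src/<s1>/glib_arrays.tmk    - the custom makefile template for its tests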

Perhaps the easiest way to provide custom makefile templates for your tests is to copy the default template, adjust the copy as needed, give it an appropriate name and instruct T2C to use it (as described above).

Note that the generated makefiles must provide at least the make targets the framework relies on when building the tests: "all", "clean" and "debug" (see "Building the test suite" and "Standalone test execution and debugging" below). You may add new targets if necessary.

The makefile templates are processed by the T2C code generator as follows. First, it looks for a custom makefile template for the particular test (T2C file), i.e. a file with the same name and the "tmk" extension. If there is such a file, it is used as the makefile template. Otherwise, the code generator looks for a custom makefile template for the test subsuite and uses it if one is specified. Finally, if neither an individual nor a subsuite makefile template is specified, the default one (provided by the T2C Framework) is used. This gives the test developer finer control over how the tests are built.

7. Building the test suite

Before building the tests, please ensure that the TET_ROOT environment variable contains the path where you have TET installed (usually /opt/lsb-tet3-lite) and T2C_ROOT is set properly.

To build the tests from the generated test sources, change the current directory to the output directory (<test_dir>/tests, see the synopsis above) and run make.

make will be invoked automatically for each generated source file.

For now, building and cleaning the tests by means of TET is not supported. It is recommended to do this manually (running make or make clean, respectively, from <test_dir>/tests).
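
For example, for the sample suite used above:

    cd $T2C_SUITE_ROOT/sample-01-t2c/tests
    make          # build all the generated tests
    make clean    # remove the build results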

8. Running the tests under TET

8.1. Running the tests

The T2C_SUITE_ROOT environment variable should be defined properly before one tries to run the tests (see above).

To run the tests, do the following. Change current directory to $T2C_SUITE_ROOT. Then set the TET_SUITE_ROOT environment variable to ".":

    export TET_SUITE_ROOT=.
Run TET's test case controller (the dot is essential here):
    tcc -e .

The test cases specified in tet_scen file will be executed.

8.2. Turning trace message output to stderr on and off

While the tests are executed, all trace messages (see the description of the TRACE and REQ macros) are sent only to the TET journal by default. If you need to output them to the stderr stream as well, define the "VERBOSE" TET variable and set its value to "yes" when calling tcc.

This can be done via tcc command line options:
    tcc -e -v"VERBOSE=yes" .
Alternatively, you can define this variable in tetexec.cfg:
VERBOSE=yes

No whitespace characters are allowed before or after the "=" character.

If this TET variable is not defined or it has any value except "yes", the output to stderr is turned off.

Messages that are output directly to stderr without using TRACE or the like are not affected by this option. They are always output to stderr and do not show up in the journal (unless TET output capture is on - see TETWare User Guide for details).

9. Standalone test execution and debugging

For each subsystem to be tested, T2C generates not only the required C sources but also makefiles. Among these is common.mk, which can be found in $T2C_SUITE_ROOT/<test_dir>/tests/<test_name>/. (This .mk file contains common definitions for building the tests and is included in the makefile of each test.) These makefiles contain targets to build the test for execution under TET ("all") and for standalone execution and debugging ("debug").

To build a standalone version of the test, you can do the following.

  1. Make sure the T2C_SUITE_ROOT environment variable is properly defined, the code generator is built and C or C++ source for the T2C file of interest has already been generated.

  2. Change current directory to $T2C_SUITE_ROOT/<test_dir>/tests/<testname>
    Example:
        cd $T2C_SUITE_ROOT/desktop-t2c/gmodule-t2c/tests/gmodule
    
  3. Type "make clean" and then "make debug" in the command line. This will build the standalone version of the test.

During the build process the compiler will use the tet_api.h file provided by the T2C Framework (it can be found in $T2C_ROOT/t2c/debug/include) instead of the one from TET. The object file of the test will be linked with dbgm.o and t2c_util_d.a from $T2C_ROOT/t2c/debug/lib instead of tcm.o and t2c_util.a. If you want, for example, to debug the test in an Eclipse project, specify this #include path and these linker input files in the build settings of the project.

Only a subset of the TET API is supported by the debug components, so avoid using the TET API directly from the T2C file. It is recommended to use the special macros defined by T2C (REQ, TRACE, ABORT_TEST_PURPOSE, etc.) instead.

Now the test can be executed.
Syntax:
    ./<testname> [-v] [IC_number]

-v
If this option is specified, the test will be executed in verbose mode. Note that the value of the TET variable named "VERBOSE" has no effect on standalone test execution; TET settings do not apply to it anyway.

IC_number
The number of the invocable component (IC) to execute. Each TET-compliant test contains at least one invocable component. (The number of the IC is specified as the 2nd field of the tet_testlist structure.) Each invocable component consists of one or more test purposes. In the C source of the test, each element of the tet_testlist array is a pair of a test purpose name and an IC number. See the TET documentation for details.
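
For illustration, the tet_testlist array in a generated C source might look roughly as follows (the test purpose names here are hypothetical):

    struct tet_testlist tet_testlist[] = {
        { tp_g_array_new_0, 1 },   /* test purpose function, IC number */
        { tp_g_array_new_1, 2 },
        { NULL, 0 }                /* end of the list */
    };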

If 'IC_number' is not specified, all invocable components will be executed.

Typically there is only one test purpose per invocable component generated by T2C, so the IC number will be unique for each test purpose in the C file. You may change the IC numbers in the tet_testlist array as you like. You can, for example, give several test purposes the same IC number, say, 999. Then execute

    ./<testname> 999

All these test purposes will be executed.

Do not change test purpose names listed in this array.

There is no TET journal for standalone test execution. All messages from the test go to stderr if verbose mode is on, or nowhere if it is off. tet_printf() does nothing in either case; use TRACE() and TRACE0() instead.

Examples:
    ./gmodule
(Runs all the invocable components defined in this executable. Verbose mode is off.)
    ./FcCharSet -v 17
(Turns verbose mode on and runs all the test purposes in invocable component number 17.)

To run the test under TET again, just rebuild this test:
    make clean
    make

Note that all the test purposes of a standalone test are executed in the same process to simplify debugging (some debuggers may not handle fork() calls properly by default).

The WAIT_TIME configuration parameter has no effect on standalone tests: they will not be interrupted, no matter how long they run. Otherwise they could be stopped in the middle of a debugging session (because their time had expired), which is probably not what you want.

Part II. T2C file format

1. Overview

Overall structure of a T2C file is shown below:
[Figure: T2C file structure]

The sections of a T2C file are marked up with special tags: <GLOBAL> ... </GLOBAL>, <CODE> ... </CODE>, etc. The code in these sections is plain C, although the test case code (the <CODE> section) can also contain placeholders for the test purpose parameters. The sets of such parameters are specified in the <PURPOSE> sections. The code generator will create a C function for each test purpose by substituting the actual parameter values for these placeholders. That is, the contents of the <CODE> section will be used as a template for the source code of the test.

[Figure: Test code generation]

Note that the T2C section tags (<GLOBAL>, <BLOCK>, etc.) should reside on separate lines in the file. Apart from the tags, these lines may contain only whitespace characters.

2. The header

Each T2C file should have a special header on the first non-blank line. The header usually looks like this:

#library    library_being_tested
#libsection library_section_being_tested
Example:
#library    libglib-2.0
#libsection Arrays

Here we specify the name of the library the interfaces under test belong to (#library), the name of the interface group that is tested (#libsection) and, optionally, additional requirement catalogues to be loaded (#additional_req_catalogues).

3. Comments

A line beginning with '#' is a comment (and is ignored by the generator) unless '#' is followed by "library", "libsection", "additional_req_catalogues" or a standard C preprocessor directive like "define", "undef", "line", "ifdef", etc.

Example:
# This is a comment in a T2C file.

4. Globals

#include directives required by the tests should be specified in a <GLOBAL> section that is located right after the T2C file header. The global data and functions the tests need should also be defined here.

Example:
<GLOBAL>
#include <glib-2.0/glib.h>

guint *some_global_data = NULL;

// GCompareFunc
// A comparison function for array elements (necessary for sorting)
gint array_cmp (gconstpointer a, gconstpointer b)
{
    if (a && b)
    {
        if (*((int*)a) < *((int*)b))
        {
            return -1;
        }
        if (*((int*)a) > *((int*)b))
        {
            return 1;
        }
    }
    return 0;
}

</GLOBAL>

5. Initialization and cleanup of global data

If it is necessary to initialize some global resources, the respective code should be placed in the <STARTUP> section. Similarly, to clean up these objects, place the appropriate code in the <CLEANUP> section.

Example:
<GLOBAL>
#include <atk/atk.h>
#include <AtkStreamableContent/MyAtkStreamableContent.h>
#include <useful_functions.h> 

AtkStreamableContent* obj = NULL;
</GLOBAL>

<STARTUP>
    g_type_init();
    OBJECT_NEW(obj, MY_TYPE_ATK_STREAMABLE_CONTENT, "MyAtkStreamableContent");
</STARTUP>

<CLEANUP>
    OBJECT_UNREF(obj);
</CLEANUP>

Do not use the <CLEANUP> section to release resources allocated in the test purposes rather than in <STARTUP>: this may cause resource leaks. Place such cleanup code in the <FINALLY> subsection of the test case block (see the description of <FINALLY> below).

Each test purpose is executed in a separate process. The code from the <STARTUP> and <CLEANUP> sections is executed in the parent process of the test purpose processes, and only once.
Each test purpose has its own copy of the global data, so it is pointless to try transferring data between different test purposes via global variables. Ideally, the execution of a test purpose should not affect any other test purpose.

6. Test case code template and parameters

Test case code template and parameters of the tests are specified in the <BLOCK> section.

6.1. Contents of <BLOCK> section

Layout of the <BLOCK> section:
<TARGETS> … </TARGETS>           
<DEFINE>  … </DEFINE>     // optional
<CODE>    … </CODE>       
<FINALLY> … </FINALLY>    // optional
<PURPOSE> … </PURPOSE>    // zero or more

You should place these subsections in the same order as they are listed above.

The <TARGETS> subsection contains the list of interfaces being tested in this test case, one per line. For example,

<TARGETS>
    g_array_set_size
    g_array_new
    g_array_sized_new
</TARGETS>

Often there will be a single target interface for each test.

The <CODE> subsection contains the code common to all test purposes of this <BLOCK>. Placeholders to be filled with the particular test purpose parameters can be specified in this code. For example,

    int nResult = 2 + <%0%> * <%1%>;

During the generation of a C function for a test purpose the generator will replace <%0%> and <%1%> with the values from the appropriate <PURPOSE> section.

[Figure: Test code generation]

The <PURPOSE> subsection specifies parameters of a particular test purpose (one parameter per line). The first of these parameters will replace <%0%> in the <CODE> section, the next one will be for <%1%> and so on. Up to 256 parameters are allowed (<%0%> - <%255%>).

Example:
<PURPOSE>
    2
    a+b
</PURPOSE>
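
With these parameters, the line from the <CODE> example above would presumably expand to

    int nResult = 2 + 2 * a+b;

Note that the substitution is purely textual, so parenthesize the parameter values where operator precedence matters.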

The <PURPOSE> subsection is optional. If no parameters are necessary for a test purpose, you may omit this subsection or specify it but leave it empty, like this:

<PURPOSE>
</PURPOSE>

Multiple <PURPOSE> sections are allowed in a <BLOCK>.

For each <PURPOSE> subsection a single C function (in fact, a test purpose from TET's point of view) will be generated using the template code specified in the <CODE> section.

In the <DEFINE> subsection you can list #define directives for the test purpose parameters as well as for any other symbols needed by the corresponding tests. The code generator will place these directives at the beginning of each generated test purpose function. The corresponding #undef directives will be inserted at the end of this function.

This feature can be used to replace the <%…%> placeholders with more readable symbolic names, which can be quite convenient.

Example:
<DEFINE>
#define QUANTITY <%0%>
#define PRICE    <%1%>
</DEFINE>
Now, instead of
    int nResult = 2 + <%0%> * <%1%>;
we can write the following in the <CODE> subsection:
    int nResult = 2 + QUANTITY * PRICE;
The <DEFINE> subsection is optional.
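
To illustrate how the pieces fit together, a generated test purpose function might look roughly like this (a simplified sketch; the function name is hypothetical and the real generated code contains additional TET housekeeping):

    static void tp_example(void)
    {
    #define QUANTITY 2
    #define PRICE    a+b
        int a = 3, b = 4;
        /* ... the code from the <CODE> subsection goes here: */
        int nResult = 2 + QUANTITY * PRICE;  /* expands to 2 + 2 * a+b */
        /* ... */
    #undef QUANTITY
    #undef PRICE
    }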

The code from the <FINALLY> subsection is always executed in the test purpose regardless of whether the requirement checks in REQs pass or fail. You can use this subsection to release the resources local to the test purpose, e.g. free previously allocated memory, close files, etc. (To release global test case resources, use the <CLEANUP> section described above.) The <FINALLY> subsection is optional.

The complete example of a <BLOCK> section with all its subsections is shown below. Usage of REQ, TRACE and other macros is explained later.

Example:
<BLOCK>

<TARGETS>
    g_array_remove_index_fast
</TARGETS>

<DEFINE>
#define INDEX       <%0%>
#define VALS        <%1%>
#define TYPE        <%2%>
</DEFINE>

<CODE>
    GArray *ga = NULL;
    GArray *new_ga = NULL;
    int old_len;
    int last_el;
    TYPE vals[] = VALS;

    ga = g_array_new(FALSE, TRUE, sizeof(TYPE));
    if (ga == NULL)
    {
        ABORT_TEST_PURPOSE("g_array_new() returned NULL.");
    }

    ga = g_array_append_vals(ga, vals, sizeof(vals) / sizeof(TYPE));
    if (ga == NULL)
    {
        ABORT_TEST_PURPOSE("g_array_append_vals() returned NULL.");
    }

    old_len = ga->len;
    last_el = g_array_index(ga, TYPE, old_len - 1);

    new_ga = g_array_remove_index_fast(ga, INDEX);

    /*
     * Returns the GArray.
     *
     * [The function returns a pointer to the modified GArray.]
     */
    REQ("g_array_remove_index_fast.03", "g_array_remove_index_fast returned NULL", new_ga);
    REQ("g_array_remove_index_fast.03", 
        "The returned GArray pointer does not match the original one.", 
        new_ga == ga);
    
    TRACE("The length of the array is %d (should be %d).", ga->len, old_len - 1);

    /*
     * Removes the element at the given index from a GArray.
     */

    REQ("g_array_remove_index_fast.01;g_array_remove_index_fast.08", "", ga->len = old_len - 1);

    if (INDEX < old_len - 1) /* unless the removed element was the last one */
    {
        /*
         * The last element in the array is used to fill in the space
         */

        REQ("g_array_remove_index_fast.02", 
            "The last element of the array did not fill in the space.", 
            g_array_index(ga, TYPE, INDEX) == last_el);
    }

</CODE>
<FINALLY>
    if (ga)
    {
        g_array_free(ga, TRUE);
    }
</FINALLY>

<PURPOSE>
    8
    {19, 89, -1, 8, 7, 190, 9, 10, 28, 56}
    int
</PURPOSE>
        
<PURPOSE>
    0
    {19, 89, -1, 8, 7, 190, 9, 10, 28, 56}
    int
</PURPOSE>
        
<PURPOSE>
    9
    {19, 89, -1, 8, 7, 190, 9, 10, 28, 56}
    int
</PURPOSE>

</BLOCK>

6.2. Attributes of <BLOCK> section

6.2.1. parentControlFunction

Each test purpose is executed in a separate child process of the main process (the "controller"), which is in turn executed by TET's Test Case Controller (tcc). The startup and cleanup code (from the <STARTUP> and <CLEANUP> sections), however, runs in the controller process.

While the test purpose runs, the controller process waits for it. If a positive number of seconds is specified in the WAIT_TIME configuration parameter, the controller waits no longer than this amount of time after which the test purpose will be terminated. The test result will be set to "TIME EXPIRED" in this case.

The <BLOCK> tag has an optional parentControlFunction attribute. Its value is the name of a function that will be called after the controller process has finished waiting for the current test to complete. If the test has been terminated by the controller process because its time expired, the "parent control function" (PCF) will NOT be called.

Example:
    <BLOCK parentControlFunction="my_pcf">

If this attribute is not specified, the default PCF will be used: t2c_def_pcf() defined in $T2C_ROOT/t2c/src/t2c_fork.c.

The "parent control function" has the following prototype:
    int my_pcf(pid_t returned_pid, int* status);

As its 1st argument it receives the value returned by the waitpid() call made in the controller process to suspend it until the child process finishes the test execution.

The 2nd argument ("status") is the child process status, also returned by waitpid() in the corresponding parameter.

The PCF decides whether the child process has completed the test execution successfully, that is, whether the test purpose function completed and the child process exited normally ("normally" means "not because of an unexpected signal"). If this is the case, the PCF should return TRUE, otherwise FALSE. If it returns FALSE, the test result will be set to UNRESOLVED (unless a result with a higher priority, FAIL for instance, has been set for this test before). If the PCF returns TRUE, the result code will remain unchanged.

A proper place for PCF definition is the <GLOBAL> section.

Normally, a PCF will rarely be needed. This functionality exists for those rare situations when the test completes successfully and yet waitpid() may return -1.
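
Below is a minimal sketch of a custom PCF (the acceptance policy here is just an illustration; TRUE and FALSE are assumed to be defined as 1 and 0):

    #include <sys/types.h>
    #include <sys/wait.h>

    int my_pcf(pid_t returned_pid, int* status)
    {
        /* waitpid() failed or no status is available. */
        if (returned_pid == (pid_t)(-1) || status == NULL)
        {
            return 0; /* FALSE: do not treat the test as completed */
        }

        /* Accept the result only if the child process exited normally,
         * i.e. was not killed by an unexpected signal. */
        return WIFEXITED(*status) ? 1 : 0;
    }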

7. REQ and other useful macros

The <CODE> subsection should contain at least one REQ(…) call for each marked-up requirement for the particular interface.

7.1. REQ

REQ macro is used to check requirements and report failures.

Syntax:
    REQ(<list_of_IDs>, <comment>, <expression>);
Example:
    REQ("fake.01;foo.04", 
        "Incorrect multiplication result", 
        ARG0 * ARG1 == nCorrect);

If <expression> is nonzero, the execution of the test purpose goes on. A message is also output to the TET journal indicating that the requirements with IDs listed in the 1st REQ argument have been checked.

Otherwise, i.e. if the requirement is violated, a different kind of message is output to the TET journal: it contains the list of the requirement IDs, the text of these requirements and the comment specified as the second REQ argument. The execution of the test purpose is aborted in this case and the result code is set to FAIL.

If execution of a test purpose has not been aborted due to a failed requirement check or some unexpected failure (for example, segmentation fault, glibc error, etc.), the test result code is set to PASS.

Note that
    REQ(<list_of_IDs>, <comment>, <expression>);
is equivalent to
    TRACE0(<comment>);
    REQ(<list_of_IDs>, "", <expression>);
(See the description of TRACE0 below.)

You can use the TODO_REQ() macro as the <expression> parameter for requirements whose checks are yet to be written. In this case no record goes to the TET journal and the execution of the test goes on as if the check had passed.
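
For example (the requirement ID here is hypothetical):

    REQ("foo.05", "", TODO_REQ());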

In contrast, REQs with TRUE (or any nonzero value) as the expression do add records to the TET journal as described above, even though such checks never fail. REQs like these can be used to report that a requirement is covered by the test even if it is satisfied automatically due to the way the test is organized.

If the REQ fails, the subsequent REQs in this test purpose WILL NOT be checked. The code in the <FINALLY> subsection (if present) will be executed and then the test purpose will terminate.

Sometimes more than one requirement is actually checked by a single REQ, and it can happen that we are unable to determine which requirement has failed when the <expression> evaluates to FALSE (i.e. 0). This is often the case for get_XXX() and set_XXX() functions: frequently the only way to check that set_XXX() has worked as needed is to call get_XXX() and compare the value it returns with the value we tried to set with set_XXX(). If the values do not match, we cannot really say whether it was get_XXX() or set_XXX() (or both) that went wrong.

In situations like this, we can specify the list of the corresponding requirement IDs in the 1st parameter of REQ.

Example:
    /*
     * If both key and group_name are NULL, then comment will be written
     * above the first group in the file.
     */
    TRACE("set_comment() was called for \"%s\", get_comment() returned \"%s\".",
          COMMENT,  ret_cmnt);
    REQ("g_key_file_set_comment.03;g_key_file_get_comment.03", 
        "", 
        is_comment_equal(COMMENT, ret_cmnt));

If the expression in this REQ is false the message output to the journal will say that at least one of the listed requirements has failed.

If the failed requirement is a requirement on the application using the interface being tested (i.e. a requirement with an "app."-prefixed ID), the displayed message indicates that there may be a bug in the test case itself (perhaps a test case developer error).

7.2. TRACE and TRACE0 - trace message output to the TET journal and/or stderr

You should use TRACE instead of printf (the syntax is the same except that TRACE takes at least one argument besides the format string):

    TRACE("The length of the array is %d (should be %d).", ga->len, old_len - 1);
instead of
    printf("The length of the array is %d (should be %d).", ga->len, old_len - 1);
TRACE0 should be used instead of calling printf() with a single argument:
    TRACE0(str);
rather than
    printf(str);

7.3. RETURN

This macro sets the result code to PASS (if no failure has occurred before in this test) and ends the test purpose.

Code in the <FINALLY> section will be executed anyway.
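
For example (a hypothetical fragment; feature_enabled stands for some condition computed earlier in the test purpose):

    if (!feature_enabled)
    {
        TRACE0("Nothing else to check in this configuration.");
        RETURN;
    }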

7.4. INIT_FAILED(<message>)

One can encounter a situation when an error is found during the execution of the startup function, and it makes no sense to execute the test case after that. (For instance, the initialization may have failed for some global user-defined object.)

In this case you should call INIT_FAILED("…") in the <STARTUP> section, providing an appropriate description of the failure as its argument. This description will be written to the TET journal. After that, no test purposes from this T2C file will be executed; all of them will be marked as UNINITIATED.

Example:
<STARTUP>
    g_type_init ();

    img = ATK_IMAGE (g_object_new (TEST_TYPE_SIMPLE_IMAGE, NULL));
    if (!img)
    {
        INIT_FAILED("Unable to create a TestSimpleImage instance.");
    }
</STARTUP>

7.5. ABORT_TEST_PURPOSE(<message>)

Use this macro to abort test purpose execution if something goes wrong (e.g. memory allocation fails for some local data, etc.). The specified message will go to the TET journal, the test purpose result will be set to UNRESOLVED and (after the execution of the <FINALLY> section) the test purpose will be terminated.

Example:
    ga = g_array_new(FALSE, TRUE, SIZE);
    if (ga == NULL)
    {
        ABORT_TEST_PURPOSE("g_array_new() returned NULL.");
    }

7.6. ABORT_UNSUPPORTED(<message>)

Use this macro in the test purpose to abort execution if the feature to be checked is not supported by the system under test. The specified message goes to the journal and should describe the situation, e.g. "Dynamic loading of modules is not supported". The result of this test purpose will be set to UNSUPPORTED.

Code in the <FINALLY> section will be executed anyway.
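
For example, a test purpose for GModule interfaces might begin with a check like this (a sketch; g_module_supported() is a GModule function, the fragment itself is hypothetical):

    if (!g_module_supported())
    {
        ABORT_UNSUPPORTED("Dynamic loading of modules is not supported.");
    }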

8. Using test data directories

8.1. Test data storage

One often needs to use external data to check some of the requirements. For example, the test will probably have to load data from a file.

It is recommended to place the data necessary for the tests in subdirectories of the testdata directory of the test suite. These subdirectories should have the same names as the corresponding T2C files, so the tests from each T2C file will look for their data in a separate directory.

To obtain the path to these data from the test code, one can use the T2C_GET_DATA_PATH(rel_path) macro.

Suppose we need to get the path to the file myfile.txt from a test whose source is in glib_key_parser.t2c. Let gkp-t2c be the path to the test suite subdirectory (relative to $T2C_SUITE_ROOT) and let T2C_SUITE_ROOT be /tmp/test.

In this case T2C_GET_DATA_PATH("myfile.txt") will return "/tmp/test/gkp-t2c/testdata/glib_key_parser/myfile.txt".

The returned pointer to the string should be freed when it is no longer needed.
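
For example (a sketch; it assumes the macro returns a dynamically allocated string, as the note above implies):

    char* path = T2C_GET_DATA_PATH("myfile.txt");
    FILE* fp = fopen(path, "r");
    if (fp == NULL)
    {
        free(path);
        ABORT_TEST_PURPOSE("Unable to open the test data file.");
    }
    /* ... read the test data ... */
    fclose(fp);
    free(path);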

8.2. Using testdata_src

Sometimes the data required for a test should be built first. For instance, the sample modules used to check the interfaces from GModule library need to be compiled and linked. Only after that they can be used in the tests.

The source code of data like this should be stored in the testdata_src/ subdirectory of the testdata/ directory. When the test suite is being built, the makefile from testdata_src will be executed. This makefile should define at least the following make targets:

After the test data is built, it should be copied to the appropriate subdirectories of testdata/ (see above). This should probably be done by the makefile from testdata_src/ itself.

9. Checking requirements with "ext"-prefixed IDs

To enable checking of the "ext" requirements ("external"/"extensional", i.e. those with "ext"-prefixed IDs), specify -DCHECK_EXT_REQS in the COMPILER_FLAGS parameter in the generator configuration file.

In this case the configuration file may look like this:
COMPILER=lsbcc
COMPILER_FLAGS=`pkg-config --cflags glib-2.0` -DCHECK_EXT_REQS
LINKER_FLAGS=`pkg-config --libs glib-2.0`
TET_SCEN_RECORD=no

Checking of the "ext"-requirements is disabled by default.