XUnit cookbook

XUnit is to XML-oriented Programming what JUnit is to Java programs : XUnit uses Active Tags as a simple framework to write repeatable tests. With XUnit, you can test and improve your own Active Tags programs as well as external XML processes such as XSLT transformations. You are also encouraged to use XUnit for testing Java programs that produce XML documents.

Quick start

A simpler tutorial is available here.

This section is a short cookbook showing you the steps you can follow in writing and organizing your own tests using Active Tags. The smartest way to achieve this is to define EXP modules and bind them to the processor instance that will run your test suite.

XUnit overview

XUnit aims to test your XML programs with scenarios of varying complexity. Once complete, your test suite can be run as an Active Tags program in order to produce a report of your tests.

XUnit is very close to JUnit, its counterpart testing framework for Java programs : you'll find <xunit:test-case>s, each containing one of the scenarios to test together with assertions that check whether the results obtained are those expected. As in JUnit, there are assertions that check basic types, such as <xunit:assert-boolean-equals>, <xunit:assert-number-equals> and <xunit:assert-string-equals>. Unlike JUnit, there are also assertions that check XML-related types, such as <xunit:assert-node-equals> and <xunit:assert-attributes-equals>. There is also a means of testing XSLT templates with the <xunit:apply-xslt-template> element.
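
For example, a minimal test case might look like the following sketch (the name, label and values are arbitrary ; a real test case would usually compute the result from the process under test) :

    <xunit:test-case name="my-first-test" label="A trivial test case">
        <!--compare a computed string with the string expected-->
        <xunit:assert-string-equals result="{ 'hello' }" expected="{ 'hello' }"/>
        <!--assert that an expression evaluates to true-->
        <xunit:assert-true result="{ 1 = 1 }"/>
    </xunit:test-case>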

Designing a test suite consists of :

  • writing each scenario with XUnit,
  • designing a set of input data (usually XML), some intended to succeed, others to fail,
  • writing the assertions that check successes or failures,
  • launching the test suite and optionally building an HTML report.

XUnit step by step

In this tutorial, we consider that you have to test a very simple process that consists of :

  • parsing an XML file,
  • applying an XSLT stylesheet,
  • saving the result to a file.

You intend to test and debug the Active Sheet that does the job (see below) and the XSLT stylesheet that performs the transformation.

The user application

The purpose of this first step is to build the user application to test. The actual test will take place in the next section.

Note

If you don't have to test an Active Tags application, but rather a single XSLT stylesheet, you can skip this section. If you have to connect several stylesheets/filters in a pipeline, you ought to use Active Tags and read this section.

As good practice, you designed the whole process in a separate module. If you don't know how to achieve this or don't understand what a module is, please refer first to the tutorial section. In these tutorials, have a look at the SQL example : it shows how 2 nearly identical scripts (the former invokable from the command line, the latter runnable within a Web application) can be advantageously rewritten as a single component (a module) that is invoked from a Web application as well as from a batch script.

If the functionality of your application is not defined in a module but rather in an active sheet, please have a look at the "test suite of the tutorials", which tests the batch scripts of the tutorial section and shows you how to test them.

Thus, your process, as a module, looks like this :

[doc/tutorial/xunit/po-module.exp]

<?xml version="1.0" encoding="iso-8859-1"?>
<exp:module version="1.0" target="acme" exp:enable-prefixes="xcl acme"
            xmlns:xcl="http://ns.inria.org/active-tags/xcl"
            xmlns:exp="http://ns.inria.org/active-tags/exp"
            xmlns:acme="urn:acme-purchase-orders">
    <!-- Usage :
        <acme:transform-purchase-order id="5678" output="file:///path/to/po.html"/>
    -->
    <exp:element name="acme:transform-purchase-order">
        <xcl:parse name="po"
            source="../basic/xslt/web/purchase-orders/{ value( $exp:params/@id ) }.xml"/>
        <xcl:transform output="{ value( $exp:params/@output ) }"
            stylesheet="po.xsl"
            source="{ $po }"/>
    </exp:element>
</exp:module>

This module just declares <acme:transform-purchase-order> as a "macro" element that will run the active tags within it when invoked. It accepts the @id and @output attributes ; a schema should be designed to ensure that these attributes are supplied when the element is used, but that's not the purpose here.

Your tests intend to prove that your tag <acme:transform-purchase-order> and the stylesheet used within are reliable in your application. In the real world, you would certainly have several more complex processes to expose as active tags. If you had to connect several stylesheets in a pipeline (and possibly other filters), you should refer to the Tips & Tricks section.

Your catalog, which refers to your module :

[doc/tutorial/xunit/po-catalog.cat]

<?xml version="1.0" encoding="iso-8859-1"?>
<cat:catalog xmlns:cat="http://ns.inria.org/active-catalog"
             xmlns:exp="http://ns.inria.org/active-tags/exp">
    <cat:resource name="urn:acme-purchase-orders"
                  uri="po-module.exp"
                  selector="exp:module"/>
</cat:catalog>

This catalog tells the engine that, when it encounters an active tag bound to the "urn:acme-purchase-orders" namespace URI, it should look for its implementation within the given file (that is to say, your module). The @selector attribute specifies, through a qualified name, that the file "po-module.exp" is a module, so the resource (the module) will be supplied only on module requests.

An example of an Active Sheet that uses your module (no comments) :

[doc/tutorial/xunit/po-active-sheet.xcl]

<?xml version="1.0" encoding="iso-8859-1"?>
<xcl:active-sheet xmlns:xcl="http://ns.inria.org/active-tags/xcl"
                  xmlns:acme="urn:acme-purchase-orders">
    <acme:transform-purchase-order output="output.xml" id="1234"/>
</xcl:active-sheet>

The active sheet above is invokable from the command line. Please refer to the tutorial section if you need to make it runnable as a Web application.

Open a console and, at the prompt, type the following command from the RefleX home directory (note that the "(line cut)" marker means that you MUST NOT insert a line break) :

 $ java -jar reflex-0.4.0.jar (line cut)
     -c doc/tutorial/xunit/po-catalog.cat (line cut)
     run doc/tutorial/xunit/po-active-sheet.xcl

The output file should have been created.

Designing the test suite

At this point, you have designed the application to test ; you will invoke your module within XUnit exactly in the same manner as in the Active Sheet above. You just have to connect your active tag to the input to test and compare the output produced with the output expected.

As your tests will be written in a separate module, you also have to write a launcher :

[doc/tutorial/xunit/run-tests.xcl]

<?xml version="1.0" encoding="iso-8859-1"?>
<xcl:active-sheet xmlns:xcl="http://ns.inria.org/active-tags/xcl"
                  xmlns:test="urn:acme-purchase-orders:test-suite">
    <test:all/>
</xcl:active-sheet>

The role of this launcher is to run all the test suites expected in your application. Here, a single test suite is defined, which is invoked with <test:all>.

Now, you have to write the module "urn:acme-purchase-orders:test-suite" for testing purposes ; it should look like this :

[doc/tutorial/xunit/use-cases.exp]

<?xml version="1.0" encoding="iso-8859-1"?>
<exp:module version="1.0" target="test" exp:enable-prefixes="xcl io test xunit acme"
            xmlns:exp="http://ns.inria.org/active-tags/exp"
            xmlns:xcl="http://ns.inria.org/active-tags/xcl"
            xmlns:io="http://ns.inria.org/active-tags/io"
            xmlns:test="urn:acme-purchase-orders:test-suite"
            xmlns:acme="urn:acme-purchase-orders"
            xmlns:xunit="http://reflex.gforge.inria.fr/xunit.html">
    <!-- Things to NOTICE :
        -target="test" : this is the definition of your tests, as a module
        -xmlns:acme="urn:acme-purchase-orders" : this is the module to test
    -->
    <!-- <test:all/> is the entry point of your tests -->
    <exp:element name="test:all">
        <!--run all tests-->
        <test:all-purchase-order/>
        <!--here is the place for other calls-->
        <!--merge all the reports to a single one-->
        <xunit:merge-reports name="Summary of ACME tests"
            source="{ io:file( '.' ) }"
            output="{ io:file( 'po-error-report.xml' ) }"/>
    </exp:element>
    <exp:element name="test:all-purchase-order">
        <test:purchase-order name="report-ok"
            output-expected="output-expected-ok.xml"
            label="Purchase order transformation"/>
        <!--we intentionally cause a failure on the 2 following tests-->
        <test:purchase-order name="report-bad-element-name"
            output-expected="output-expected-bad-element-name.xml"
            label="Purchase order transformation with a bad element in the output expected"/>
        <test:purchase-order name="report-bad-attributes"
            output-expected="output-expected-bad-attributes.xml"
            label="Purchase order transformation with bad attributes in the output expected"/>
    </exp:element>
    <!-- Usage :
        <test:purchase-order
            name=[name of the test]
            label=[label of the test]
            output-expected=[a file name]
        />
    -->
    <exp:element name="test:purchase-order">
        <xunit:test-case name="{ value( $exp:params/@name ) }"
                         label="{ value( $exp:params/@label ) }">
            <!--run the active tag to test-->
            <acme:transform-purchase-order output="result.xml" id="1234"/>
            <!--retrieve the result and the output expected-->
            <xcl:parse name="po" source="result.xml"/>
            <xcl:parse name="oe" source="{ value( $exp:params/@output-expected ) }"/>
            <xcl:echo value="This text is captured by the report"/>
            <!--check if they are equals-->
            <xunit:assert-node-equals result="{ $po }" expected="{ $oe }" recurse="true"/>
        </xunit:test-case>
        <!--other test cases here-->
    </exp:element>
    <!--other test definitions here-->
</exp:module>

As expected, the <test:all> element launches some tests ; here, there is a single one related to purchase orders : <test:all-purchase-order>. This element invokes the same test 3 times ; only the first test will succeed ; the 2 others contain intentional errors, just to see what we'll get in the error report. After running the tests, the normal workflow would be to check your input data, check your expected output, and check your programs (Active Sheet or XSLT or both), then correct what is wrong and test again until no more failures are reported : when the expected output matches the output produced, you can trust your module. Here, we have introduced 2 wrong outputs to see what the report gives.

Note

If you just have to test a single XSLT stylesheet, you can replace <acme:transform-purchase-order> with the <xcl:transform> active tag or with any sequence of tags to test. Things related to the "acme" module can then be discarded.
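
For instance, here is a minimal sketch of a test case that drives a stylesheet directly (the file names are placeholders) :

    <xunit:test-case name="xslt-only" label="Direct test of a stylesheet">
        <!--transform the input with the stylesheet to test-->
        <xcl:parse name="input" source="input.xml"/>
        <xcl:transform output="result.xml" stylesheet="my-stylesheet.xsl" source="{ $input }"/>
        <!--compare the result with the output expected-->
        <xcl:parse name="result" source="result.xml"/>
        <xcl:parse name="expected" source="output-expected.xml"/>
        <xunit:assert-node-equals result="{ $result }" expected="{ $expected }" recurse="true"/>
    </xunit:test-case>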

<test:purchase-order> defines the actual test ; the process to test is the custom tag <acme:transform-purchase-order>, which is run within the <xunit:test-case> element in order to set the boundaries of the test and create a test report. As the "acme" custom tag is intended to produce an XML document, we just have to parse the result obtained and compare it with the expected output using the <xunit:assert-node-equals> element. Of course, other assertions are available if other kinds of results have to be checked (please refer to the module reference).

The input to test is one of the purchase orders used in the XSLT tutorial. This input is parsed and transformed with XSLT to another XML file. The expected output is as follows :

[doc/tutorial/xunit/output-expected-ok.xml]

<?xml version="1.0" encoding="UTF-8"?>
<purchase-order id="1234">
    <item currency="dollar" quantity="1" price="138.95" part-number="321">Lawnmower</item>
    <item currency="dollar" quantity="2" price="29.99" part-number="654">Baby monitor</item>
    <item currency="euro" quantity="3" price="11.27" part-number="987">Roquefort Cheese</item>
</purchase-order>

To see the tests fail, 2 other expected outputs are introduced with the following errors :

  • the former doesn't have the right root element : <purchase:order> (with a namespace URI) instead of <purchase-order>,
  • the latter has bad attributes in its <item> elements : the first omits the @currency attribute, the second has a @currency attribute with a bad value, the third defines the extra attribute @country.

The last piece of the puzzle needed to make all that runnable is the catalog that declares the required resources : your test suite, your "acme" module to test (for which you already defined a catalog) and the XUnit stuff (for which a pre-defined catalog already exists) :

[doc/tutorial/xunit/use-cases.cat]

<?xml version="1.0" encoding="iso-8859-1"?>
<cat:catalog xmlns:exp="http://ns.inria.org/active-tags/exp"
             xmlns:cat="http://ns.inria.org/active-catalog">
    <!--your test suite, as a module-->
    <cat:resource name="urn:acme-purchase-orders:test-suite"
                  uri="use-cases.exp"
                  selector="exp:module"/>
    <!--refers to the XUnit catalog-->
    <cat:next-catalog catalog="res:///org/inria/ns/reflex/util/xunit/xunit.cat"/>
    <!--your catalog that defines the module to test-->
    <cat:next-catalog catalog="po-catalog.cat"/>
</cat:catalog>

As all the resources used here are independent, the order in which they are declared doesn't matter.

Don't forget to refer to XUnit's own catalog, otherwise your test suite won't run. Your catalog definition should contain the following line :

  <cat:next-catalog catalog="res:///org/inria/ns/reflex/util/xunit/xunit.cat"/>

If you don't insert the reference to the next catalog, you can alternatively run the test suite by repeating the "-c" option on the command line interface, in order to tell the engine to use several catalogs, as shown below.

Running XUnit

Before running your tests, ensure that you have downloaded XUnit ; XUnit is packed in a separate jar file which is included in the "full binary release" and the "source release". If you have downloaded the "minimal binary release", please download the XUnit application (a single jar file). In any case, ensure that the XUnit jar file is on your classpath together with the RefleX jar file, as explained in the install section.

Running your test suite from the command line (to be launched from the RefleX home directory) :

 $ java -cp reflex-0.4.0.jar:xunit-0.4.0.jar (line cut)
     org.inria.ns.reflex.ReflexCLI -c doc/tutorial/xunit/use-cases.cat (line cut)
     run doc/tutorial/xunit/run-tests.xcl

If your catalogs were not linked with the <cat:next-catalog> element, you could also specify all the catalogs to use like this :

 $ java -cp reflex-0.4.0.jar:xunit-0.4.0.jar (line cut)
     org.inria.ns.reflex.ReflexCLI (line cut)
     -c doc/tutorial/xunit/use-cases.cat (line cut)
     -c doc/tutorial/xunit/po-catalog.cat (line cut)
     -c res:///org/inria/ns/reflex/util/xunit/xunit.cat (line cut)
     run doc/tutorial/xunit/run-tests.xcl

When the tests end, a summary is displayed to the standard output :

Test suite    : Summary of ACME tests
   Test cases : 3 (88)
   Errors     : 2 (5)
   Failures   : 0 (0)
   Files      : 3

The number of test cases, errors, and failures is followed (in parentheses) by the corresponding number of atomic tests, atomic errors, and atomic failures.

Checking the error reports

Each test case run produces an error report file named after the test case, like this : "[test-case-name]-err.xml".

As our test has been launched 3 times, we get 3 reports :

  • [doc/tutorial/xunit/report-ok-err.xml]

    <?xml version="1.0" encoding="UTF-8"?>
    <test-case errors="0" failures="0" label="Purchase order transformation"
               name="report-ok" skip="0" tests="40">
        <sysout>This text is captured by the report</sysout>
        <syserr/>
    </test-case>
    When no error or failure is found, the report is just a summary. Notice the <sysout> element that contains the text that was echoed to the standard output.
  • [doc/tutorial/xunit/report-bad-element-name-err.xml]

    <?xml version="1.0" encoding="UTF-8"?>
    <test-case errors="2" failures="0"
               label="Purchase order transformation with a bad element in the output expected"
               name="report-bad-element-name" skip="0" tests="7">
        <nodes expected="/" result="/">
            <nodes expected="/purchase:order[1]" result="/purchase-order[1]"
                   xmlns:purchase="urn:acme-purchase-order">
                <error type="Local name comparison">
                    Expected "order" but was "purchase-order"
                </error>
                <error type="Namespace URI comparison">
                    Expected "urn:acme-purchase-order" but was ""
                </error>
            </nodes>
        </nodes>
        <sysout>This text is captured by the report</sysout>
        <syserr/>
    </test-case>
    When errors or failures occur, the detail of each problem is reported. In this case, as we have compared XML documents, the canonical paths of the faulty nodes are reported, which allows you to locate where the problem occurred. When comparing XML documents, the content of the elements is checked only if the names match (which is not the case here).
  • [doc/tutorial/xunit/report-bad-attributes-err.xml]

    <?xml version="1.0" encoding="UTF-8"?>
    <test-case errors="3" failures="0"
    label="Purchase order transformation with bad attributes in the output expected"
    name="report-bad-attributes" skip="0" tests="41"> <nodes expected="/" result="/"> <nodes expected="/purchase-order[1]" result="/purchase-order[1]"> <nodes expected="/purchase-order[1]/item[1]" result="/purchase-order[1]/item[1]"> <error type="Attribute existence"> Unexpected attribute currency="dollar" </error></nodes> <nodes expected="/purchase-order[1]/item[2]" result="/purchase-order[1]/item[2]"> <error type="Attribute value comparison"> @currency : expected "$" but was "dollar" </error></nodes> <nodes expected="/purchase-order[1]/item[3]" result="/purchase-order[1]/item[3]"> <error type="Attribute existence"> Missing attribute country="france" </error></nodes></nodes></nodes> <sysout>This text is captured by the report </sysout> <syserr/></test-case>
    Various errors on attributes have been introduced intentionally in this third example. The extra test (compared to the number of tests of the first example) is due to the extra attribute, which has been reported here as an error.

The <xunit:merge-reports> element allows you to merge several reports into a single one with a summary. In our example, the report is :

[tutorial/xunit/po-error-report.xml]

<?xml version="1.0" encoding="UTF-8"?>
<test-suite errors="2" errors-detail="5" failures="0" failures-detail="0"
            name="Summary of ACME tests" reports="3" skip="0" test-cases="3" tests="88">
    <!--here are merged the 3 documents above-->
</test-suite>

A simple XSLT stylesheet can display a nice HTML version of the report. The one used for publishing the RefleX documentation gives :

[tutorial/xunit/po-error-report.xml]

 
 Skip | Test name                                                                 | Tests  | Errors | Failures
 0    | Summary of ACME tests                                                     | 3 (88) | 2 (5)  | 0 (0)
 1/3  | Purchase order transformation with bad attributes in the output expected | 41     | 3      | 0
 2/3  | Purchase order transformation with a bad element in the output expected  | 7      | 2      | 0
 3/3  | Purchase order transformation                                             | 40     | 0      | 0

In the summary row, the number of test cases, errors, and failures is followed (in parentheses) by the corresponding number of atomic tests, atomic errors, and atomic failures.

XUnit use cases

Testing a single template of a stylesheet

It might sometimes be useful to test each template of an XSLT stylesheet separately. The <xunit:apply-xslt-template> element has been designed for this purpose. It can be used anywhere an action is expected.

    <!--the input to test-->
    <xcl:parse name="document" source="file:///path/to/file.xml"/>
    <!--test a single template-->
    <xunit:test-case name="xslt-template-test">
        <!--the template that matches the node selected will be applied-->
        <xunit:apply-xslt-template node="{ $document/node/to/test }"
            output="file:///path/to/output.txt"
            stylesheet="file:///path/to/file.xsl"/>
        <!--check the fragment result-->
        <xcl:parse name="output" source="file:///path/to/output.txt"/>
        <xcl:parse name="output-expected" source="file:///path/to/output-expected.txt"/>
        <xunit:assert-node-equals expected="{ $output-expected }" result="{ $output }"/>
    </xunit:test-case>
    <!--another test-->
    <xunit:test-case name="xslt-template-another-test">
        <xunit:apply-xslt-template node="{ $document/another/node/to/test }"
            output="file:///path/to/output.txt"
            stylesheet="file:///path/to/file.xsl"/>
        <!--check the result-->
        <xcl:parse name="output" source="file:///path/to/output.txt"/>
        <xcl:parse name="output-expected" source="file:///path/to/output-expected.txt"/>
        <xunit:assert-node-equals expected="{ $output-expected }" result="{ $output }"/>
    </xunit:test-case>
    <!--and so on...-->

Make sure you pass absolute URLs to the <xunit:apply-xslt-template> element. If you have relative ones, you can refer to this tip.

Testing a failure

In some applications, one must ensure that, under certain circumstances, an operation does fail. Thus, the failure denotes a test success, and the absence of failure denotes a test error. Here is a code snippet that can be used for this purpose :

    <!--this operation MUST fail-->
    <do:something expected="to-fail"/>
    <!--if this point is reached safely, an error is reported-->
    <xunit:error message="This test must fail but it didn't." type="Failure test"
        xcl:if="{ not( $xcl:error ) }"/>
    <!--catch all troubles-->
    <xcl:fallback>
        <!--the failure is reported as a success-->
        <xunit:assert-true result="{ true() }"/>
        <xcl:echo value="Failure trace : { $xcl:error }"/>
    </xcl:fallback>

Testing partial results

Using <xunit:assert-node-equals> when the result is known in advance is very convenient, but sometimes the expected result is not fully predictable and may vary while still being a success. For example, when using the generate-id() function in XSLT, the computed result may differ from the expected output because they haven't been produced with the same XSLT processor.

There are 2 workarounds for this :

  • test the structure and perform a validation against a schema instead of a comparison,
  • filter the nodes that might vary with an XCL filter, and compare them after filtering (a sketch follows).
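
As an illustration of the second approach, here is a minimal sketch that normalizes both documents before comparing them ; the stylesheet name "strip-generated-ids.xsl" is hypothetical, and an XSLT transformation is used here in place of an XCL filter :

    <!--strip the parts that may vary (generated ids, timestamps...) from both documents-->
    <xcl:parse name="result" source="result.xml"/>
    <xcl:parse name="expected" source="output-expected.xml"/>
    <xcl:transform output="result-filtered.xml" stylesheet="strip-generated-ids.xsl" source="{ $result }"/>
    <xcl:transform output="expected-filtered.xml" stylesheet="strip-generated-ids.xsl" source="{ $expected }"/>
    <!--compare the filtered documents-->
    <xcl:parse name="result-filtered" source="result-filtered.xml"/>
    <xcl:parse name="expected-filtered" source="expected-filtered.xml"/>
    <xunit:assert-node-equals result="{ $result-filtered }" expected="{ $expected-filtered }" recurse="true"/>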

More XUnit examples

You can have a look at the test suite that RefleX uses for its own testing, and at those involving more complex scenarios in the XML-oriented Programming section.

XUnit ingredients

XUnit : XUnit module
XUnit namespace URI : http://reflex.gforge.inria.fr/xunit.html
Usual prefix : xunit
Elements :

  • <xunit:test-case>
  • <xunit:assert-boolean-equals>
  • <xunit:assert-true>
  • <xunit:assert-false>
  • <xunit:assert-number-equals>
  • <xunit:assert-value-equals>
  • <xunit:assert-string-equals>
  • <xunit:assert-string-starts-with>
  • <xunit:assert-node-equals>
  • <xunit:assert-attributes-equals>
  • <xunit:assert-node-name-equals>
  • <xunit:assert-content-equals>
  • <xunit:assert-equals>
  • <xunit:error>
  • <xunit:failure>
  • <xunit:skip>
  • <xunit:comment>
  • <xunit:apply-xslt-template>
  • <xunit:merge-reports>

Foreign attributes :

  • @xunit:version

Legend for the attribute values used in the element descriptions below :

  • runtime : must be an adt:expression that computes an object of the type expected.
  • hard-coded : must be a hard-coded value (literal).
  • both : can be either a hard-coded value or an adt:expression.
  • optional : this material may be missing.
  • default value : denotes a value to use by default.

XUnit requirements

XUnit is a module for RefleX.

In order to make XUnit runnable, follow these guidelines.

Refer to the XUnit application : you can specify it either in your own catalog file by adding the following entry :

<cat:next-catalog catalog="res:///org/inria/ns/reflex/util/xunit/xunit.cat"/>

or by the command line interface :

 $ java -cp reflex-0.4.0.jar:xunit-0.4.0.jar (line cut)
     org.inria.ns.reflex.ReflexCLI run /path/to/my-tests.xcl (line cut)
     -c /path/to/my-catalog.cat (line cut)
     -c res:///org/inria/ns/reflex/util/xunit/xunit.cat

Other catalogs can be added with the -c option if necessary ; you don't need to supply the engine with a catalog for common Active Tags modules (the engine uses a bootstrap catalog that declares the required built-in modules).

XUnit/WUnit version

You can display the version with this :

 $ java -cp reflex-0.4.0.jar:xunit-0.4.0.jar (line cut)
    org.inria.ns.reflex.util.Version org.inria.ns.reflex.util.wunit.XClient

You could also read the version from the name of the jar file, but if the file has been renamed, the command above remains useful.

XUnit elements

<xunit:test-case>

Set the boundaries of a test case. Each assertion defined within it that fails is reported as an error. If the test case fails to run, a failure is reported.

Attributes :

  • name (both) : #xs:string. The name of the test case. As it is used for building the file name of the report, it can contain "/" in order to write it to subdirectories.
  • label (both) : #xs:string. A short description of the test case.

Content : ( <xunit:assert-*>[STATIC] | <*:*>[DYNAMIC] )*


<xunit:assert-boolean-equals>

Assert that 2 booleans are equal. Each argument is converted to a boolean before testing.

Report an error if the boolean value of the first argument is different from the boolean value of the second one.

Attributes :

  • result (runtime) : an object. The computed result.
  • expected (runtime) : an object. The result expected.
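
A usage sketch (assuming a $po variable that holds a parsed purchase order, as in the tutorial above) :

    <!--both arguments are converted to booleans before the comparison-->
    <xunit:assert-boolean-equals result="{ count( $po/purchase-order/item ) > 0 }" expected="{ true() }"/>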

<xunit:assert-true>

Assert that a boolean is true. The argument is converted to a boolean before testing.

Report an error if the boolean value of the argument is false.

Attributes :

  • result (runtime) : an object. The argument to test.

<xunit:assert-false>

Assert that a boolean is false. The argument is converted to a boolean before testing.

Report an error if the boolean value of the argument is true.

Attributes :

  • result (runtime) : an object. The argument to test.

<xunit:assert-number-equals>

Assert that 2 numbers are equal. Each argument is converted to a number before testing.

Report an error if the number value of the first argument is different from the number value of the second one.

Attributes :

  • result (runtime) : an object. The computed result.
  • expected (runtime) : an object. The result expected.
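
A usage sketch (assuming the $po variable of the tutorial above) :

    <!--both arguments are converted to numbers before the comparison-->
    <xunit:assert-number-equals result="{ count( $po/purchase-order/item ) }" expected="{ 3 }"/>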

<xunit:assert-value-equals>

Assert that the values of 2 objects are equal. The value() function is applied to each argument before testing.

Report an error if the value of the first argument is different from the value of the second one.

Attributes :

  • result (runtime) : an object. The computed result.
  • expected (runtime) : an object. The result expected.

<xunit:assert-string-equals>

Assert that 2 strings are equal. Each argument is converted to a string before testing.

Report an error if the string value of the first argument is different from the string value of the second one.

Attributes :

  • result (runtime) : an object. The computed result.
  • expected (runtime) : an object. The result expected.
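
A usage sketch (assuming the $po variable of the tutorial above) :

    <!--both arguments are converted to strings before the comparison-->
    <xunit:assert-string-equals result="{ value( $po/purchase-order/@id ) }" expected="{ '1234' }"/>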

<xunit:assert-string-starts-with>

Assert that a string starts with a given string. Each argument is converted to a string before testing.

Report an error if the string value of the first argument doesn't start with the string value of the second one.

Attributes :

  • result (runtime) : an object. The computed result.
  • expected (runtime) : an object. The result expected.

<xunit:assert-node-equals>

Assert that 2 nodes are equal.

Report an error if they are not of the same type or if they don't have the same name. The attributes and the content, if any, are also checked.

"XML diff" and XUnit

XUnit is not an "XML diff"-like tool. Specifically, it will compare the content of 2 elements only if their names and attributes match ; otherwise, the 2 next nodes will be compared.

Attributes :

  • result (runtime) : #xml:node. The computed result.
  • expected (runtime) : #xml:node. The result expected.
  • recurse (both, optional) : indicates whether the elements within have to be tested or not. In any case, the recursion is done only if the names match.
      - "true" (#xs:string, default value) : the elements within are tested.
      - "false" : the elements within are tested on their names, but neither on their content nor on their attributes.
  • ignore (both, optional) : indicates which kinds of content must be skipped.
      - missing attribute (default value) : the content of the elements is tested node by node.
      - "comment" (#xs:string) : the comments are skipped.
      - "PI" : the processing instructions are skipped.
      - "spaces" : the white-spaces are skipped.
      - several values from the above list, separated with blanks (#xs:string).
  • normalize (both, optional) : indicates whether the text nodes have to be normalized or not.
      - "true" (#xs:string) : the text nodes are normalized before comparison.
      - "false" (default value) : the text nodes are compared as-is.

Note

The flags are not yet implemented in this version.


<xunit:assert-attributes-equals>

Assert that all attributes of 2 elements are equal.

Report an error if an attribute defined in the result is not defined in the expected element, and vice-versa. Report an error if an attribute defined in the result doesn't have the same value as the one defined in the expected element.

The @xml:base attribute is ignored.

Attributes :

  • result (runtime) : #xml:element. The computed result.
  • expected (runtime) : #xml:element. The result expected.
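
A usage sketch (assuming the $po and $oe variables of the tutorial above) :

    <!--compare the attributes of the two root elements, ignoring their content-->
    <xunit:assert-attributes-equals result="{ $po/purchase-order }" expected="{ $oe/purchase-order }"/>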

<xunit:assert-node-name-equals>

Assert that 2 elements have the same qualified name.

Report an error if the local names are different. Report an error if the namespace URIs are different.

Attributes :

  • result (runtime) : #xml:node. The computed result.
  • expected (runtime) : #xml:node. The result expected.

<xunit:assert-content-equals>

Assert that 2 nodes have the same content.

Report an error for each node in the content that doesn't match.

Attributes :

  • result (runtime) : #xml:node. The computed result.
  • expected (runtime) : #xml:node. The result expected.

<xunit:assert-equals>

Assert that 2 objects are equal.

Report an error if they are not equal.

Attributes :

  • result (runtime) : an object. The computed result.
  • expected (runtime) : an object. The result expected.

<xunit:error>

Report an error.

Attributes :

  • type (both) : #xs:string. The type of the error.
  • message (both) : #adt:list of #xml:node (the XML message of the error) or #xs:string (the text message of the error).

<xunit:failure>

Report a failure.

Attributes :

  • message (both) : #xs:string. The message of the failure.

<xunit:skip>

Skip a test.

Under certain circumstances, a test can be reported as "skipped" thanks to this active tag. This can occur when special conditions are met, when a connection to a database can't be established, or simply on user request.

The test case that contains this instruction is reported as "skipped".

Attributes :

  • message (both) : #xs:string. The reason that explains why the host test case has been skipped.
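
A usage sketch ; the $database-available variable is hypothetical, and the @xcl:if guard is used here as in the "Testing a failure" example above :

    <!--skip the enclosing test case when the database cannot be reached-->
    <xunit:skip message="No connection to the database" xcl:if="{ not( $database-available ) }"/>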

<xunit:comment>

Report a comment in the test case.

Attributes :

  • message (both) : #xs:string. The comment to report in the test case.
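
A usage sketch :

    <!--report a free comment in the enclosing test case-->
    <xunit:comment message="Checking the purchase order transformation"/>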

<xunit:apply-xslt-template>

Apply a single template from a stylesheet.

This action doesn't report an error, it just applies the stylesheet to the node selected.

Attributes :

  • stylesheet (runtime) : #xs:anyURI. The absolute URI of the stylesheet.
  • node (runtime) : #xml:node. The node to which the stylesheet will be applied.
  • mode (both, optional) : indicates which mode of the template has to be applied.
      - missing attribute (default value) : no mode specified.
      - #xs:QName : the name of the mode of the template to apply.
  • output (runtime) : #xs:anyURI. The URI of the output file where the result will be serialized.

<xunit:merge-reports>

Merge a set of XUnit reports into a single file with a summary.

This action doesn't report an error, it is just a convenient post-process tool.

Attributes :

  • name (runtime) : #xs:string. The name of the global report.
  • source (runtime) : #xs:anyURI. The URI of the directory where the report files are located. The subdirectories will be browsed. The files must be named "*-err.xml".
  • output (runtime) : #xs:anyURI. The URI of the output file where the global report will be serialized.

Foreign attributes

@xunit:version

  • Priority : 0

The version of the XUnit module to use. This attribute should be encountered before any XUnit element, but it takes precedence over the element inside which it is hosted.

Report structure

Each report contains a <test-case> element that summarizes the tests launched in the following attributes :

  • @tests indicates the number of atomic tests performed. An atomic test consists of testing a single assertion on a primitive type (string, boolean, number). A complex test consists of several other tests. For example, comparing 2 XML documents is a complex test because each node will be checked.
  • @errors indicates the number of assertions that weren't true.
  • @failures indicates the number of test cases that failed to run. A single report should contain 1 or 0, but if several reports are merged, a greater number could be reported.
  • @name is the name of the test, and @label a short description.

Within the <test-case> element, other elements are used for reporting various information :

  • <sysout> : when some text is echoed to the standard output
  • <syserr> : when some text is echoed to the standard error output
  • <skip> : when using the <xunit:skip> element
  • <comment> : when using the <xunit:comment> element
  • <error> : when an error is reported ; it contains a @type attribute and a text content.
  • <nodes> : if some nodes are involved, several nested <nodes> elements can wrap one or several <error> elements. The canonical paths of the expected node and of the node obtained are indicated in the @expected and @result attributes of the <nodes> elements.

<test-suite> is used as the root element when running <xunit:merge-reports>. It summarizes the number of tests, errors, and failures that occurred in the following attributes :

  • @name : the name of the test suite
  • @reports : the number of report files
  • @test-cases : the number of test cases
  • @tests : the number of all tests performed
  • @skip : the number of test cases skipped
  • @errors : the number of test cases that report at least one error
  • @errors-detail : the number of errors reported in all test cases
  • @failures : the number of test cases that report at least one failure
  • @failures-detail : the number of failures reported in all test cases

Examples :

HTML report

A default XSLT stylesheet is bundled with XUnit ; you can refer to it for creating an HTML report provided that xunit-0.4.0.jar has been added to your classpath :

  <!--create an HTML version of the report-->
  <xcl:transform output="report-err.html" source="report-err.xml"
      stylesheet="res:///org/inria/ns/reflex/util/xunit/html-report.xsl"/>

Leave the stylesheet reference as-is, but you can change the source and the output to whatever is relevant for your test suite.

Background

Testing Java classes with XUnit

You might find it useful to test your own Java classes or components that are related to XML (for example, because the output produced is an XML file, a DOM document, or a SAX stream). You might also find it useful to test your own applications that use tools not yet supplied in RefleX, such as your favorite XQuery engine.

Whatever the Java program you have to test with XUnit, the first thing to do is to expose your component as active tags. For this purpose, please refer to the "How-To" section.

For example, if you want to design a wrapper for a given XQuery engine, just have a look at an existing example (say, the Neko HTML parser) and adapt it to the interface to use.

Just remember that you have to create a module that binds your Java wrapper to your new tag, and a catalog that declares that module. Please refer to the "How-To" section for this purpose.

Behind XUnit

XUnit is an Active Tags application, available as a module, and made completely of tags. As usual, a catalog contains the mappings for this module.

Testing XUnit with XUnit

The tests intend to cover the viability of the assertions involving nodes (elements, attributes), which is achieved thanks to primitive tests (<xunit:assert-number-equals>).

Testing XUnit with XUnit requires a special arrangement : one must separate the actions to test from the assertions to perform ; however, both use XUnit instructions. To achieve this, the actions to test are run outside the <xunit:test-case> element.

Instead of serializing a report, it is kept in memory as a DOM document ; the numbers of tests, errors, and failures are then checked against the expected values.

The test cases and the results are available in the Unit tests for RefleX.

Testing Web applications with XUnit

Before 0.3.1, the capabilities of XUnit were restricted to batch processes. XUnit is now relevant for testing a business model as well as for testing a Web application.

A new specific module has been designed for testing the navigation flow of Web applications : WUnit.