
digitalmars.D - Unit tests in D

reply bearophile <bearophileHUGS lycos.com> writes:
dmd D 2.045 improves the built-in unit tests by resuming their run when they
fail (it reports only the first failed assert for each unit test).

There are many features that today a professional unittest system is expected
to offer; I could write a long list. But in the past I have explained that it's
a wrong idea to try to implement all those things in dmd.

So a good solution that has all the advantages is:
- To add to dmd the "core" features that are both important and hard to
implement nicely in an external library or IDE (or to make D more flexible so
that writing such libs is possible, but this can be not easy).
- To add to dmd the compile-time reflection, run-time reflection or hooks that
external unittest libs/IDEs can use to extend the built-in unit testing
functionality.

It's not easy to find such core features (ones that can be used by an IDE, but
are usable from the normal command line too). This is my first try, and I can
be wrong. Feel free to add items you think are necessary, or to remove items
you know can be implemented nicely in an external library. Later I can write an
enhancement request.

---------------------

1) It's very useful to have a way to catch static asserts too, because
templates and other things can contain static asserts, which for example can
be used to test that input types or constants are correct. When I write
unittests for those templates I want to test that they actually assert
statically when I use them in a wrong way.

A possible syntax (this has to act like static assert):
static throws(foo(10), StaticException1);
static throws(foo(10), StaticException1, StaticException2, ...);

A version that catches run-time asserts (this has to act like asserts):
throws(foo(10), Exception1);
throws(foo(10), Exception1, Exception2, ...);


There are ways to partially implement this for run-time asserts, but only
badly:

import std.conv : text;  // for the error message

void throws(Exceptions, TCallable, string filename=__FILE__, int line=__LINE__)
           (lazy TCallable callable) {
    try
        callable();              // evaluated lazily, inside the try block
    catch (Exception e) {
        if (cast(Exceptions)e !is null)
            return;              // the expected exception type was thrown
    }

    assert(0, text(filename, "(", line,
           "): doesn't throw any of the specified exceptions."));
}
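For reference, a call under that signature looks like this minimal sketch
(MyError and mustBePositive are invented names, and the snippet assumes the
throws() helper above is in scope):

class MyError : Exception { this(string msg) { super(msg); } }

int mustBePositive(int x) {
    if (x <= 0)
        throw new MyError("not positive");
    return x;
}

unittest {
    // passes because mustBePositive(-1) throws a MyError;
    // any other outcome trips the assert(0) inside throws()
    throws!(MyError)(mustBePositive(-1));
}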


If that syntax is asking too much, an intermediate solution can be acceptable,
like the following (but the point of this list is to enumerate important things
that are not easy to implement in an external library):

static throws!(StaticException1)(foo(10));
static throws!(StaticException1, StaticException2)(foo(10));

throws!(Exception1)(foo(10));
throws!(Exception1, Exception2)(foo(10));

---------------------

2) Names for unittests. Giving names to things in the universe is a first
essential step if you want to try to understand some part of it. The compiler
makes sure that in each module two unittests don't share the same name. An
example:

int sqr(int x) { return 2 * x; }

/// asserts that it doesn't return a negative value
unittest(sqr) {
    assert(sqr(10) >= 0);
    assert(sqr(-10) >= 0);
}

---------------------

3) Each unittest error has to report the (optional) name of the unit test it
is contained in. For example:

test4(sqr,6): unittest failure

---------------------

4) The dmd JSON output has to list all the unittests, with their optional name
(because the IDE can use this information to do many important things).

---------------------

5) Optional ddoc text for unittests (to allow IDEs to show the programmer the
purpose of a specific test that has failed).

Unittest ddocs don't show up inside the HTML generated with -D, because the
user of the module doesn't need to know the purpose of its unittests. So maybe
they appear only inside the JSON output.

---------------------

6a) A way to enable only the unittests of a module. Because in a project there
are several modules, and when I work on a module I often want to run only its
unittests. In general it's quite useful to be able to disable unittests.

6b) A way to define groups of unittests (that cross modules too): because you
sometimes want to test a feature spread across more than one module, or you
want to tell apart quick and slow unittests, to run fast unittests often and
slow ones only once in a while.


One way to support unittest groups is to allow for tag names after the unittest
name that define overlapping groups:

unittest(foo_name, group1_tag_name, group2_tag_name, ...) {...}


One way to extend the unittest compiler switch syntax to use those tags: allow
multiple -unittest switches on a command line, allow a = that can be used to
specify a module name whose unittests to run, or a =tag: to specify one tag
name:

-unittest=module_foo -unittest=tag:group1_tag_name

This is just a first idea for unittest groups management, better designs are
possible.
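For comparison, this is roughly what has to be written today to approximate
6a/6b with no new syntax, using version() blocks and -version= switches (the
identifiers foobar_tests and slow_tests are invented for this sketch); the
switch proposed above would replace this per-test boilerplate:

module foobar;

int sqr(int x) { return x * x; }

// compiled only with: dmd -unittest -version=foobar_tests foobar.d
version (foobar_tests) unittest {
    assert(sqr(3) == 9);
}

// a "slow" group, enabled separately with -version=slow_tests
version (slow_tests) unittest {
    foreach (i; 0 .. 10_000)
        assert(sqr(i) >= 0);
}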

====================================


Three more half-baked things; if you know how to improve/design these ideas you
can tell me:

A) A serious unittest system needs a way to allow sharing of setup and shutdown
code between tests.

From Python's unittest module: a test fixture is the preparation needed to
perform one or more tests, and any associated cleanup actions. This may
involve, for example, creating temporary or proxy databases or directories,
starting a server process, etc.

Fixtures can be supported at the package, module, class and function level.
Setup always runs before any test (or groups of tests).

setUp(): Method called to prepare the test fixture.
tearDown(): Method called immediately after the test method has been called and
the result recorded.

---------------------

B) There are situations where you don't want to count how many unittests have
failed; you want to fix a bug, with a debugger. For this, a command line switch
to turn unittest asserts back into normal asserts can be useful.

---------------------

C) I'd like a way to associate a specific unittest with a function, class,
module, or something else that is being tested, because this can be useful in
several different ways. But this seems not easy to design. Keep in mind that
tests can be in a different module. I don't know how to design this, or whether
it can be designed well.

---------------------

Bye,
bearophile
May 04 2010
next sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2010-05-04 21:24:50 -0400, bearophile <bearophileHUGS lycos.com> said:

 There are ways to partially implement this for run-time asserts, but badly:
 
 void throws(Exceptions, TCallable, string filename=__FILE__, int line=__LINE__)
            (lazy TCallable callable) {
     try
         callable();
     catch (Exception e) {
         if (cast(Exceptions)e !is null)
             return;
     }
 
     assert(0, text(filename, "(", line, "): doesn't throw any of the 
 specified exceptions."));
 }
Your 'throws' template seems good. Should create std.testing and include it
there.

Also, perhaps it'd work to use a double-template for this:

	template throws(Exceptions...) {
		void throws(TCallable, string filename=__FILE__, int line=__LINE__)
		           (lazy TCallable callable)
		{
			...
		}
	}

I know this trick was working in D1, but I'm not sure if it wasn't broken in
D2.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
May 04 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
(it reports only the first failed assert for each unit test).<
I was wrong (or the behaviour on this is mutable).

Michel Fortin:
 Your 'throws' template seems good. Should create std.testing and 
 include it there.
 Also, perhaps it'd work to use a double-template for this:
 	template throws(Exceptions...) {
 		void throws(TCallable, string filename=__FILE__, int line=__LINE__)
Yes, thank you, with this suggestion of yours it works in D2 too:

class FooException : Exception { this(string msg) { super(msg); } }
class OtherException : Exception { this(string msg) { super(msg); } }

int sqr(int x) {
    if (x < 0)
        throw new FooException("");
    return x * 2;
}

template throws(Exceptions...) {
    bool throws(TCallable)(lazy TCallable callable) {
        try
            callable();
        catch (Exception e) {
            /*static*/ foreach (Exc; Exceptions)
                if (cast(Exc)e !is null)
                    return true;
            return false;
        }
        return !Exceptions.length;
    }
}

unittest {
    assert(throws!(OtherException)(sqr(-5)));
    assert(throws!(OtherException)( sqr(-5) ));
}

void main() {}

But I have to wrap it into another assert like that if I want it to behave as
an assert inside the unittests. With a bit more compiler support it can be
possible to write that in a library. While the static throws can require more
compiler support.

Bye,
bearophile
May 05 2010
parent Michel Fortin <michel.fortin michelf.com> writes:
On 2010-05-05 05:10:05 -0400, bearophile <bearophileHUGS lycos.com> said:

 unittest {
     assert(throws!(OtherException)(sqr(-5)));
     assert(throws!(OtherException)( sqr(-5) ));
 }
 While the static throws can require more compiler support.
Can't you do:

	static assert(throws!OtherException(sqrt(-5)));

?

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
May 05 2010
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Michel Fortin:

 Can't you do:
 	static assert(throws!OtherException(sqrt(-5)));
 ?
Some of those examples of mine were wrong because the main purpose of a
'static throws' is to catch static asserts, not to catch exceptions at compile
time, sorry:

static throws(foo(10), StaticException1);
static throws(foo(10), StaticException1, StaticException2, ...);

static throws!(StaticException1)(foo(10));
static throws!(StaticException1, StaticException2)(foo(10));

(And currently try-catch statements are not supported in CTFE, see
http://d.puremagic.com/issues/show_bug.cgi?id=4067 ).

Bye,
bearophile
May 05 2010
parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2010-05-05 07:46:46 -0400, bearophile <bearophileHUGS lycos.com> said:

 Michel Fortin:
 
 Can't you do:
 	static assert(throws!OtherException(sqrt(-5)));
 ?
 Some of those examples of mine were wrong because the main purpose of a
 'static throws' is to catch static asserts, not to catch exceptions at
 compile time, sorry:
Am I right that what you want is this?

	static assert(!__traits(compiles, foo(10)));

I agree that the __traits syntax leaves a lot of room for improvement.
 (And currently try-catch statements are not supported in CTFE, see 
 http://d.puremagic.com/issues/show_bug.cgi?id=4067 ).
Indeed.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
May 05 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
Michel Fortin:

 Am I right that what you want is this?
 	static assert(!__traits(compiles, foo(10)));
I want a nice syntax that statically asserts if foo(10) doesn't statically
assert :-) What you have written is close (in D1 I have used the is() syntax
for a similar purpose), but besides being syntactically ugly, it doesn't
specifically detect static asserts inside foo.

Bye,
bearophile
May 05 2010
parent reply Don <nospam nospam.com> writes:
bearophile wrote:
 Michel Fortin:
 
 Am I right that what you want is this?
 	static assert(!__traits(compiles, foo(10)));
 I want a nice syntax that statically asserts if foo(10) doesn't statically
 asserts :-) What you have written is close (in D1 I have used the is() syntax
 for a similar purpose), but beside being syntactically ugly, it doesn't
 specifically detect static asserts inside foo.
It seems pretty useless to me. Wanting to make a distinction between "this
will not compile" and "this will not compile _because it hits a static
assert_" is an *extremely* niche feature. Because the only times I can imagine
that you'd care would be because you wanted to ensure it gave a "nice" error
message. And to do that, you'd have to actually check the test of the static
assert. And the idea of creating a whole hierarchy of compile-time exceptions
for this ...
May 05 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
Don:

It seems pretty useless to me. Wanting to make a distinction between "this will
not compile" and "this will not compile _because it hits a static assert_" is
an *extremely* niche feature. Because the only times I can imagine that you'd
care would be because you wanted to ensure it gave a "nice" error message. And
to do that, you'd have to actually check the test of the static assert.<
Let's assume you are right, that making such a distinction is generally
useless. In my unittests I write one or two tests every time something
contains a static assert (plus other tests for other compile time errors). So
even if you are right, I have to use a syntax like this often enough inside
unittests:

static assert(!__traits(compiles, foo(10)));

So for this I'd like a simpler syntax.
And the idea of creating a whole heirarchy of compile-time exceptions for this
...<
No hierarchy required, sorry, the examples I have written for 'static throws'
were all wrong.

Thank you for your answers,
bye,
bearophile
May 05 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 I have to use a syntax like this often enough inside
 unittests:
 
 static assert(!__traits(compiles, foo(10)));
But why? Just use:

    foo(10);
May 05 2010
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 But why? Just use: 
     foo(10);
I think you are missing something important here.

Static asserts (or other implicit errors) inside a template, etc., test that
some input types are correct, that some template input values are in the
correct range, etc.

In this thread we are talking about unittests. The purpose of a unit test is
to test that something that can be called Foo works as specified. Working as
specified means such Foo must return the correct outputs when the inputs are
in its intended range of possible inputs (otherwise the unittest has to fail),
and it must produce a compile time assert, run time assert, or throw an
exception if the input values are outside the allowed ones (otherwise the
unittest has to fail again).

So the purpose of the feature I am talking about here is, for the group of
those unittests, to make sure something asserts at compile time (or otherwise
doesn't compile) when the compile-time inputs are wrong. So I need something
that inside the unittest asserts at compile time if Foo does not assert at
compile time (or otherwise refuses to work).

Bye,
bearophile
May 05 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Walter Bright:
 But why? Just use:
     foo(10);
 I think you are missing something important here. Static asserts (or other
 implicit errors) inside a template, etc, test that some input types are
 correct, some template input values are in the correct range, etc. In this
 thread we are talking about unittests. The purpose of a unit test is to test
 that something that can be called Foo works as specified. Working as
 specified means such Foo must return the correct outputs when the inputs are
 in its intended range of possible inputs (otherwise the unittest has to
 fail), and it must produce a compile time assert, run time assert, or throw
 an exception if the input values are outside the allowed ones (otherwise the
 unittest has to fail again). So the purpose of the feature I am talking here
 is for the group of those unittests, to make sure something asserts at
 compile time (or otherwise doesn't compile) when the compile-time inputs are
 wrong. So I need something that inside the unittest asserts at compile time
 if Foo does not asserts at compile-time (or otherwise refuses to work).
I'm sorry, I simply don't understand this. If you want to test that something
compiles in a unit test, just write the code. The compiler will let you know
if it doesn't compile.
May 05 2010
parent reply Lutger <lutger.blijdestijn gmail.com> writes:
Walter Bright wrote:

 bearophile wrote:
 Walter Bright:
 But why? Just use: foo(10);
 I think you are missing something important here. Static asserts (or other
 implicit errors) inside a template, etc, test that some input types are
 correct, some template input values are in the correct range, etc. In this
 thread we are talking about unittests. The purpose of a unit test is to test
 that something that can be called Foo works as specified. Working as
 specified means such Foo must return the correct outputs when the inputs are
 in its intended range of possible inputs (otherwise the unittest has to
 fail), and it must produce a compile time assert, run time assert, or throw
 an exception if the input values are outside the allowed ones (otherwise the
 unittest has to fail again). So the purpose of the feature I am talking here
 is for the group of those unittests, to make sure something asserts at
 compile time (or otherwise doesn't compile) when the compile-time inputs are
 wrong. So I need something that inside the unittest asserts at compile time
 if Foo does not asserts at compile-time (or otherwise refuses to work).
 I'm sorry, I simply don't understand this. If you want to test that something
 compiles in a unit test, just write the code. The compiler will let you know
 if it doesn't compile.
He wants the opposite: to test that something detects an error correctly, making sure his template doesn't silently compile wrong code.
May 05 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Lutger wrote:
 He wants the opposite: to test that something detects an error correctly, 
 making sure his template doesn't silently compile wrong code. 
I don't see that in the example given.
May 05 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright>I don't see that in the example given.<

I can try with one real example from my dlibs1. This template is the third one
I use with foreach to create the fake static foreach; this one accepts a step
(stride) value too.

But such a step can't be zero, so I add a static assert inside it to make sure
you don't try to instantiate it with a step = 0.


template Range(int start, int stop, int step) {
    static assert(step != 0, "Range: step must be != 0");

    static if (step > 0) {
        static if (stop <= start)
            alias Tuple!() Range;
        else
            alias Tuple!(Range!(start, stop-step, step), stop-step) Range;
    } else {
        static if (stop >= start)
            alias Tuple!() Range;
        else
            alias Tuple!(Range!(start, stop-step, step), stop-step) Range;
    }
}


In D2 I can use a template constraint instead of a static assert here (this is
sometimes bad, because you lose the error text message given by the static
assert, which can be useful in complex situations):

template Range(int start, int stop, int step) if (step != 0) {


Now that Range needs unittests: things like this, which test that Range gives
the right output:

unittest { // Tests of Range!()
    int[] a;
    foreach (n; Range!(0, 10, 2))
        a ~= n;
    assert(a == [0, 2, 4, 6, 8]);
}


But the "specs" of the Range template say that Range must not work if step is
zero (this means that the unittest has to fail if Range compiles when the given
step is zero). So I have to test this too. I can do that with one more unittest
(D1 code):

    static assert(!is(typeof( Range!(15, 3, 0) )));

In D2 I can use the __traits:

    static assert(!__traits(compiles, Range!(15, 3, 0) ));

Hope this helps.

Bye,
bearophile
May 05 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:

 
 But the "specs" of the Range template say that Range must not work if step is
zero (this means that the unittest has to fail if Range compiles when the given
step is zero). So I have to test this too. I can do that with one more unittest
(D1 code):
 
     static assert(!is(typeof( Range!(15, 3, 0) )));
 
 In D2 I can use the __traits:
 
     static assert(!__traits(compiles, Range!(15, 3, 0) ));
Oh, I see. You want to ensure it does not compile, rather than it compiles. I got the ! flipped around.
May 05 2010
parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2010-05-05 20:25:45 -0400, Walter Bright <newshound1 digitalmars.com> said:

 bearophile wrote:
 
 
 But the "specs" of the Range template say that Range must not work if 
 step is zero (this means that the unittest has to fail if Range 
 compiles when the given step is zero). So I have to test this too. I 
 can do that with one more unittest (D1 code):
 
     static assert(!is(typeof( Range!(15, 3, 0) )));
 
 In D2 I can use the __traits:
 
     static assert(!__traits(compiles, Range!(15, 3, 0) ));
 Oh, I see. You want to ensure it does not compile, rather than it compiles.
 I got the ! flipped around.
If even Walter has difficulty figuring out the ! around __traits, I'll take
that as the ultimate proof that the current syntax has too much cruft and is
in need of a cleanup.

Could we at least replace this:

	__traits(compiles, ...)

with this:

	__traits.compiles(...)

It looks more readable to me at least, and it can be applied to other traits.

The next step would be to find a way to remove that ugly __traits keyword,
ideally without stealing a useful identifier. Perhaps it should be made
possible to do this:

	module std.traits;
	alias __traits.compiles compiles;

Now you just import std.traits and never write __traits again! :-)

	static assert(!compiles( Range!(15, 3, 0) ));

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
May 05 2010
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Michel Fortin wrote:
 If even Walter has difficulty figuring out the ! around __traits,
Missing a ! is always a problem. People even do not see the word "not".
May 05 2010
next sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2010-05-05 21:12:47 -0400, Walter Bright <newshound1 digitalmars.com> said:

 Michel Fortin wrote:
 If even Walter has difficulty figuring out the ! around __traits,
Missing a ! is always a problem. People even do not see the word "not".
It was an exaggeration. That "ultimate proof" wasn't meant to be too serious.

I hope it didn't stop you from reading the rest of the post, because it seemed
like a nice idea to improve the __trait syntax.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
May 05 2010
next sibling parent reply Don <nospam nospam.com> writes:
Michel Fortin wrote:
 On 2010-05-05 21:12:47 -0400, Walter Bright <newshound1 digitalmars.com> 
 said:
 
 Michel Fortin wrote:
 If even Walter has difficulty figuring out the ! around __traits,
Missing a ! is always a problem. People even do not see the word "not".
 It was an exaggeration. That "ultimate proof" wasn't meant to be too serious.
 I hope it didn't stop you from reading the rest of the post, because it
 seemed like a nice idea to improve the __trait syntax.
http://d.puremagic.com/issues/show_bug.cgi?id=3702
May 05 2010
parent "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
On Thu, 06 May 2010 04:14:27 +0200, Don wrote:
 Michel Fortin wrote:
 I hope it didn't stop you from reading the rest of the post, because it
 seemed like a nice idea to improve the __trait syntax.
 
http://d.puremagic.com/issues/show_bug.cgi?id=3702
This is now the fourth highest voted bug in Bugzilla. Just sayin'... ;)

-Lars
May 05 2010
prev sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Michel Fortin, on May 5 at 22:12 you wrote:
 On 2010-05-05 21:12:47 -0400, Walter Bright <newshound1 digitalmars.com> said:
 
Michel Fortin wrote:
If even Walter has difficulty figuring out the ! around __traits,
Missing a ! is always a problem. People even do not see the word "not".
 It was an exaggeration. That "ultimate proof" wasn't meant to be too serious.
 I hope it didn't stop you from reading the rest of the post, because it
 seemed like a nice idea to improve the __trait syntax.
Please, remove the leading __ from features that are standard (as opposed to
implementation extensions). It is really ugly and makes you feel it won't work
in another compiler!

This goes too for __gshared (and I don't remember if there is anything else).

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
A ship with animals, my friends. Believing it will be saved from the flood.
While I suffer the breaking of my heart.
May 06 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
Leandro Lucarella:
 Please, remove the leading __ from features that are standard (as oposed
 from implementation extensions). 
I appreciate Walter's decision to use those __names: it allows us to use and
try a feature now, it allows creating its implementation progressively, and it
gives time to design its user interface well after it has been used for some
time.

Bye,
bearophile
May 06 2010
parent Leandro Lucarella <llucax gmail.com> writes:
bearophile, on May 6 at 12:43 you wrote:
 Leandro Lucarella:
 Please, remove the leading __ from features that are standard (as oposed
 from implementation extensions). 
 I appreciate Walter's decision to use those __names: it allows us to use and
 try a feature now, it allows to create its implementation progressively, and
 gives time to design its user interface well after it's being used for some
 time.
Yes, but those are now all established features. It's OK to throw them in the
wild for the first time, but when they have proved themselves successful and
are here to stay, and the language is close to being finished, I think it
makes no sense to leave the leading __, when it has a long history of being
used for non-standard extensions (not to mention the ugliness it brings to the
code).

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
The discman drives the controls crazy, it takes you anywhere. Fasten your
seatbelts soon, we're going to crash. Evidently you didn't listen to the
speech the flight attendant gave before takeoff.
May 06 2010
prev sibling parent Ary Borenszweig <ary esperanto.org.ar> writes:
Walter Bright wrote:
 Michel Fortin wrote:
 If even Walter has difficulty figuring out the ! around __traits,
Missing a ! is always a problem. People even do not see the word "not".
I think the problem is confusing this:

static assert(!__traits(compiles, Range!(15, 3, 0) ));

with this:

static assert!(__traits(compiles, Range!(15, 3, 0) ));

Probably a little obvious in this case, but for this:

foo!(bar, baz)
foo(!bar, baz)

it gets trickier... hmmm... very bug prone :-P
May 07 2010
prev sibling parent bearophile <bearophileHUGS lycos.com> writes:
Michel Fortin:
 [...] I'll take that as the ultimate proof that [...]
It's not an ultimate proof.

Bye,
bearophile
May 05 2010
prev sibling next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Wed, May 05, 2010 at 11:04:13AM -0700, Walter Bright wrote:
 But why? Just use:
 
    foo(10);
I used the static assert not compiles thing for the octal template -

unittest {
    static assert(!__traits(compiles, octal!"not a number"));

    // and checking that it doesn't convert implicitly
    long a;
    static assert(!__traits(compiles, a = octal!"7777777777777"));
}

That kind of thing - makes sure it gave the right types for the right input.

So I see the use, but I don't think any special syntax is required. This works
well enough.

-- 
Adam D. Ruppe
http://arsdnet.net
May 05 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
Adam D. Ruppe:
 	static assert(!__traits(compiles, octal!"not a number"));
 [...]
 So I see the use, but I don't think any special syntax is required. This
 works well enough.
Improving that syntax is not necessary, but in my dlibs1 I have to use
something like that (using is()) for about 70 KB of templates; it gets tiring
and it's easy to make mistakes :-) So even if it's not necessary, it can be
handy for me.

My post about unit test features contains other things worth discussing. This
was just the first point :-)

Bye,
bearophile
May 05 2010
parent Michel Fortin <michel.fortin michelf.com> writes:
On 2010-05-05 14:24:37 -0400, bearophile <bearophileHUGS lycos.com> said:

 My post about unit test features contains other things worth discussing 
 about. This was just the first point :-)
Indeed. I agree it'd be handy to have named unit tests.

I see two reasons to use named unit tests:
1. to print a list of all the tests as they run
2. to print a list of the tests that fail

Currently the output of a unittest is to print the first assertion that fails.
I think that's a good default, but if you add a name to your unit test it can
give you more context when it fails. Here's a way to do it with the current
syntax:

	unittest {
		scope (failure) writeln("Calculate pi using method X: FAIL");
		auto result = methodX();
		assert(result == 3.1416);
	}

If the test fails, you'll get this output:

	Calculate pi using method X: FAIL
	file.d(5): assertion failure pi == 3.1416

which is a noticeable improvement because you don't have to go and look at the
file to know what this test is about. If the test passes, it'll output
nothing.

Whether we want to output a line for every test, I'm not sure. D encourages
small unit tests scattered all around the place, and I'd be worried that being
too verbose when the tests are run would discourage people from running the
tests in the first place. On the other hand, it's useful, especially when a
test hangs, takes too long, or crashes the program, to be able to see the list
of all the tests as they run.

So what I would suggest is two things. A way to attach a name to a unit test,
like this:

	unittest "Calculate pi using method x" {
		...
	}

This is better than scope (failure) because the test runner is now in charge
of deciding what to do with each test. I'd suggest that the runtime print the
name of a test when it fails:

	Calculate pi using method X: FAIL
	file.d(5): assertion failure pi == 3.1416

If the environment variable D_VERBOSE_UNITTEST is set when the program is run,
the runtime could print the name of each test as it executes the test,
followed by "PASS" upon successful completion or "FAIL" on failure:

	Calculate e using method X: PASS
	Calculate pi using method X: FAIL
	file.d(5): assertion failure pi == 3.1416

The environment variable makes sure that no one is bothered by a long list of
tests unless they explicitly ask for it, for when you want to see which tests
are run and get a general feel of the time they take.

Any other use for named unit tests?

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
May 05 2010
prev sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2010-05-05 14:04:13 -0400, Walter Bright <newshound1 digitalmars.com> said:

 bearophile wrote:
 I have to use a syntax like this often enough inside
 unittests:
 
 static assert(!__traits(compiles, foo(10)));
But why? Just use: foo(10);
To put it more simply than bearophile:

	foo(10);
	// gives you an error when foo(10) does *not* compile

	static assert(!__traits(compiles, foo(10)));
	// gives you an error when foo(10) compiles with no error
	// (notice the negation operator "!")

It is sometimes a good idea to assert that a template cannot be instantiated
with bogus arguments.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
May 05 2010
parent dennis luehring <dl.soluz gmx.net> writes:
 It is sometime a good idea to assert that a template cannot be
 instantiated with bogus arguments.
or to be more generic - do not only test your units with "working" scenarios,
the "non-working" ones are also part of the test game :-)
May 05 2010
prev sibling next sibling parent reply Don <nospam nospam.com> writes:
bearophile wrote:
 dmd D 2.045 improves the built-in unit tests resuming their run when they fail
(it reports only the first failed assert for each unit test).
 
 There are many features that today a professional unittest system is expected
to offer, I can write a long list. But in the past I have explained that it's a
wrong idea to try to implement all those things in dmd.
 
 So a good solution that has all the advantages is:
 - To add dmd the "core" features that are both important and hard to implement
nicely in an external library or IDE (or make D more flexible so writing such
libs is possible, but this can be not easy).
 - To add dmd the compile-time reflection, run-time reflection or hooks that
external unittest libs/IDEs can use to extend the built-in unit testing
functionality.
 
 It's not easy to find such core features (that can be used by an IDE, but are
usable from the normal command line too), this is my first try, and I can be
wrong. Feel free to add items you think are necessary, or to remove items you
know can be implemented nicely in an external library. Later I can write an
enhancement request.
I think the majority of the items in your list can already be done fairly well
(or easily enough by a library), except for giving names to unit tests.

One thing which you've not mentioned is unittests for interfaces (and derived
classes). I would like to be able to check that *all* implementations of an
interface satisfy a sequence of tests.
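A minimal sketch of the kind of check meant here, written with today's
features (the Counter names are invented; the list of implementations still
has to be maintained by hand, which is exactly what better reflection or
language support would remove):

interface Counter { void bump(); int total(); }

class ArrayCounter : Counter {
    int n;
    void bump() { ++n; }
    int total() { return n; }
}

class WrapCounter : Counter {
    int n;
    void bump() { n = n + 1; }
    int total() { return n; }
}

// the shared "sequence of tests" every implementation must pass
void checkCounterContract(Counter c) {
    c.bump();
    c.bump();
    assert(c.total() == 2);
}

unittest {
    Counter[] impls = [new ArrayCounter, new WrapCounter];
    foreach (c; impls)
        checkCounterContract(c);
}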
May 05 2010
next sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
Don wrote:

 bearophile wrote:
 dmd D 2.045 improves the built-in unit tests resuming their run when they
 fail (it reports only the first failed assert for each unit test).
 
 There are many features that today a professional unittest system is
 expected to offer, I can write a long list. But in the past I have
 explained that it's a wrong idea to try to implement all those things in
 dmd.
 
 So a good solution that has all the advantages is:
 - To add dmd the "core" features that are both important and hard to
 implement nicely in an external library or IDE (or make D more flexible so
 writing such libs is possible, but this can be not easy). - To add dmd the
 compile-time reflection, run-time reflection or hooks that external
 unittest libs/IDEs can use to extend the built-in unit testing
 functionality.
 
 It's not easy to find such core features (that can be used by an IDE, but
 are usable from the normal command line too), this is my first try, and I
 can be wrong. Feel free to add items you think are necessary, or to remove
 items you know can be implemented nicely in an external library. Later I
 can write an enhancement request.
 I think the majority of the items in your list can already be done fairly
 well (or easily enough by a library), except for giving names to unit tests.
 One thing which you've not mentioned is in unittests for interfaces (and
 derived classes). I would like to be able to check that *all* implementations
 of an interface satisfy a sequence of tests.
Do you mean something like parameters for unittests?
May 05 2010
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Don:

I think the majority of the items in your list can already be done fairly well
(or easily enough by a library), except for giving names to unit tests.<
Let's see the points:

1) The following syntax is compatible with D2 to implement the two parts of
point 1:

assert(throws!(Ex1, Ex2)(foo(10)));
static assert(!__traits(compiles, foo(10)));

This works well enough, but the second syntax is both common enough in my code
and ugly enough (ugly also means bug-prone) that some syntax improvement can
be good. Future syntax like this can improve the situation a bit:

static assert(!meta.compiles(foo(10)));

3) Currently unittest errors report only the module name and line number. If
unittests get a name, but such a name isn't shown in the unittest errors, the
IDE has to parse the source code to know the name of the failed unittest (if
the IDE can't know this name, then there is little point in giving names to
unittests in the first place). This is bad, so I think this tiny feature has
to be built-in.

4) I think that currently the JSON output ignores unittests
(http://dsource.org/projects/dmd/changeset/453 ). For the IDE it's useful to
know what unittests are present in a module, for example to compute the
percentage of failed unittests, etc. If the JSON doesn't list them, then the
IDE has to parse the code to find them by itself. This is doable, but the JSON
was invented to avoid this. So I think this little feature has to be built-in.

5) This is not so important, the compiler can just ignore the unittest ddocs.
The IDE can find them (knowing the starting line of the unittests) and use
them as necessary to answer the user questions. Putting such unittest ddocs in
the JSON can simplify the IDE a little, but this is not a vital thing. I think
this is simple enough to implement that it's better for this to be built-in.

6) All serious unit test systems have several ways to disable unittests, this
is a very common need. And a way to group them is often a need. The D language
has version(){} statements, and debug(){} too; they can be used to enable and
disable unittests with no need of extra syntax support. With enough code to
juggle defined and undefined symbols you can even support groups of unittests.
But this is true for debug() too: you can emulate the purpose of debug() given
version(), or version() given debug(). Some syntax support can turn something
ad hoc and hairy into something simpler and more elegant. So some support for
disabling unittests and grouping them can be useful.

The three half-baked things need more thinking; I invite people to give
opinions and suggestions.

B) I don't know if this is necessary.

C) This is nice, but I don't know if this is worth the complexity to implement
it. Opinions welcome.
One thing which you've not mentioned is in unittests for interfaces (and
derived classes). I would like to be able to check that *all* implementations
of an interface satisfy a sequence of tests.<
A "sequence of tests" can mean a group of tests, see point 6. So what you ask for can be seen as related to the point C, to associate a group of tests to a interface (and all its implementation tree). A possible way to implement this is with an attribute: dotest(group1) interface Foo {...} // association interface <=> unittest group dotest(test1) int bar(int x) {...} // association function <=> unittest I am not sure if this is a good way to implement this idea. Bye, bearophile
May 05 2010
parent reply Don <nospam nospam.com> writes:
bearophile wrote:
 Don:
 One thing which you've not mentioned is in unittests for interfaces (and
derived classes). I would like to be able to check that *all* implementations
of an interface satisfy a sequence of tests.<
A "sequence of tests" can mean a group of tests, see point 6. So what you ask for can be seen as related to the point C, to associate a group of tests to a interface (and all its implementation tree). A possible way to implement this is with an attribute:
No, it's not related to point C in any way. The "sequence of tests" was not
important. I should have said "one or more tests".

Also an attribute is a terrible way to implement almost anything -- you need a
really good reason to add a new attribute.

My point could be solved with some level of run-time reflection. Or perhaps by
allowing unittest{} as a (static) interface member.
May 05 2010
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Don:

I didn't mean to put my words in your mouth, sorry. I'll try to be more careful.

Also an attribute is a terrible way to implement almost anything -- you need a
really good reason to add a new attribute.<
You have noticed that some of the things I have proposed in the last months use attributes. I don't know why attributes are so bad, I like the idea of Bye, bearophile
May 06 2010
parent Don <nospam nospam.com> writes:
bearophile wrote:
 Don:
 
 I didn't mean to put my words in your mouth, sorry. I'll try to be more
careful.
 
 Also an attribute is a terrible way to implement almost anything -- you need a
really good reason to add a new attribute.<
You have noticed that some of the things I have proposed in the last months use attributes. I don't know why attributes are so bad, I like the idea of Bye, bearophile
The thing is, they're not really any different to keywords in how difficult
they make the language to learn, in how much complexity they add to the
compiler, and in how much complexity they add to the spec.

We could even add an @ to the start of every existing keyword, and then argue
that the language has no keywords at all! So we still need to be reluctant to
add any new ones.
May 06 2010
prev sibling parent BCS <none anon.com> writes:
Hello Don,

 My point could be solved with some level of run-time reflection. Or
 perhaps by allowing unittest{} as a (static) interface member.
Might something ad-hoc like this work?

interface Foo { ... }

class TestFoo(T) if(is(T : Foo)) {
    unittest { }
}

-- 
... <IXOYE><
May 06 2010
prev sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
Don <nospam nospam.com> wrote:
 bearophile wrote:
 dmd D 2.045 improves the built-in unit tests resuming their run when
 they fail (it reports only the first failed assert for each unit
 test).
 There are many features that today a professional unittest system
 is expected to offer, I can write a long list. But in the past I
 have explained that it's a wrong idea to try to implement all
 those things in dmd.
 So a good solution that has all the advantages is:
 - To add dmd the "core" features that are both important and hard to
 implement nicely in an external library or IDE (or make D more flexible so
 writing such libs is possible, but this can be not easy).
 - To add dmd the compile-time reflection, run-time reflection or hooks that
 external unittest libs/IDEs can use to extend the built-in unit testing
 functionality.
 It's not easy to find such core features (that can be used by an
 IDE, but are usable from the normal command line too), this is my
 first try, and I can be wrong. Feel free to add items you think
 are necessary, or to remove items you know can be implemented
 nicely in an external library. Later I can write an enhancement
 request.
 I think the majority of the items in your list can already be done fairly
 well (or easily enough by a library), except for giving names to unit tests.
A while back I proposed:

unittest("name") {}

As an optional way to name a unittest. It's the last function-like thing
without parens anyway.
May 08 2010
next sibling parent Jason House <jason.james.house gmail.com> writes:
Sean Kelly Wrote:

 Don <nospam nospam.com> wrote:
 bearophile wrote:
 dmd D 2.045 improves the built-in unit tests resuming their run when
 they fail (it reports only the first failed assert for each unit
 test).
 There are many features that today a professional unittest system
 is expected to offer, I can write a long list. But in the past I
 have explained that it's a wrong idea to try to implement all
 those things in dmd.
 So a good solution that has all the advantages is:
 - To add dmd the "core" features that are both important and hard to
 implement nicely in an external library or IDE (or make D more flexible so
 writing such libs is possible, but this can be not easy).
 - To add dmd the compile-time reflection, run-time reflection or hooks that
 external unittest libs/IDEs can use to extend the built-in unit testing
 functionality.
 It's not easy to find such core features (that can be used by an
 IDE, but are usable from the normal command line too), this is my
 first try, and I can be wrong. Feel free to add items you think
 are necessary, or to remove items you know can be implemented
 nicely in an external library. Later I can write an enhancement
 request.
 I think the majority of the items in your list can already be done fairly
 well (or easily enough by a library), except for giving names to unit tests.
 A while back I proposed:

 unittest("name") {}

 As an optional way to name a unittest. It's the last function-like thing
 without parens anyway.
I think it's been repeatedly proposed. One contributor to my open source
project created a function test(string name, void delegate() what)... It would
allow the program to continue after a failed unit test, but would print out
the names of the failing tests.
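A rough sketch of such a helper, inferred from the description above rather
than taken from that project's code (catching AssertError like this is
pragmatic but not strictly recommended, since Errors are meant to be fatal):

import core.exception : AssertError;
import std.stdio : writeln;

void test(string name, void delegate() what) {
    try {
        what();
        writeln("PASS: ", name);
    } catch (AssertError e) {
        // report the failure and keep going instead of halting the run
        writeln("FAIL: ", name, " (", e.msg, ")");
    }
}

unittest {
    test("square of a negative number is positive",
         { assert((-3) * (-3) > 0); });
    int two = 1 + 1;
    test("a deliberately failing check",
         { assert(two == 3, "expected failure"); });
}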
May 08 2010
prev sibling parent bearophile <bearophileHUGS lycos.com> writes:
Sean Kelly:
 A while back I proposed:
 unittest("name") {}
Now I think this syntax is a little better:

unittest name {}

Bye,
bearophile
May 08 2010
prev sibling next sibling parent reply Tomek Sowiński <just ask.me> writes:
On 05-05-2010 at 03:24:50 bearophile <bearophileHUGS lycos.com> wrote:

[snip]
 2) Names for unittests. Giving names to things in the universe is a first
 essential step if you want to try to understand some part of it. The compiler
 makes sure in each module two unittest tests don't share the same name. An
 example:

 int sqr(int x) { return 2 * x; }

 /// asserts that it doesn't return a negative value
 unittest(sqr) {
     assert(sqr(10) >= 0);
     assert(sqr(-10) >= 0);
 }
I think it's achievable through mixin templates:

mixin Unittest!("sqr", {
    assert(sqr(10) >= 0);
    assert(sqr(-10) >= 0);
});

and even doesn't poke your eyes like most mixin code :)
 3) Each unittest error has to say the (optional) name of the unit tests it is
 contained into. For example:

 test4(sqr,6): unittest failure
If the Unittest template** above would be something like:

mixin template Unittest(string name, alias fun) {
    mixin("
        void "~name~"() { fun(); }
        unittest {
            writeln(\"Testing "~name~"...\");
            "~name~"();
        }
    ");
}

then you would see the name of the unit test in the stack trace.
 4) The dmd JSON output has to list all the unitttests, with their optional
 name (because the IDE can use this information to do many important things).
This is interesting. How the IDE may help knowing the unittest name?
 5) Optional ddoc text for unittests (to allow IDEs to answer the programmer
 the purpose of a specific test that has failed).
When a test has failed I go to the testing code to see what's up. So I'll see
any non-ddoc comments someone left there.
 6a) A way to enable only unittests of a module. Because in a project there
 are several modules, and when I work on a module I often want to run only its
 unittests. In general it's quite useful to be able to disable unittests.
... or in a package, or one folder above the package level, or ... ;)

Versioning and conditional compilation are the right tools for the job.
 6b) A way to define groups of unittests (that cross modules too): because you
 sometimes want to unittest for a feature spread in more than one module, or
 you want to tell apart quick and slow unittests, to run fast unittests often
 and slow ones only once in a while.

 One way to support unittest groups is to allow for tag names after the
 unittest name that define overlapping groups:

 unittest(foo_name, group1_tag_name, group2_tag_name, ...) {...}
It's more of a version statement issue: can't have many versions or'ed
together in a single condition. Then again, this is possible now:

template isVersion(string name) {
    enum bool isVersion = !is(typeof({
        mixin("version("~name~") { static assert(false); }");
    }));
}

static if (isVersion!"group1_tag_name" || isVersion!"group2_tag_name" || ... )
    mixin Unittest!("foo_name", {...});
 A) Serious unittest system needs a way to allow sharing of setup and shutdown
 code for tests.
How about:

unittest { setUp(); scope(exit) tearDown(); ... };
 Fixtures can be supported at the package, module, class and function level.
 Setup always runs before any test (or groups of tests).
Unittests are executed in declaration order, so:

unittest { setUp(); }
unittest { ... }
unittest { ... }
unittest { tearDown(); }

In a nutshell, I agree with you that unittests would use more features but we
should explore a (standard?) library solution first, before putting more metal
into the language/compiler.

Tomek

** Actually, it doesn't compile now:

mixin Unittest!("my_test", { assert(2 + 2 == 4); });

Error: delegate test.__dgliteral2 is a nested function and cannot be accessed
from my_test

I'd appreciate if someone elaborated on why exactly it cannot be accessed from
my_test. To me, if a delegate literal is defined in the global scope, then,
with respect to nesting, it's not much different than a normal function, no?
May 05 2010
parent bearophile <bearophileHUGS lycos.com> writes:
Tomek S.:

and even doesn't poke your eyes like most mixin code:)<
Thanks, but no thanks. String mixin hacks don't help, they make the situation worse.
then you would see the name of the unit test in the stack trace.<
I'd love to see the stack trace with stock dmd :-) I don't know/remember why standard dmd has never adopted the old "phobos hack" to show the stack trace.
This is interesting. How the IDE may help knowing the unittest name?<
What's the point in giving a name to a unittest? If the unittests have a name,
and unittest failures show names too, but the IDE doesn't know all the names
of the unittests of a module, then the IDE can't use those names well:

- If the IDE knows the names of the unittests of a module it also knows how
many unittests are present, so it can give the percentage of failed unittests.

- IDEs show lists of unittests, with red and green symbols beside them. If the
IDE doesn't know all the names of the unittests, it can show only the
unittests with red symbols :-)

- If the IDE doesn't know the names of the unittests of a module, it can't
compute other statistics commonly useful for higher level management of tests,
like the test coverage of a project, the percentage of tests written by a
programmer, and so on.
When a test has failed I go to the testing code to see what's up. So I'll see
any non-ddoc comments someone left there.<
Of course that's a possibility. But IDEs usually show a window that lists the
unittests, the failed and passed ones, etc. You can move the mouse cursor on a
unittest and the IDE can show you its ddoc, no need to jump to the code.

As I have said this is not an essential feature, but nothing inside the JSON
files is essential to program in D, it's a possible help for external tools.
... or in a package, or one folder above the package level, or ... ;)<
Right. The three main use cases are probably:
- to disable a unittest (it can be done with @disable applied to a unittest);
- activating the unittests of only one module (the one currently worked on);
- activating only a group of unittests.
Versioning and conditional compilation are the right tools for the job.<
This works, but:

- The code to do it can be a bit messy. (If I have to do something many times
always in the same way, then I prefer clean code that clearly shows my
semantics. This is why D has both version() and debug() in the first place.)

- If you are _not_ using an IDE it can become not handy to manage it. Let's
say I want to run the unittests only of the module foobar. I have to put
before all unittests code like:

version(modulename) unittest(foo) {...}

Then if I change the module name I have to change them all.
 static if (isVersion!"group1_tag_name" || isVersion!"group2_tag_name" ||  
 ... )
 mixin Unittest!("foo_name", {...});
That's a bad solution.
In a nutshell, I agree with you that unittests would use more features but we
should explore a (standard?) library solution first, before putting more metal
into the language/compiler.<
In that list I have not asked for very complex features. One of the main
points of my post was to see if those things can be implemented well in the
current language.

Bye and thank you for your answers,
bearophile
May 05 2010
prev sibling next sibling parent reply Lutger <lutger.blijdestijn gmail.com> writes:
bearophile wrote:

 dmd D 2.045 improves the built-in unit tests resuming their run when they
 fail (it reports only the first failed assert for each unit test).
 
 There are many features that today a professional unittest system is
 expected to offer, I can write a long list. But in the past I have
 explained that it's a wrong idea to try to implement all those things in
 dmd.
 
 So a good solution that has all the advantages is:
 - To add dmd the "core" features that are both important and hard to
 implement nicely in an external library or IDE (or make D more flexible so
 writing such libs is possible, but this can be not easy). - To add dmd the
 compile-time reflection, run-time reflection or hooks that external
 unittest libs/IDEs can use to extend the built-in unit testing
 functionality.
 
 It's not easy to find such core features (that can be used by an IDE, but
 are usable from the normal command line too), this is my first try, and I
 can be wrong. Feel free to add items you think are necessary, or to remove
 items you know can be implemented nicely in an external library. Later I
 can write an enhancement request.
I think that most of the features you mention can be implemented in a library,
but at some cost. For example, tests get more verbose. Or, for example, with
the string mixin syntax of unaryFun and binaryFun you are limited to the
parameters and symbols from a select few phobos modules. There is something to
be said for the simplicity of how D natively handles unittesting, I think.
Perhaps some issues with local template instantiation and/or mixin visibility
can be sorted out to improve this, I'm not sure - it's a bit above my head.

Anyway I think if unittests can get a name as a parameter and dmd lets the
user set the AssertError handler, that will be a sufficient hook to provide
some useful user extension of the unittest system. I'll post some more
specific comments below:
 ---------------------
 
 1) It's very useful to have a way to catch static asserts too, because
 templates and other things can contain static asserts too, that for example
 can be used to test if input types or constants are correct. When I write
 unittests for those templates I want to test that they actually statically
 asserts when I use them in a wrong way.
 
 A possible syntax (this has to act like static assert):
 static throws(foo(10), StaticException1);
 static throws(foo(10), StaticException1, StaticException2, ...);
 
 A version that catches run-time asserts (this has to act like asserts):
 throws(foo(10), Exception1);
 throws(foo(10), Exception1, Exception2, ...);
 
 
 There are ways to partially implement this for run-time asserts, but badly:
 
 void throws(Exceptions, TCallable, string filename=__FILE__, int
 line=__LINE__)
            (lazy TCallable callable) {
     try
         callable();
     catch (Exception e) {
         if (cast(Exceptions)e !is null)
             return;
     }
 
     assert(0, text(filename, "(", line, "): doesn't throw any of the
     specified exceptions."));
 }
I have set up a unittesting system in a hacky way that does something like
this, but instead of asserting it just prints the error and collects the test
result in a global storage. When the program ends this gets written to a json
file. Seems to work well enough, what do you think the above lacks?

I have wrapped it something like this:

expectEx!SomeException(foo(10));

I have also done this for compile time assertions, but it is more limited:

int a;
expectCompileError!(isSorted, a);

or:

expectCompileError!(q{ isSorted(a) }, a);

One cool thing of D2 however is that you can get the exact name and value of
an alias parameter by local instantiation, for example this test:

int[] numbers = [3,2,1,7];
expect!( isSorted, numbers );

prints:

test.d(99) numbers failed: isSorted(alias less = "a < b",Range) if
(isForwardRange!(Range)) ( numbers ) :: numbers == [3 2 1 7]
 2) Names for unittests. Giving names to things in the universe is a first
 essential step if you want to try to understand some part of it. The
 compiler makes sure in each module two unittest tests don't share the same
 name. An example:
 
 int sqr(int x) { return 2 * x; }
 
 /// asserts that it doesn't return a negative value
 unittest(sqr) {
     assert(sqr(10) >= 0);
     assert(sqr(-10) >= 0);
 }
I agree. As mentioned, I have done this by writing to some global state which
records the currently running test; then the 'assertions' write to the same
state. If only one thing could improve in the native system I think this
should be it.
 ---------------------
 
 3) Each unittest error has to say the (optional) name of the unit tests it
 is contained into. For example:
 
 test4(sqr,6): unittest failure
One tiny tip:

test.d(6): unittest sqr failed

This way IDEs and editors can parse it like regular D errors and jump to the
failed test.
 ---------------------
 
 4) The dmd JSON output has to list all the unitttests, with their optional
 name (because the IDE can use this information to do many important
 things).
 
 ---------------------
 
 5) Optional ddoc text for unittests (to allow IDEs to answer the programmer
 the purpose of a  specific test that has failed).
 
 Unittest ddocs don't show up inside the HTML generated with -D because the
 user of the module doesn't need to know the purpose of its unittests. So
 maybe they appear only inside the JSON output.
I think there's a report in bugzilla from Andrei requesting that the unittests themselves can be turned into documentation. Together with preconditions in ddoc, that would seem very useful.
 ---------------------
 
 6a) A way to enable only unittests of a module. Because in a project there
 are several modules, and when I work on a module I often want to run only
 its unittests. In general it's quite useful to be able to disable
 unittests.
Shouldn't this be part of a tool that compiles and runs the tests of each module? rdmd has the --main option; together with -unittest you can easily do this. ...
 
 
 Three more half-baked things; if you know how to improve/design these
 ideas, you can tell me:
 
 A) A serious unittest system needs a way to allow sharing of setup and
 shutdown code for tests.
 
 From Python unittest: a test fixture is the preparation needed to perform
 one or more tests, and any associated cleanup actions. This may involve,
 for example, creating temporary or proxy databases, directories, or
 starting a server process, etc.
 
 Fixtures can be supported at the package, module, class and function level.
 Setup always runs before any test (or groups of tests).
 
 setUp(): Method called to prepare the test fixture.
 tearDown(): Method called immediately after the test method has been called
 and the result recorded.
I think this is standard xUnit. I have this:

class TestSuiteA : Fixture {
    void setup() { /* initialize */ }
    void teardown() { /* cleanup or restore */ }
    void test_foo() { /* test ... */ }
    void test_bar() { /* test ... */ }
}

// running setup, test_foo, test_bar and finally teardown:
unittest {
    runSuite!TestSuiteA();
}

But it seems like a good idea to separate the fixture from the test suite, so they can easily be reused.
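For completeness, runSuite itself can be done with compile-time reflection along these lines (a sketch with made-up names like MySuite, not my actual code):

import std.stdio : writeln;

class Fixture {
    void setup() {}
    void teardown() {}
}

void runSuite(T : Fixture)() {
    auto suite = new T;
    suite.setup();
    scope (exit) suite.teardown();
    // walk the members declared in T and call every test_* method
    foreach (member; __traits(derivedMembers, T)) {
        static if (member.length > 5 && member[0 .. 5] == "test_") {
            writeln("running ", T.stringof, ".", member);
            __traits(getMember, suite, member)();
        }
    }
}

class MySuite : Fixture {
    override void setup() { /* initialize */ }
    override void teardown() { /* cleanup or restore */ }
    void test_foo() { assert(1 + 1 == 2); }
    void test_bar() { assert("abc".length == 3); }
}

unittest {
    runSuite!MySuite();
}

(For per-test setup/teardown, as xUnit usually does it, the setup()/teardown() calls would simply move inside the loop.)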
 ---------------------
 
 B) There are situations when you don't want to count how many unittests
 have failed, but you want to fix a bug with a debugger. For this, a command
 line switch that turns unittest asserts back into normal asserts can be
 useful.
This is possible if you replace the assert statement with your own, or perhaps hook up an assertHandler. At the moment this is a bit weird, perhaps it is a bug?

unittest {
    void inner() {
        assert(false, "bar");
    }
    assert(false, "foo");
    inner();
    assert(false, "baz");
}

This unittest halts with an AssertError inside inner().
 ---------------------
 
 C) I'd like a way to associate a specific unittest with a function, class or
 module, or something else that is being tested, because this can be useful
 in several different ways. But this doesn't seem easy to design. Keep in
 mind that tests can be in a different module. I don't know how to design
 this, or if it can be designed well.
Looks hard to me too. I think I like the idea that unittests should be close to the code they test; otherwise it is a different kind of test. Perhaps we should consider ddoc, contracts and unittests as much a part of the code as the code itself?
 
 Bye,
 bearophile
May 05 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
Lutger:

Anyway I think if unittests can get a name as a parameter and dmd lets the
user set the AssertError handler, that will be a sufficient hook to provide
some useful user extension of the unittest system.<
Ah, I didn't think about an AssertError handler. Among other things, it solves my point B. Digging a little, I have seen that core.exception already has a setAssertHandler, but it's not useful yet: http://d.puremagic.com/issues/show_bug.cgi?id=3208
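Once that hook works as intended, my point B could look roughly like this (only a sketch: the handler signature is my assumption from the druntime sources, the required attributes may differ between versions, and whether unittest asserts actually reach the handler is exactly what the bug report is about):

import core.exception : setAssertHandler;
import core.stdc.stdio : printf;

// assumed handler signature: (string file, size_t line, string msg)
void reportAssert(string file, size_t line, string msg)
{
    // record/print the failure instead of aborting the whole run
    printf("%.*s(%u): assertion failed: %.*s\n",
           cast(int) file.length, file.ptr, cast(uint) line,
           cast(int) msg.length, msg.ptr);
}

shared static this()
{
    setAssertHandler(&reportAssert);
}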
 One tiny tip: test.d(6): unittest sqr failed
 This way ide's and editors can parse it like regular D errors and jump to the
failed test.
Good.
I think there's a report in bugzilla from Andrei requesting that the unittests
themselves can be turned into documentation.<
Given the size of my unittests I don't think this is a good idea in general. Maybe Andrei was trying to add something like Python doctests to D. I think that to do this well, a way is needed to tell apart "documentation doctests" from normal (and often boring or large) unittests.
Shouldn't this be part of a tool that compiles and runs tests of each module?<
Most of the features I have listed are meant for an IDE, while this one is for a programmer who uses just the command line with no tools :-) And to enable/disable test groups, things get a little more complex.
rdmd has the option --main, together with -unittest you can easily do this.<
This page doesn't list that option: http://www.digitalmars.com/d/2.0/rdmd.html

rdmd prints:

--eval=code       evaluate code à la perl -e (multiple --eval allowed)
--loop            assume "foreach (line; stdin.byLine()) { ... }" for eval
--main            add a stub main program to the mix (e.g. for unittesting)

I don't understand how to use those three switches. Do you know where I can find usage examples for those three?
This unittest halts with an AssertError inside inner().<
That's why my throws!()() has to return a boolean instead of just asserting :-( I don't know if this can be seen as a dmd bug.
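To show what I mean, a boolean-returning sketch (find() and NotFound are just made-up examples):

class NotFound : Exception {
    this(string msg) { super(msg); }
}

int find(int[] haystack, int needle) {
    foreach (i, x; haystack)
        if (x == needle)
            return cast(int) i;
    throw new NotFound("needle not in haystack");
}

bool throws(Exceptions, E)(lazy E expr) {
    try
        expr();
    catch (Exception e)
        return (cast(Exceptions) e) !is null;
    return false;
}

unittest {
    int[] data = [10, 20, 30];
    // report-and-continue style: throws() itself never raises an AssertError
    assert(throws!NotFound(find(data, 99)));
    assert(!throws!NotFound(find(data, 20)));
}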
I think I like the idea that unittests should be close to the code it tests,
otherwise it is a different kind of test.<
There are strong opinions on this. Some people love it, other people hate it. Some people appreciate moving large amounts of unittests into separate modules. The best thing is to let the programmers put the unittests where they want.
Perhaps we should consider ddoc, contracts and unittest as much part of the
code as the code itself?<
Of course :-)

Bye and thank you for your answers and ideas,
bearophile
May 05 2010
parent Lutger <lutger.blijdestijn gmail.com> writes:
bearophile wrote:

 Lutger:
...
rdmd has the option --main, together with -unittest you can easily do
this.<
This page doesn't list that option: http://www.digitalmars.com/d/2.0/rdmd.html

rdmd prints:

--eval=code       evaluate code à la perl -e (multiple --eval allowed)
--loop            assume "foreach (line; stdin.byLine()) { ... }" for eval
--main            add a stub main program to the mix (e.g. for unittesting)

I don't understand how to use those three switches. Do you know where I can find usage examples for those three?
I don't know. This is how you can unittest a single module:

rdmd --main -unittest test.d

It just links in a predefined .d file with a main() routine, that's all.
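In other words the stub is essentially just an empty entry point (a sketch, not rdmd's literal file):

// what --main effectively adds: an empty main(), so that building with
// -unittest makes druntime run the module's unittest blocks before main()
void main() {}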
May 05 2010
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Lutger:
 I don't know. This is how you can unittest a single module:
 rdmd --main -unittest test.d
 it just links in a predefined .d file with a main() routine, that's all.
I have done some tests, but I am not able to use it well, or there are some bugs, I don't know. But even if it works, I don't know if that can solve the problem at hand, because modules call each other, and I don't know if that command compiles just one module with -unittest.

Bye and sorry,
bearophile
May 07 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
bearophile wrote:
 Lutger:
 I don't know. This is how you can unittest a single module:
 rdmd --main -unittest test.d
 it just links in a predefined .d file with a main() routine, that's all.
I have done some tests, but I am not able to use it well, or there are some bugs, I don't know. But even if it works, I don't know if that can solve the problem at hand, because modules call each other, and I don't know if that command compiles just one module with -unittest. Bye and sorry, bearophile
Passing --chatty to rdmd prints the dmd invocation before executing it.

Andrei
May 07 2010
parent reply Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, el  7 de mayo a las 06:59 me escribiste:
 bearophile wrote:
Lutger:
I don't know. This is how you can unittest a single module:
rdmd --main -unittest test.d
it just links in a predefined .d file with a main() routine, that's all.
I have done some tests, but I am not able to use it well, or there are some bugs, I don't know. But even if it works, I don't know if that can solve the problem at hand, because modules call each other, and I don't know if that command compiles just one module with -unittest. Bye and sorry, bearophile
Passing --chatty to rdmd prints the dmd invocation before executing it.
I wonder why you always pick clever names, what's wrong with --verbose?

--
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
Hello? Is there anybody in there?
Just nod if you can hear me.
Is there anyone at home?
May 07 2010
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 Andrei Alexandrescu, el  7 de mayo a las 06:59 me escribiste:
 bearophile wrote:
 Lutger:
 I don't know. This is how you can unittest a single module:
 rdmd --main -unittest test.d
 it just links in a predefined .d file with a main() routine, that's all.
I have done some tests, but I am not able to use it well, or there are some bugs, I don't know. But even if it works, I don't know if that can solve the problem at hand, because modules call each other, and I don't know if that command compiles just one module with -unittest. Bye and sorry, bearophile
Passing --chatty to rdmd prints the dmd invocation before executing it.
I wonder why you always pick clever names, what's wrong with --verbose?
It was easily confused with dmd's -v.

Andrei
May 07 2010
prev sibling parent bearophile <bearophileHUGS lycos.com> writes:
Michel Fortin:

Sorry for the delay in my answer, Michel.


 1. to print a list of all the tests as they run
 2. to print a list of the tests that fails
For an IDE to find such lists in an easy way, my point 4 is useful: having those names in the JSON output.
 Here's a way to do it with the current syntax:
        unittest {
                 scope (failure) writeln("Calculate pi using method X: FAIL");
I didn't know/remember about this, it's nice.
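Written out as a self-contained test, just to be sure I understand the trick (the pi computation is only a placeholder example):

import std.stdio : writeln;

unittest {
    scope (failure) writeln("Calculate pi using method X: FAIL");
    scope (success) writeln("Calculate pi using method X: PASS");

    // a few terms of the Leibniz series, purely as a placeholder test body
    double pi = 4.0 * (1.0 - 1.0/3 + 1.0/5 - 1.0/7 + 1.0/9 - 1.0/11);
    assert(pi > 2.9 && pi < 3.4);
}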
 Whether we want to output a line for every test, I'm not sure.
I like having a list of all unittests in the JSON, but I don't need an output line for every test; and I don't need dmd to print this, it can be left to the IDE.
On the other hand, it's useful, especially when a test hangs, takes too long,
or crashes the program, to be able to see the list of all the tests as they
run.<
Some unit test systems use a short output syntax; dmd could print the same:

....x.....x...
Done 14 unit tests in 1.25 seconds, 2 failed.
         unittest "Calculate pi using method x" {
I prefer this syntax (compared to my original post I have removed the useless parentheses around the unittest name):

/// Calculate pi using method x
unittest pi_with_x { ... }
 I'd suggest that the runtime print the name of a test when it fails:
This was point 3 in my original list. Using the suggestion by Lutger (with '' marks added around the name):

test.d(6): unittest 'pi_with_x' failed.
 If the environment variable D_VERBOSE_UNITTEST
I prefer normal compiler switches.

Now I have received enough answers and I can write a bug report or DEP. The good thing is that all this doesn't require hard or big changes to dmd, just small ones.

Bye and thank you,
bearophile
May 07 2010