
digitalmars.D - DMD unittest fail reporting…

Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
… is completely hideous, or am I unique in objecting to the mess of
output you get on a test fail?

Yes I get some output that is useful, but the stack trace is stuff I do
not want to know about. Can the stack trace be switched off by default
from DMD version 0 and every version after that?



Does ldc2 have an rdmd mode?


--
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Dec 04 2015
ZombineDev <valid_email he.re> writes:
On Friday, 4 December 2015 at 19:00:37 UTC, Russel Winder wrote:
 … is completely hideous, or am I unique in objecting to the 
 mess of output you get on a test fail?
You can look at some of the DUB packages (http://code.dlang.org/search?q=test) for more advanced testing facilities. For example, I sometimes use dunit (https://github.com/nomad-software/dunit), which has nice test-results reporting:
 DUnit by Gary Willoughby.
 -> Running unit tests
 - example
 
 +----------------------------------------------------------------------
 | Failed asserting equal
 +----------------------------------------------------------------------
 | File: example.d
 | Line: 91
 +----------------------------------------------------------------------
 | ✓ Expected value: (int) 1
 | ✗ Actual value: (ulong) 2
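A test producing a report like that looks roughly like this. From memory, dunit.toolkit provides UFCS-style assertion helpers such as assertEqual, but check the repository for the exact names; the module below is invented for the example:

// Hypothetical example module; assertEqual is assumed from dunit.toolkit.
module example;

import dunit.toolkit;

ulong answer()
{
    return 2; // deliberately wrong, to trigger the failure report
}

unittest
{
    answer().assertEqual(1); // fails: expected (int) 1, actual (ulong) 2
}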
By the way, looking at the code, it shouldn't be too hard to write your own test runner: https://github.com/nomad-software/dunit/blob/master/source/dunit/moduleunittester.d?ts=3
Dec 04 2015
Atila Neves <atila.neves gmail.com> writes:
On Friday, 4 December 2015 at 19:38:35 UTC, ZombineDev wrote:
 On Friday, 4 December 2015 at 19:00:37 UTC, Russel Winder wrote:
 [...]

 By the way, looking at the code, it shouldn't be too hard to write your own test runner: https://github.com/nomad-software/dunit/blob/master/source/dunit/moduleunittester.d?ts=3
It is if you want to run individual unit tests.

Atila
Dec 07 2015
Chris Wright <dhasenan gmail.com> writes:
On Fri, 04 Dec 2015 19:00:37 +0000, Russel Winder via Digitalmars-d wrote:

 … is completely hideous, or am I unique in objecting to the mess of
 output you get on a test fail?
You are probably thinking mainly of failing assertions directly in the body of a unittest block, but assertions can happen anywhere, and uncaught exceptions can also fail a unittest. In either of these cases, I do want to see a stacktrace.

The problem with stacktraces at the moment is this:

core.exception.AssertError source/url.d(925): Assertion failure
----------------
??:? _d_assert [0x52b10f]
??:? void url.__assert(int) [0x528597]
source/url.d:925 immutable(char)[] url.punyDecode(immutable(char)[]) [0x5139b0]
source/url.d:989 void url.__unittestL988_12() [0x513c39]
??:? void url.__modtest() [0x528535]
??:? int core.runtime.runModuleUnitTests().__foreachbody2(object.ModuleInfo*) [0x561c3a]
??:? int object.ModuleInfo.opApply(scope int delegate(object.ModuleInfo*)).__lambda2(immutable(object.ModuleInfo*)) [0x52a8d7]
??:? int rt.minfo.moduleinfos_apply(scope int delegate(immutable(object.ModuleInfo*))).__foreachbody2(ref rt.sections_elf_shared.DSO) [0x53137a]
??:? int rt.sections_elf_shared.DSO.opApply(scope int delegate(ref rt.sections_elf_shared.DSO)) [0x531409]
??:? int rt.minfo.moduleinfos_apply(scope int delegate(immutable(object.ModuleInfo*))) [0x53130b]
??:? int object.ModuleInfo.opApply(scope int delegate(object.ModuleInfo*)) [0x52a8b3]
??:? runModuleUnitTests [0x561aad]
??:? void rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])*).runAll() [0x52da06]
??:? void rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])*).tryExec(scope void delegate()) [0x52d9b4]
??:? _d_run_main [0x52d911]
??:? main [0x50ee8f]
??:? __libc_start_main [0xcb7e2ec4]

20 lines of stacktrace. The potentially useful portion of the stacktrace:

source/url.d:925 immutable(char)[] url.punyDecode(immutable(char)[]) [0x5139b0]
source/url.d:989 void url.__unittestL988_12() [0x513c39]

I think a good approximation would be to skip from the head of the stack to the first non-runtime frame, but, if no non-runtime frames exist, emit the whole stacktrace. That wouldn't hide anything in the case of a runtime crash, but it would eliminate about 15 lines of cruft.
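Sketched as code, that trimming might look like this. The runtime-frame test is a crude, name-based guess; druntime doesn't expose anything like it, and a real implementation would want a better heuristic:

import std.algorithm.searching : canFind;

// Trim leading and trailing runtime frames from a textual stacktrace,
// but emit everything when no user frame can be found at all.
string[] trimTrace(string[] frames)
{
    // Crude guess at "runtime cruft": druntime entry points and the
    // module-iteration plumbing that appears in every unittest trace.
    static bool isRuntime(string frame)
    {
        return frame.canFind("_d_assert")
            || frame.canFind("_d_run_main")
            || frame.canFind("core.runtime")
            || frame.canFind("rt.")
            || frame.canFind("object.ModuleInfo")
            || frame.canFind("runModuleUnitTests")
            || frame.canFind("__libc_start_main")
            || frame.canFind(" main ");
    }

    // Skip leading runtime frames...
    size_t start = 0;
    while (start < frames.length && isRuntime(frames[start]))
        ++start;

    // ...but if everything is runtime, it's a runtime crash: show it all.
    if (start == frames.length)
        return frames;

    // Also drop the trailing runtime frames below the user code.
    size_t end = frames.length;
    while (end > start && isRuntime(frames[end - 1]))
        --end;

    return frames[start .. end];
}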
Dec 04 2015
Jacob Carlborg <doob me.com> writes:
On 2015-12-04 20:00, Russel Winder via Digitalmars-d wrote:
 … is completely hideous, or am I unique in objecting to the mess of
 output you get on a test fail?

 Yes I get some output that is useful, but the stack trace is stuff I do
 not want to know about. Can the stack trace be switched off by default
 from DMD version 0 and every version after that?
You don't want a stack trace for a failed unit test? I have never used a unit test framework that doesn't output the stack trace for a failed unit test. Why would you want that?

--
/Jacob Carlborg
Dec 05 2015
Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sat, 2015-12-05 at 10:24 +0100, Jacob Carlborg via Digitalmars-d
wrote:
 […]

 You don't want a stack trace for a failed unit test? I have never
 used a unit test framework that doesn't output the stack trace for a
 failed unit test. Why would you want that?
I put it the other way round: why do you want a stack trace from a failure of a unit test? The stack trace tells you nothing about the code under test that the test doesn't already tell you. All you need to know is which tests failed and why. This of course requires power asserts, or horrible things like assertEqual and the like, to know the state that caused the assertion to fail. For me, PyTest is the model system here, along with Spock and ScalaTest. Perhaps also Catch.

Just because some unittests have done something in the past doesn't mean it is the right thing to do. The question is what the programmer needs for the task at hand. Stack traces add nothing useful to the analysis of a test pass or fail.

I will be looking at dunit, specd and dcheck. The current hypothesis, though, is that the built-in unit test is not as good as it needs to be, or at least could be. However, if instead of assert, which is a terminating assertion, the system did what the Go system does and just had a way of collecting test fail messages, things would be a lot better. This might be a non-breaking change if implemented properly.

--
Russel.
Dec 05 2015
Chris Wright <dhasenan gmail.com> writes:
On Sat, 05 Dec 2015 11:12:46 +0000, Russel Winder via Digitalmars-d wrote:

 On Sat, 2015-12-05 at 10:24 +0100, Jacob Carlborg via Digitalmars-d
 wrote:
 […]
 
 You don't want a stack trace for a failed unit test? I have never used
 a unit test framework that doesn't output the stack trace for a failed
 unit test. Why would you want that?
I put it the other way round: why do you want a stack trace from a failure of a unit test?
You seem to be coming from a Go background. D isn't Go.

In Go, a test will fail if you call testing.T.Error[f]. It can crash if it runs into a deadlock or some code throws an exception, like on an invalid cast or array bounds error or manually calling panic(). When it crashes, it gives you a stacktrace (actually, one stacktrace per coroutine, and it usually has scads of them active, even if you didn't ask for it) and doesn't continue testing anything else.

When I've encountered a panic in Go, the stacktrace was the only thing that allowed me to debug the problem instead of throwing my computer out the window and then cursing at Rob Pike for a solid hour. The fact that I got stacktraces for a dozen unrelated coroutines when I'd never started one is pure annoyance, but it's probably helpful for people debugging the Go runtime.

In D, you use assert() rather than testing.T.Error[f]. But they aren't analogous. D's assert is much closer to Go's panic, except it carries with it programmatically readable information on the type of thing that caused the panic. Like panic, assert can happen anywhere.

However, D's assert already gives you one line of stacktrace automatically. For simple cases, this is good enough, and the stacktrace is just noise. Since D's AssertError doesn't include the values inside the expression that failed, in any nontrivial case you need a stacktrace to help reconstruct what happened.

Even if it did include that information, let's say you had an invariant contract on an object. The invariant contract is called implicitly (except in release builds) whenever a public method is called on the object. You see that one field on the object is wrong, and you see the problematic value. You need a stacktrace in order to have a clue *why* the invariant failed.
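To make that last point concrete, here's a toy class (the names are invented for the example) where the failure point and the cause live in different places:

// The assertion fires inside the implicit invariant check on method
// exit, not at the line that actually corrupted the state.
class Account
{
    int balance;

    invariant()
    {
        assert(balance >= 0, "balance went negative");
    }

    void withdraw(int amount)
    {
        balance -= amount; // no validation; can break the invariant
    }
}

unittest
{
    auto a = new Account;
    a.withdraw(5); // AssertError from the invariant on exit; without a
                   // stacktrace you only learn *that* balance < 0, not
                   // which call sequence made it so
}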
 The stack trace tells you nothing about the code
 under test that the test doesn't already tell you. All you need to know
 is which tests failed and why.
 
 Just because some unittests have done something in the past doesn't mean
 it is the right thing to do. The question is what does the programmer
 need for the task at hand. Stack traces add nothing useful to the
 analysis of the test pass or fail.
 
 I will be looking at dunit, specd and dcheck. The current hypothesis is
 though that the built in unit test is not as good as it needs to be, or
 at least could be.
unittest{}, as Walter has said in the past, isn't intended to have all the features you might want. It's intended to be outrageously convenient. It's intended to get everyone to write tests when they otherwise wouldn't have gone through the trouble. I seem to recall him advocating for more advanced unittesting libraries, even.
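And it is convenient: the whole ceremony is a block next to the code, run with `dmd -unittest` (here with `-main` to supply an empty entry point):

int twice(int x)
{
    return 2 * x;
}

unittest
{
    // Runs before main() when compiled with -unittest; no framework needed.
    assert(twice(2) == 4);
    assert(twice(-3) == -6);
}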
Dec 05 2015
Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sat, 2015-12-05 at 17:46 +0000, Chris Wright via Digitalmars-d
wrote:
[…]

 You seem to be coming from a Go background. D isn't Go.
I'm from a FORTRAN IV, FORTRAN G, Algol 68, Pascal, C, FORTRAN 77, C++, Miranda, Lisp, Scheme, Modula-2, Java, Python, Scala, Groovy, D, Haskell, Ceylon, Kotlin, Clojure, Go, Frege background, but to mention this would just seem to be bragging. ;-)
 In Go, a test will fail if you call testing.T.Error[f]. It can crash
 if it runs into a deadlock or some code throws an exception, like on
 an invalid cast or array bounds error or manually calling panic().
 When it crashes, it gives you a stacktrace (actually, one stacktrace
 per coroutine, and it usually has scads of them active, even if you
 didn't ask for it) and doesn't continue testing anything else.
For the purposes of this argument, let's ignore crashes and manually executed panics. The issue is the difference in behaviour between assert and Errorf. In languages that use it, assert raises an exception, and that causes termination, which means the remaining tests do not execute unless the framework makes sure they do. D unittest does not. Errorf notes the failure and carries on, which is crucially important for good testing using loops. In none of these cases should a stack trace ever be reported, since it is irrelevant to the testing scenario.
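What I mean, as a sketch in D; the TestLog type here is invented for illustration, not from any library:

import std.format : format;

// Record failures and keep going, in the style of Go's testing.T.Errorf,
// instead of throwing on the first bad case.
struct TestLog
{
    string[] failures;

    void errorf(Args...)(string fmt, Args args)
    {
        failures ~= format(fmt, args);
    }
}

unittest
{
    TestLog t;
    immutable cases = [[0, 0], [1, 2], [2, 4], [3, 7]]; // last pair is deliberately wrong

    foreach (c; cases)
    {
        immutable got = 2 * c[0];
        if (got != c[1])
            t.errorf("twice(%s): expected %s, got %s", c[0], c[1], got);
    }

    // Every case ran; report all collected failures once, at the end.
    assert(t.failures.length == 0, format("%-(%s\n%)", t.failures));
}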
 When I've encountered a panic in Go, the stacktrace was the only thing
 that allowed me to debug the problem instead of throwing my computer
 out the window and then cursing at Rob Pike for a solid hour. The fact
 that I got stacktraces for a dozen unrelated coroutines when I'd never
 started one is pure annoyance, but it's probably helpful for people
 debugging the Go runtime.
To be honest I am not sure how this rant contributes to the discussion of testing.
 In D, you use assert() rather than testing.T.Error[f]. But they aren't
 analogous. D's assert is much closer to Go's panic, except it carries
 with it programmatically readable information on the type of thing
 that caused the panic.
Very true, and that is core to the issue here. asserts raise exceptions which, unless handled properly by the testing framework, cause termination. This is at the heart of the problem. For data-driven testing, some form of loop is required. The loop must not terminate if all the tests are to run. pytest.mark.parametrize does the right thing, as do normal loops and Errorf. D assert does the wrong thing.
 Like panic, assert can happen anywhere.
Technically, I guess yes, but…
 However, D's assert already gives you one line of stacktrace
 automatically. For simple cases, this is good enough, and the
 stacktrace is just noise.
But it stops the loop. This makes the loop a fundamentally useless construct in tests. This is at the heart of the problem with the unittest construct.
 Since D's AssertError doesn't include the values inside the expression
 that failed, in any nontrivial case you need a stacktrace to help
 reconstruct what happened.
I think this is the evidence that proves that the current D testing framework is in need of work to make it better than it is currently.
 Even if it did include that information, let's say you had an
 invariant contract on an object. The invariant contract is called
 implicitly (except in release builds) whenever a public method is
 called on the object. You see that one field on the object is wrong,
 and you see the problematic value. You need a stacktrace in order to
 have a clue *why* the invariant failed.
If a stacktrace is needed, the testing framework is inadequate.

[…]
 unittest{}, as Walter has said in the past, isn't intended to have all
 the features you might want. It's intended to be outrageously
 convenient. It's intended to get everyone to write tests when they
 otherwise wouldn't have gone through the trouble.
Being outrageously convenient means it needs to be the tool of choice. Currently it isn't, really, for anything other than trivial example-based testing on a case-by-case basis.
 I seem to recall him advocating for more advanced unittesting
 libraries, even.
dunit (being a Delphi framework) doesn't fit the bill :-) dunit being an xUnit clone doesn't really work, in the same way that cppUnit doesn't. Catch is the current front-runner C++ testing framework; there should be a D equivalent. dspecs and specd are interesting, but I am not sure they are being developed. dcheck is great stuff, and it is sad that it has been dormant for two years according to the GitHub repository record.

In writing this I realize I have supped once too often of the Ricard. I shall now switch to red wine and cease emailing.

--
Russel.
Dec 05 2015
Chris Wright <dhasenan gmail.com> writes:
On Sat, 05 Dec 2015 20:44:53 +0000, Russel Winder via Digitalmars-d wrote:

 If a stacktrace is needed, the testing framework is inadequate.
But there are problems with saying that the builtin assert function should show the entire expression with operand values, nicely formatted.

assert has to serve both unittesting and contract programming. When dealing with contract programming and failed contracts, you risk objects being in invalid states. Trying to call methods on such objects in order to provide descriptive error messages is risky. A helpful stacktrace might be transformed into a segmentation fault, for instance. Or an assert error might be raised while attempting to report an assert error.

assert is a builtin function. It's part of the runtime. That puts rather strict constraints on how much it can do. The runtime can't depend on the standard library, for instance, so if you want assert() to include the values that were problematic, the runtime has to include that formatting code. That doesn't seem like a lot on its own, but std.format is probably a couple thousand lines of code. (About 3,000 semicolons, including unittests.)

I would like these nicely formatted messages. I don't think it's reasonably practical to add them to assert. I'll spend some thought on how to implement them outside the runtime, for a testing framework, though I'm not optimistic on a nice API. Catch does it with macros and by parsing C++, and the nearest equivalent in D is string mixins, which are syntactically more complex. Spock does it with a compiler plugin. I know I can do it with strings and string mixins, but that's not exactly going to be a clean API.
 unittest{}, as Walter has said in the past, isn't intended to have all
 the features you might want. It's intended to be outrageously
 convenient.
 It's intended to get everyone to write tests when they otherwise
 wouldn't have gone through the trouble.
Being outrageously convenient means it needs to be the tool of choice.
I don't see why those would need to be related. To be useful, it has to be good enough to catch errors and convenient enough to induce people to test when they wouldn't otherwise.
 Which currently it isn't really for anything other than trivial
 example-based testing on a case-by-case basis.
Most unittesting is a developer testing a handful of specific cases that seem interesting or useful to test. It's easy, and it's usually good enough.

Property testing like in dcheck requires you to be able to produce a series of algorithms that jointly create input/output pairs that cover the entire domain of the function under test. That's often difficult. And it's mainly applicable for pure functions, not interaction-based testing.

Random example: I wrote a URL parsing library. How would I test it using dcheck? I would come up with a series of URL parts according to patterns that I thought up and test based on those. That's basically identical to coming up with specific URLs myself, except I have to implement a second algorithm to join parts into a URL. The coverage is pretty much the same. I get to see if it barfs on some strange character, or more likely I'll find that my test URL generator produces invalid URLs. That would cost me a lot more time and thought than just writing out a few examples. It would have coverage holes because of valid URL patterns I hadn't thought of. Overall no benefit.

On the other hand, as part of this library I implemented a punycode codec. I can use dcheck or a similar system to generate random valid Unicode strings. I can't use this to ensure that my codec is correct relative to the RFC, but I can at least ensure that the encoding is self-consistent. So it's a good tool to have available, but I do need other ways to test my code.
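The self-consistency check is just a round-trip property. In the sketch below, punyEncode and punyDecode are placeholder stand-ins for a real codec's entry points, and the generator is the simplest thing that avoids invalid code points:

import std.conv : to;
import std.random : Random, uniform;

// Placeholder stand-ins so the sketch compiles; a real codec goes here.
string punyEncode(string s) { return s; } // not a real encoder
string punyDecode(string s) { return s; } // not a real decoder

// Build a random valid Unicode string: code points below the surrogate
// range are always well-formed.
string randomUnicode(ref Random rng, size_t len)
{
    dchar[] s;
    foreach (_; 0 .. len)
        s ~= cast(dchar) uniform(0x20, 0xD800, rng);
    return s.to!string;
}

unittest
{
    auto rng = Random(42);
    foreach (_; 0 .. 100)
    {
        auto input = randomUnicode(rng, uniform(1, 20, rng));
        // The property: decoding an encoding gives back the input,
        // whatever the RFC says about either form.
        assert(punyDecode(punyEncode(input)) == input);
    }
}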
Dec 05 2015
Jacob Carlborg <doob me.com> writes:
On 2015-12-06 00:09, Chris Wright wrote:

 But there are problems with saying that the builtin assert function
 should show the entire expression with operand values, nicely formatted.

 assert has to serve both unittesting and contract programming. When
 dealing with contract programming and failed contracts, you risk objects
 being in invalid states. Trying to call methods on such objects in order
 to provide descriptive error messages is risky. A helpful stacktrace
 might be transformed into a segmentation fault, for instance. Or an
 assert error might be raised while attempting to report an assert error.

 assert is a builtin function. It's part of the runtime. That puts rather
 strict constraints on how much it can do. The runtime can't depend on the
 standard library, for instance, so if you want assert() to include the
 values that were problematic, the runtime has to include that formatting
 code. That doesn't seem like a lot on its own, but std.format is probably
 a couple thousand lines of code. (About 3,000 semicolons, including
 unittests.)

 I would like these nicely formatted messages. I don't think it's
 reasonably practical to add them to assert. I'll spend some thought on
 how to implement them outside the runtime, for a testing framework,
 though I'm not optimistic on a nice API. Catch does it with macros and by
 parsing C++, and the nearest equivalent in D is string mixins, which are
 syntactically more complex. Spock does it with a compiler plugin. I know
 I can do it with strings and string mixins, but that's not exactly going
 to be a clean API.
Another good use case for AST macros.

--
/Jacob Carlborg
Dec 06 2015
Jacob Carlborg <doob me.com> writes:
On 2015-12-05 21:44, Russel Winder via Digitalmars-d wrote:

 For the purposes of this argument, let's ignore crashes or manually
 executed panics. The issue is the difference in behaviour between
 assert and Errorf. assert in languages that use it causes an exception
 and this causes termination which means execution of other tests does
 not happen unless the framework makes sure this happens. D unittest
 does not. Errorf notes the failure and carries on, this is crucial
 important for good testing using loops.
I think that the default test runner is completely broken in terminating the complete test suite when a test fails, although I do think it should terminate the rest of the test that failed. I also don't think one should test using loops.
 Very true, and that is core to the issue here. asserts raise exceptions
 which , unless handled by the testing framework properly, cause
 termination. This is at the heart of the problem. For data-driven
 testing some form of loop is required. The loop must not terminate if
 all the tests are to run. pytest.mark.parametrize does the right thing,
 as do normal loops and Errorf. D assert does the wrong thing.
Nothing says that you have to use assert in a unit test ;) I'm not sure what your data looks like or what you're actually testing. But when I had the need to test multiple values, either it was a data structure, in which case I could do one assert for the whole data structure, or I used multiple tests.
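For example, with a value type, one assert covers every field at once; the ParsedUrl struct here is invented for the example:

// Comparing whole values instead of looping over individual fields.
struct ParsedUrl
{
    string scheme;
    string host;
    ushort port;
}

unittest
{
    auto actual = ParsedUrl("https", "dlang.org", 443); // stand-in for a real parse() call
    auto expected = ParsedUrl("https", "dlang.org", 443);
    assert(actual == expected); // default struct equality compares all fields
}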
 I think this is the evidence that proves that the current D testing
 framework is in need of work to make it better than it is currently.
Absolutely, the built-in support is almost completely broken.
 If a stacktrace is needed the testing framework is inadequate.
I guess it depends on how you write your tests. If you only test a single function which doesn't call anything else, that will work. But as soon as the function you're testing calls other functions, a stack trace is really needed. What do you do when you get a test failure because some exception/assertion is thrown deep inside some code you have never seen before, and you have no idea how the execution got there?
 dspecs
I'm not sure if you're referring to my "framework" [1] or this one [2], but neither of them will catch any exceptions; they behave just like the standard test runner. I would like to implement a custom runner that catches assertions and continues with the next tests, though.
 and specd are
This one seems to only catch "MatchException". So if any other exception is thrown, including an assert error, it will have the same behavior as the standard test runner.

[1] https://github.com/jacob-carlborg/dspec
[2] https://github.com/youxkei/dspecs

--
/Jacob Carlborg
Dec 06 2015
Chris Wright <dhasenan gmail.com> writes:
On Sun, 06 Dec 2015 12:11:08 +0100, Jacob Carlborg wrote:

 I also don't think one should test using loops.
Table-based testing is quite handy in a number of circumstances, as long as you're using a framework that makes it viable. Asserts that throw exceptions make it far less viable. One other reason it works in Go is that you already have a tradition of laboriously constructing descriptive error messages by hand, due to the lack of stacktraces. But since Spock automates that for you, it would be more viable with Spock than with D's default unittests.
Dec 06 2015
Jacob Carlborg <doob me.com> writes:
On 2015-12-05 12:12, Russel Winder via Digitalmars-d wrote:

 I put it the other way round: why do you want a stack trace from a
 failure of a unit test? The stack trace tells you nothing about the
 code under test that the test doesn't already tell you. All you need to
 know is which tests failed and why. This of course requires power
 asserts or horrible things like assertEqual and the like to know the
 state that caused the assertion fail. For me, PyTest is the model
 system here, along with Spock, and ScalaTest. Perhaps also Catch.
ScalaTest will print a stack trace on failure, at least when I run it from inside Eclipse. So will RSpec, which I'm guessing ScalaTest is modeled after. In RSpec, with the default formatter, it will print a dot for a passed test and an F for a failed test. Then at the end it will print the stack traces for all failed tests.
 Just because some unittests have done something in the past doesn't
 mean it is the right thing to do. The question is what does the
 programmer need for the task at hand. Stack traces add nothing useful
 to the analysis of the test pass or fail.
I guess it depends on how you write your tests. If you only test a single function which doesn't call anything else, that will work. But as soon as the function you're testing calls other functions, a stack trace is really needed. What do you do when you get a test failure because some exception/assertion is thrown deep inside some code you have never seen before, and you have no idea how the execution got there?
 I will be looking at dunit, specd and dcheck. The current hypothesis is
 though that the built in unit test is not as good as it needs to be, or
 at least could be.
The built-in runner is so bad it's almost broken.

--
/Jacob Carlborg
Dec 06 2015
Atila Neves <atila.neves gmail.com> writes:
On Friday, 4 December 2015 at 19:00:37 UTC, Russel Winder wrote:
 … is completely hideous, or am I unique in objecting to the 
 mess of output you get on a test fail?
You're not alone. Hence the number of D unit testing libraries. I'm partial to mine, unit-threaded. I'd link but I'm on a tablet.

Atila
Dec 05 2015
Chris Wright <dhasenan gmail.com> writes:
I quickly hacked up something to make assertions slightly more verbose: 
http://dpaste.dzfl.pl/f94b6ed80b3a

This can be extended quite a bit without a ton of effort, but it would 
eventually devolve into fully parsing D using compile-time function 
execution. Still, Catch can't even handle logical or, so it should be 
trivial to beat it in terms of quality of error reports. No real hope of 
matching Spock.

The interface leaves something to be desired:
mixin enforce!(q{i == j});

Say what you will, C preprocessor macros are very low on syntactic 
overhead.

The other ways I know of for passing in an expression involve eager 
evaluation or converting the expression to an opaque delegate. The mixin 
is required in order to access local variables.

The name "enforce" is obviously not appropriate, and it should ideally 
have pluggable error reporting mechanisms. But for a first hack, it's not 
so bad.
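In the same spirit, a fresh sketch (not the dpaste code; it only understands the `a == b` shape, and the name `check` is as arbitrary as `enforce` was):

import std.string : indexOf;

// Build, at compile time, an assert that reports both operands of ==.
// Usage: mixin(check!q{i == j}); the generated code is mixed into the
// caller's scope, so local variables are visible.
template check(string expr)
{
    enum eq = expr.indexOf("==");
    static if (eq >= 0)
        enum check = "{ import std.format : format; assert(("
            ~ expr[0 .. eq] ~ ") == (" ~ expr[eq + 2 .. $]
            ~ "), format(\"" ~ expr ~ " failed: left = %s, right = %s\", "
            ~ expr[0 .. eq] ~ ", " ~ expr[eq + 2 .. $] ~ ")); }";
    else
        enum check = "assert(" ~ expr ~ ");";
}

unittest
{
    int i = 2, j = 2;
    mixin(check!q{i == j}); // passes; with j = 3 the message would read:
                            // i == j failed: left = 2, right = 3
}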

I might clean this up and put it on DUB.
Dec 05 2015
parent Atila Neves <atila.neves gmail.com> writes:
On Sunday, 6 December 2015 at 03:23:53 UTC, Chris Wright wrote:
 I quickly hacked up something to make assertions slightly more 
 verbose: http://dpaste.dzfl.pl/f94b6ed80b3a

 This can be extended quite a bit without a ton of effort, but 
 it would eventually devolve into fully parsing D using 
 compile-time function execution. Still, Catch can't even handle 
 logical or, so it should be trivial to beat it in terms of 
 quality of error reports. No real hope of matching Spock.

 The interface leaves something to be desired:
 mixin enforce!(q{i == j});

 Say what you will, C preprocessor macros are very low on 
 syntactic overhead.

 The other ways I know of for passing in an expression involve 
 eager evaluation or convert the expression to an opaque 
 delegate. The mixin is required in order to access local 
 variables.

 The name "enforce" is obviously not appropriate, and it should 
 ideally have pluggable error reporting mechanisms. But for a 
 first hack, it's not so bad.

 I might clean this up and put it on DUB.
I guess you missed the discussions on std.experimental.testing? I thought of doing something like your enforce, but decided it was too ugly and unwieldy. It's the only way to copy what Catch does... but unfortunately it's not as nice to read or write.

Like you, I came to the realization that at least this once, preprocessor macros made things easier. I still think that, considering all the alternatives, the `should` functions in unit-threaded are the best way to go. Not surprising since I wrote them, but still.

Atila
Dec 07 2015