
digitalmars.D - Parallel execution of unittests

reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Hello,


A coworker mentioned the idea that unittests could be run in parallel 
(using e.g. a thread pool). I've rigged things to run unittests in 
parallel across modules, and that works well. However, this is too 
coarse-grained - it would be great if individual unittests could be 
distributed across the thread pool. That's more difficult to implement.

This brings up the issue of naming unittests. It's becoming increasingly 
obvious that anonymous unittests don't quite scale - coworkers are 
increasingly talking about "the unittest at line 2035 is failing" and 
such. With unittests executing in multiple threads and issuing e.g. 
logging output, this problem is only likely to get worse. We've 
resisted named unittests but I think there's enough evidence to make the 
change.

Last but not least, virtually nobody I know runs unittests and then 
main. This is quickly becoming an idiom:

version(unittest) void main() {}
else void main()
{
    ...
}

I think it's time to change that. We could do it the 
non-backward-compatible way by redefining -unittest to instruct the 
compiler to not run main. Or we could define another flag such as 
-unittest-only and then deprecate the existing one.

Thoughts? Would anyone want to work on such stuff?


Andrei
Apr 30 2014
next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 A coworker mentioned the idea that unittests could be run in 
 parallel
In D we have strong purity to make it safer to run code in parallel:

pure unittest {}
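As a minimal sketch of that idea (assuming a runner that trusts the compiler's purity check): a pure unittest cannot touch global mutable state or do I/O, so it is parallel-safe by construction.

// A pure unittest: the compiler rejects impure operations inside it,
// so a test runner could schedule it on a thread pool without risk.
pure unittest
{
    int square(int x) pure nothrow { return x * x; }
    assert(square(3) == 9);
    // writeln("debug"); // would not compile: impure call in pure code
}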
 We've resisted named unittests but I think there's enough
 evidence to make the change.
Yes, the optional name for unittests is an improvement:

unittest {}
unittest foo {}

I am very glad your coworker finds such usability problems :-)
 We could do it the non-backward-compatible way by
 redefining -unittest to instruct the compiler to not run main.
Good. I'd also like some built-in way (or partially built-in) to use a module either as the "main module" (to run its demos) or as a module to be imported. This problem is solved in Python with the "if __name__ == "__main__":" idiom.

Bye,
bearophile
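A hedged sketch of one way to approximate that idiom in today's D, using a version switch; the identifier demo_main is made up for illustration:

// Compile with -version=demo_main to run this module's demos;
// omit the flag when the module is merely imported.
version(demo_main)
void main()
{
    import std.stdio : writeln;
    writeln("running this module's demos...");
}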
Apr 30 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/30/14, 8:54 AM, bearophile wrote:
 Andrei Alexandrescu:

 A coworker mentioned the idea that unittests could be run in parallel
In D we have strong purity to make more safe to run code in parallel: pure unittest {}
This doesn't follow. All unittests should be executable concurrently. -- Andrei
Apr 30 2014
next sibling parent reply Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 30 Apr 2014 08:59:42 -0700
Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com>
wrote:

 On 4/30/14, 8:54 AM, bearophile wrote:
 Andrei Alexandrescu:

 A coworker mentioned the idea that unittests could be run in
 parallel
In D we have strong purity to make more safe to run code in parallel: pure unittest {}
This doesn't follow. All unittests should be executable concurrently. -- Andrei
In general, I agree. In reality, there are times when having state across unit tests makes sense - especially when there's expensive setup required for the tests. While it's not something that I generally like to do, I know that we have instances of that where I work.

Also, if the unit tests have to deal with shared resources, they may very well be theoretically independent but would run afoul of each other if run at the same time - a prime example of this would be std.file, which has to operate on the file system. I fully expect that if std.file's unit tests were run in parallel, they would break. Unit tests involving sockets would be another type of test which would be at high risk of breaking, depending on what sockets they need.

Honestly, the idea of running unit tests in parallel makes me very nervous. In general, across modules, I'd expect it to work, but there will be occasional cases where it will break. Across the unittest blocks in a single module, I'd be _very_ worried about breakage. There is nothing whatsoever in the language which guarantees that running them in parallel will work or even makes sense. All that protects us is the convention that unit tests are usually independent of each other, and in my experience, it's common enough that they're not independent that I think that blindly enabling parallelization of unit tests across a single module is definitely a bad idea.

- Jonathan M Davis
Apr 30 2014
next sibling parent reply "Atila Neves" <atila.neves gmail.com> writes:
On Wednesday, 30 April 2014 at 17:50:34 UTC, Jonathan M Davis via 
Digitalmars-d wrote:
 On Wed, 30 Apr 2014 08:59:42 -0700
 Andrei Alexandrescu via Digitalmars-d 
 <digitalmars-d puremagic.com>
 wrote:

 On 4/30/14, 8:54 AM, bearophile wrote:
 Andrei Alexandrescu:

 A coworker mentioned the idea that unittests could be run in
 parallel
In D we have strong purity to make more safe to run code in parallel: pure unittest {}
This doesn't follow. All unittests should be executable concurrently. -- Andrei
In general, I agree. In reality, there are times when having state across unit tests makes sense - especially when there's expensive setup required for the tests. While it's not something that I generally like to do, I know that we have instances of that where I work. Also, if the unit tests have to deal with shared resources, they may very well be theoretically independent but would run afoul of each other if run at the same time - a prime example of this would be std.file, which has to operate on the file system. I fully expect that if std.file's unit tests were run in parallel, they would break. Unit tests involving sockets would be another type of test which would be at high risk of breaking, depending on what sockets they need. Honestly, the idea of running unit tests in parallel makes me very nervous. In general, across modules, I'd expect it to work, but there will be occasional cases where it will break. Across the unittest blocks in a single module, I'd be _very_ worried about breakage. There is nothing whatsoever in the language which guarantees that running them in parallel will work or even makes sense. All that protects us is the convention that unit tests are usually independent of each other, and in my experience, it's common enough that they're not independent that I think that blindly enabling parallelization of unit tests across a single module is definitely a bad idea. - Jonathan M Davis
You're right; blindly enabling parallelisation after the fact is likely to cause problems.

Unit tests though, by definition (and I'm aware there's more than one) have to be independent. They have to not touch the filesystem, or the network. Only CPU and RAM. In my case, and since I had the luxury of implementing a framework first and only writing tests after it was done, running them in parallel was an extra check that they are in fact independent.

Now, it does happen that you're testing code that isn't thread-safe itself, and yes, in that case you have to run the tests in a single thread. That's why I added the @SingleThreaded UDA to my library to enable that. As soon as I tried calling legacy C code...

We could always make running in threads opt-in.

Atila
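For illustration, a minimal sketch of what such an opt-out marker could look like; the names here are hypothetical, not necessarily the API of Atila's actual library:

enum SingleThreaded; // hypothetical marker UDA

@SingleThreaded unittest
{
    // exercises non-thread-safe legacy C code; a runner honoring the
    // marker would execute this block serially, not on the thread pool
}

// A custom runner could detect the marker at compile time, e.g. via
// std.traits.hasUDA!(test, SingleThreaded) for each function returned
// by __traits(getUnitTests, ...).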
Apr 30 2014
next sibling parent reply Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 30 Apr 2014 17:58:34 +0000
Atila Neves via Digitalmars-d <digitalmars-d puremagic.com> wrote:
 Unit tests though, by definition (and I'm aware there are more 
 than one) have to be independent. Have to not touch the 
 filesystem, or the network. Only CPU and RAM.
I disagree with this. A unit test is a test that tests a single piece of functionality - generally a function - and there are functions which have to access the file system or network. And those tests are done in unittest blocks just like any other unit test. I would very much consider std.file's tests to be unit tests.

But even if you don't want to call them unit tests, because they access the file system, the reality of the matter is that tests like them are going to be run in unittest blocks, and we have to take that into account when we decide how we want unittest blocks to be run (e.g. whether they're parallelizable or not).

- Jonathan M Davis
Apr 30 2014
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 30 April 2014 at 18:19:34 UTC, Jonathan M Davis via 
Digitalmars-d wrote:
 On Wed, 30 Apr 2014 17:58:34 +0000
 Atila Neves via Digitalmars-d <digitalmars-d puremagic.com> 
 wrote:
 Unit tests though, by definition (and I'm aware there are more 
 than one) have to be independent. Have to not touch the 
 filesystem, or the network. Only CPU and RAM.
I disagree with this. A unit test is a test that tests a single piece of functionality - generally a function - and there are functions which have to access the file system or network.
They _use_ access to the file system or network, but it is _not_ their functionality. Unit testing is all about verifying small, perfectly separated pieces of functionality which don't depend on the correctness / stability of any other functions / programs. Doing I/O goes against that pretty much by definition and is unfortunately one of the most common testing antipatterns.
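A small sketch of the pattern being advocated here, using nothing beyond Phobos: keep the logic pure and unit-tested, and push the I/O into a thin shell that only integration tests exercise.

import std.file : readText;
import std.string : splitLines;

// pure core: trivially unit-testable, no file system involved
size_t countNonEmptyLines(string text) pure @safe
{
    size_t n;
    foreach (line; text.splitLines)
        if (line.length != 0)
            ++n;
    return n;
}

// thin impure shell, left to integration tests
size_t countNonEmptyLinesInFile(string path)
{
    return countNonEmptyLines(readText(path));
}

pure @safe unittest
{
    assert(countNonEmptyLines("a\n\nb\n") == 2);
}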
Apr 30 2014
next sibling parent reply Xavier Bigand <flamaros.xavier gmail.com> writes:
Le 30/04/2014 21:23, Dicebot a écrit :
 On Wednesday, 30 April 2014 at 18:19:34 UTC, Jonathan M Davis via
 Digitalmars-d wrote:
 On Wed, 30 Apr 2014 17:58:34 +0000
 Atila Neves via Digitalmars-d <digitalmars-d puremagic.com> wrote:
 Unit tests though, by definition (and I'm aware there are more than
 one) have to be independent. Have to not touch the filesystem, or the
 network. Only CPU and RAM.
I disagree with this. A unit test is a test that tests a single piece of functionality - generally a function - and there are functions which have to access the file system or network.
They _use_ access to file system or network, but it is _not_ their functionality. Unit testing is all about verifying small perfectly separated pieces of functionality which don't depend on correctness / stability of any other functions / programs. Doing I/O goes against it pretty much by definition and is unfortunately one of most common testing antipatterns.
Splitting all features at an absolutely atomic level can be achieved for open-source libraries, but it's pretty much impossible for industrial software. Why be so restrictive when it's possible to support both visions by extending the language a little with something already logical?
Apr 30 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 1 May 2014 at 01:45:21 UTC, Xavier Bigand wrote:
 Splitting all features at an absolutely atomic level can be 
 achieved for open-source libraries, but it's pretty much 
 impossible for industrial software. Why be so restrictive 
 when it's possible to support both visions by extending the 
 language a little with something already logical?
You are pretty much saying here "writing good code is possible for open-source libraries but not for industrial software".
May 01 2014
parent reply Xavier Bigand <flamaros.xavier gmail.com> writes:
Le 01/05/2014 09:23, Dicebot a écrit :
 On Thursday, 1 May 2014 at 01:45:21 UTC, Xavier Bigand wrote:
 Splitting all features at an absolutely atomic level can be achieved for
 open-source libraries, but it's pretty much impossible for
 industrial software. Why be so restrictive when it's possible to
 support both visions by extending the language a little with something
 already logical?
You are pretty much saying here "writing good code is possible for open-source libraries but not for industrial software".
It's just a lot harder when you are under pressure. I am working for a very small company and our deadlines clearly don't help us with that, and because I am in the video game industry it's not really critical to have small bugs.

Not everybody has the capacity or resources (essentially time) to design their code in pure conformance with the unittest definition, and IMO that's not an excuse to avoid tests completely. If a language/standard library can help democratize testing, that's a good thing, so maybe writing tests has to stay relatively simple and straightforward.

My point is just that when you are doing things only for yourself, it's often simpler to do them the way they should be done.
May 01 2014
parent "Dicebot" <public dicebot.lv> writes:
On Thursday, 1 May 2014 at 12:04:57 UTC, Xavier Bigand wrote:
 It's just a lot harder when you are under pressure.
 I am working for a very small company and our deadlines 
 clearly don't help us with that, and because I am in the 
 video game industry it's not really critical to have small bugs.
 
 Not everybody has the capacity or resources (essentially 
 time) to design their code in pure conformance with the unittest 
 definition, and IMO that's not an excuse to avoid tests 
 completely.
 If a language/standard library can help democratize 
 testing, that's a good thing, so maybe writing tests has to stay 
 relatively simple and straightforward.
 
 My point is just that when you are doing things only for 
 yourself, it's often simpler to do them the way they should be done.
I know that and don't have the luxury of time for perfect tests either :) But it is more about state of mind than actual time consumption - once you start keeping higher-level tests with I/O separate and observing how a piece of functionality can be tested in a contained way, your approach to designing modules changes. At some point one simply starts to write unit-test-friendly modules on the very first go; it is all about actually thinking about it. Using less OOP and more functional programming helps with that, btw :)

I can readily admit that in real industry projects one is likely to do many different "dirty" things and this is inevitable. What I do object to is the statement that this is the way to go in general, especially in the language's standard library.
May 01 2014
prev sibling parent Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 30/04/2014 20:23, Dicebot wrote:
 On Wednesday, 30 April 2014 at 18:19:34 UTC, Jonathan M Davis via
 Digitalmars-d wrote:
 On Wed, 30 Apr 2014 17:58:34 +0000
 Atila Neves via Digitalmars-d <digitalmars-d puremagic.com> wrote:
 Unit tests though, by definition (and I'm aware there are more than
 one) have to be independent. Have to not touch the filesystem, or the
 network. Only CPU and RAM.
I disagree with this. A unit test is a test that tests a single piece of functionality - generally a function - and there are functions which have to access the file system or network.
They _use_ access to file system or network, but it is _not_ their functionality. Unit testing is all about verifying small perfectly separated pieces of functionality which don't depend on correctness / stability of any other functions / programs. Doing I/O goes against it pretty much by definition and is unfortunately one of most common testing antipatterns.
It is common, but it is not necessarily an anti-pattern. Rather, it likely is just an Integration test instead of a Unit test. See: http://forum.dlang.org/post/lkb0jm$vp8$1 digitalmars.com

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros
May 06 2014
prev sibling parent "Átila Neves" <atila.neves gmail.com> writes:
On Wednesday, 30 April 2014 at 18:19:34 UTC, Jonathan M Davis via 
Digitalmars-d wrote:
 On Wed, 30 Apr 2014 17:58:34 +0000
 Atila Neves via Digitalmars-d <digitalmars-d puremagic.com> 
 wrote:
 Unit tests though, by definition (and I'm aware there are more 
 than one) have to be independent. Have to not touch the 
 filesystem, or the network. Only CPU and RAM.
I disagree with this. A unit test is a test that tests a single piece of functionality - generally a function - and there are functions which have to access the file system or network. And those tests are done in unittest blocks just like any other unit test. I would very much consider std.file's tests to be unit tests. But even if you don't want to call them unit tests, because they access the file system, the reality of the matter is that tests like them are going to be run in unittest blocks, and we have to take that into account when we decide how we want unittest blocks to be run (e.g. whether they're parallelizable or not). - Jonathan M Davis
On what's a unit test: I +1 everything Dicebot and Russel Winder said. Of course there are functions with side effects. Of course they should be tested. But those tests aren't unit tests. Which won't stop people from using a unit test framework to run them. In fact, every test I've ever written using Python's unittest module was an integration test.

But again, you're right. Whatever changes happen have to take into account the current status. And the current status makes it difficult if not impossible to run existing tests in multiple threads by default. One could argue that the Phobos tests should be changed too, but that won't help with the existing client codebase out there.
Apr 30 2014
prev sibling next sibling parent reply Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 2014-04-30 at 11:19 -0700, Jonathan M Davis via Digitalmars-d
wrote:
[…]
 I disagree with this. A unit test is a test that tests a single piece
 of functionality - generally a function - and there are functions which
 have to access the file system or network. And those tests are done in
These are integration/system tests, not unit tests. For unit tests, network activity should be mocked out.
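A hedged illustration of what "mocked out" can look like in D - the code under test receives its transport as a dependency, so the unit test substitutes an in-memory fake for the real socket (all names here are invented for the example):

interface NameService { string fetchName(); }

string greet(NameService svc)
{
    return "Hello, " ~ svc.fetchName();
}

unittest
{
    // the fake stands in for the network-backed implementation
    final class FakeNameService : NameService
    {
        string fetchName() { return "world"; }
    }
    assert(greet(new FakeNameService) == "Hello, world");
}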
 unittest blocks just like any other unit test. I would very much
 consider std.file's tests to be unit tests. But even if you don't
 want to call them unit tests, because they access the file system, the
 reality of the matter is that tests like them are going to be run in
 unittest blocks, and we have to take that into account when we decide
 how we want unittest blocks to be run (e.g. whether they're
 parallelizable or not).
In which case D is wrong to allow them in the unittest blocks and should introduce a new way of handling these tests. And even then all tests can and should be parallelized. If they cannot be then there is an inappropriate dependency.

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Apr 30 2014
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/30/14, 1:09 PM, Russel Winder via Digitalmars-d wrote:
 And even then all tests can
 and should be parallelized. If they cannot be then there is an
 inappropriate dependency.
Agreed. -- Andrei
Apr 30 2014
prev sibling parent Xavier Bigand <flamaros.xavier gmail.com> writes:
Le 30/04/2014 22:09, Russel Winder via Digitalmars-d a écrit :
 On Wed, 2014-04-30 at 11:19 -0700, Jonathan M Davis via Digitalmars-d
 wrote:
 […]
 I disagree with this. A unit test is a test that tests a single piece
 of functionality - generally a function - and there are functions which
 have to access the file system or network. And those tests are done in
These are integration/system tests not unit tests. For unit tests network activity should be mocked out.
And what do you do when your mock is buggy? You also risk having the mock up to date when changing the code but not the running application, because before the commit you'll run only your unittests. IMO every test that can be automated and run quickly should be run before each commit, even if some are integration tests.
 unittest blocks just like any other unit test. I would very much
 consider std.file's tests to be unit tests. But even if you don't
 want to call them unit tests, because they access the file system, the
 reality of the matter is that tests like them are going to be run in
 unittest blocks, and we have to take that into account when we decide
 how we want unittest blocks to be run (e.g. whether they're
 parallelizable or not).
In which case D is wrong to allow them in the unittest blocks and should introduce a new way of handling these tests. And even then all tests can and should be parallelized. If they cannot be then there is an inappropriate dependency.
Apr 30 2014
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/30/14, 10:58 AM, Atila Neves wrote:
 We could always make running in threads opt-in.
Yah, great idea. -- Andrei
Apr 30 2014
prev sibling next sibling parent reply Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 30 Apr 2014 21:09:14 +0100
Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On Wed, 2014-04-30 at 11:19 -0700, Jonathan M Davis via Digitalmars-d
 wrote:
 unittest blocks just like any other unit test. I would very much
 consider std.file's tests to be unit tests. But even if you don't
 want to call them unit tests, because they access the file system,
 the reality of the matter is that tests like them are going to be
 run in unittest blocks, and we have to take that into account when
 we decide how we want unittest blocks to be run (e.g. whether
 they're parallelizable or not).
In which case D is wrong to allow them in the unittest blocks and should introduce a new way of handling these tests. And even then all tests can and should be parallelized. If they cannot be then there is an inappropriate dependency.
Why? Because Andrei suddenly proposed that we parallelize unittest blocks? If I want to test a function, I'm going to put a unittest block after it to test it. If that means accessing I/O, then it means accessing I/O. If that means messing with mutable, global variables, then that means messing with mutable, global variables. Why should I have to put the tests elsewhere or make it so that they don't run when the -unittest flag is used just because they don't fall under your definition of "unit" test?

There is nothing in the language which has ever mandated that unittest blocks be parallelizable or that they be pure (which is essentially what you're saying all unittest blocks should be). And restricting unittest blocks so that they have to be pure (be it conceptually pure or actually pure) would be a _loss_ of functionality.

Sure, let's make it possible to parallelize unittest blocks where appropriate, but I contest that we should start requiring that unittest blocks be pure (which is what a function has to be in order to be parallelized, whether it's actually marked as pure or not). That would force us to come up with some other testing mechanism to run those tests when there is no need to do so (and I would argue that there is no compelling reason to do so other than ideology with regards to what is truly a "unit" test).

On the whole, I think that unittest blocks work very well as they are. If we want to expand on their features, then great, but let's do so without adding new restrictions to them.

- Jonathan M Davis
Apr 30 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 30 April 2014 at 21:49:06 UTC, Jonathan M Davis via 
Digitalmars-d wrote:
 On Wed, 30 Apr 2014 21:09:14 +0100
 Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> 
 wrote:

 On Wed, 2014-04-30 at 11:19 -0700, Jonathan M Davis via 
 Digitalmars-d
 wrote:
 unittest blocks just like any other unit test. I would very 
 much
 consider std.file's tests to be unit tests. But even if you 
 don't
 want to call them unit tests, because they access the file 
 system,
 the reality of the matter is that tests like them are going 
 to be
 run in unittest blocks, and we have to take that into 
 account when
 we decide how we want unittest blocks to be run (e.g. whether
 they're parallelizable or not).
In which case D is wrong to allow them in the unittest blocks and should introduce a new way of handling these tests. And even then all tests can and should be parallelized. If they cannot be then there is an inappropriate dependency.
Why? Because Andrei suddenly proposed that we parallelize unittest blocks? If I want to test a function, I'm going to put a unittest block after it to test it. If that means accessing I/O, then it means accessing I/O. If that means messing with mutable, global variables, then that means messing with mutable, global variables. Why should I have to put the tests elsewhere or make it so that they don't run when the -unittest flag is used just because they don't fall under your definition of "unit" test?
You do this because unit tests must be fast. You do this because unit tests must be naively parallel. You do this because unit tests verify basic application / library sanity and are expected to be run quickly after every build in a deterministic way (contrary to a full test suite, which can take hours). Also, you do it because writing _reliably_ correct tests with I/O is relatively complicated, and one does not want to pollute the actual source modules with all the environment checks.

In the end it is all about supporting a quick edit-compile-test development cycle.
May 01 2014
parent Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 01/05/2014 08:18, Dicebot wrote:
 On Wednesday, 30 April 2014 at 21:49:06 UTC, Jonathan M Davis via
 Digitalmars-d wrote:
 On Wed, 30 Apr 2014 21:09:14 +0100
 Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On Wed, 2014-04-30 at 11:19 -0700, Jonathan M Davis via Digitalmars-d
 wrote:
 unittest blocks just like any other unit test. I would very much
 consider std.file's tests to be unit tests. But even if you don't
 want to call them unit tests, because they access the file system,
 the reality of the matter is that tests like them are going to be
 run in unittest blocks, and we have to take that into account when
 we decide how we want unittest blocks to be run (e.g. whether
 they're parallelizable or not).
In which case D is wrong to allow them in the unittest blocks and should introduce a new way of handling these tests. And even then all tests can and should be parallelized. If they cannot be then there is an inappropriate dependency.
Why? Because Andrei suddenly proposed that we parallelize unittest blocks? If I want to test a function, I'm going to put a unittest block after it to test it. If that means accessing I/O, then it means accessing I/O. If that means messing with mutable, global variables, then that means messing with mutable, global variables. Why should I have to put the tests elsewhere or make it so that they don't run when the -unittest flag is used just because they don't fall under your definition of "unit" test?
You do this because unit tests must be fast. You do this because unit tests must be naively parallel. You do this because unit tests verify basic application / library sanity and expected to be quickly run after every build in deterministic way (contrary to full test suite which can take hours).
See http://forum.dlang.org/post/lkb0jm$vp8$1 digitalmars.com. (Basically, do we want to support only Unit tests, or Integration tests also?)

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros
May 06 2014
prev sibling next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Apr 30, 2014 at 02:48:38PM -0700, Jonathan M Davis via Digitalmars-d
wrote:
 On Wed, 30 Apr 2014 21:09:14 +0100
 Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> wrote:
[...]
 In which case D is wrong to allow them in the unittest blocks and
 should introduce a new way of handling these tests. And even then
 all tests can and should be parallelized. If they cannot be then
 there is an inappropriate dependency.
Why? Because Andrei suddenly proposed that we parallelize unittest blocks? If I want to test a function, I'm going to put a unittest block after it to test it. If that means accessing I/O, then it means accessing I/O. If that means messing with mutable, global variables, then that means messing with mutable, global variables. Why should I have to put the tests elsewhere or make it so that they don't run when the -unittest flag is used just because they don't fall under your definition of "unit" test?
[...]

What about allowing pure marking on unittests, so that unittests marked pure will be parallelized and those that aren't marked will be run serially?

T

-- 
Amateurs built the Ark; professionals built the Titanic.
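A sketch of how a runner could act on that using existing reflection; this is illustrative only, not how druntime actually dispatches tests:

import std.traits : functionAttributes, FunctionAttribute;

pure unittest { assert(2 + 2 == 4); }  // eligible for the thread pool
unittest { /* touches the file system; must stay serial */ }

void runTests(alias mod)()
{
    foreach (test; __traits(getUnitTests, mod))
    {
        static if (functionAttributes!test & FunctionAttribute.pure_)
            test(); // pure: could be handed to std.parallelism.taskPool
        else
            test(); // impure: run serially, in declaration order
    }
}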
Apr 30 2014
parent "Nordlöw" <per.nordlow gmail.com> writes:
 What about allowing pure marking on unittests, and those 
 unittests that
 are marked pure will be parallelized, and those that aren't 
 marked will
 be run serially?
I guess that goes for inferred purity as well...
Apr 30 2014
prev sibling next sibling parent Xavier Bigand <flamaros.xavier gmail.com> writes:
Le 30/04/2014 19:58, Atila Neves a écrit :
 On Wednesday, 30 April 2014 at 17:50:34 UTC, Jonathan M Davis via
 Digitalmars-d wrote:
 On Wed, 30 Apr 2014 08:59:42 -0700
 Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com>
 wrote:

 On 4/30/14, 8:54 AM, bearophile wrote:
 Andrei Alexandrescu:

 A coworker mentioned the idea that unittests could be run in
 parallel
In D we have strong purity to make more safe to run code in parallel: pure unittest {}
This doesn't follow. All unittests should be executable concurrently. -- Andrei
In general, I agree. In reality, there are times when having state across unit tests makes sense - especially when there's expensive setup required for the tests. While it's not something that I generally like to do, I know that we have instances of that where I work. Also, if the unit tests have to deal with shared resources, they may very well be theoretically independent but would run afoul of each other if run at the same time - a prime example of this would be std.file, which has to operate on the file system. I fully expect that if std.file's unit tests were run in parallel, they would break. Unit tests involving sockets would be another type of test which would be at high risk of breaking, depending on what sockets they need. Honestly, the idea of running unit tests in parallel makes me very nervous. In general, across modules, I'd expect it to work, but there will be occasional cases where it will break. Across the unittest blocks in a single module, I'd be _very_ worried about breakage. There is nothing whatsoever in the language which guarantees that running them in parallel will work or even makes sense. All that protects us is the convention that unit tests are usually independent of each other, and in my experience, it's common enough that they're not independent that I think that blindly enabling parallelization of unit tests across a single module is definitely a bad idea. - Jonathan M Davis
You're right; blindly enabling parallelisation after the fact is likely to cause problems. Unit tests though, by definition (and I'm aware there are more than one) have to be independent. Have to not touch the filesystem, or the network. Only CPU and RAM. In my case, and since I had the luxury of implementing a framework first and only writing tests after it was done, running them in parallel was an extra check that they are in fact independent.
Why does a test have to not touch the filesystem? That's really restrictive; you just can't get good code coverage on a lot of libraries with such a restriction. I worked on a Source Control Management software, and all its tests had to deal with a DB, which requires file system and network operations.

IMO it's pretty much impossible to avoid testing the relations between functions; simple integration tests are often needed to ensure that the application is working correctly.

If D integrates features to support automated testing, maybe it must not be too restrictive, especially since everybody will expect the features commonly used elsewhere (like named tests, formatted result output,...). Some of those common features have to be added to phobos instead of the language.
 Now, it does happen that you're testing code that isn't thread-safe
 itself, and yes, in that case you have to run them in a single thread.
 That's why I added the  SingleThreaded UDA to my library to enable that.
 As soon as I tried calling legacy C code...

 We could always make running in threads opt-in.

 Atila
Apr 30 2014
prev sibling parent Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 30 Apr 2014 15:33:17 -0700
"H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> wrote:

 On Wed, Apr 30, 2014 at 02:48:38PM -0700, Jonathan M Davis via
 Digitalmars-d wrote:
 On Wed, 30 Apr 2014 21:09:14 +0100
 Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> wrote:
[...]
 In which case D is wrong to allow them in the unittest blocks and
 should introduce a new way of handling these tests. And even then
 all tests can and should be parallelized. If they cannot be then
 there is an inappropriate dependency.
Why? Because Andrei suddenly proposed that we parallelize unittest blocks? If I want to test a function, I'm going to put a unittest block after it to test it. If that means accessing I/O, then it means accessing I/O. If that means messing with mutable, global variables, then that means messing with mutable, global variables. Why should I have to put the tests elsewhere or make it so that they don't run when the -unittest flag is used just because they don't fall under your definition of "unit" test?
[...] What about allowing pure marking on unittests, and those unittests that are marked pure will be parallelized, and those that aren't marked will be run serially?
I think that that would work, and if we added purity inference to unittest blocks as Nordlöw suggests, then you wouldn't even have to mark them as pure unless you wanted to enforce that they be runnable in parallel.

- Jonathan M Davis
Apr 30 2014
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/30/14, 10:50 AM, Jonathan M Davis via Digitalmars-d wrote:
 There
 is nothing whatsoever in the language which guarantees that running
 them in parallel will work or even makes sense.
Default thread-local globals? -- Andrei
Apr 30 2014
next sibling parent reply Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 30 Apr 2014 13:26:40 -0700
Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com>
wrote:

 On 4/30/14, 10:50 AM, Jonathan M Davis via Digitalmars-d wrote:
 There
 is nothing whatsoever in the language which guarantees that running
 them in parallel will work or even makes sense.
Default thread-local globals? -- Andrei
Sure, that helps, but it's trivial to write a unittest block which depends on a previous unittest block, and as soon as a unittest block uses an external resource such as a socket or file, then even if a unittest block doesn't directly depend on the end state of a previous unittest block, it still depends on external state which could be affected by other unittest blocks. So, ultimately, the language really doesn't ensure that running a unittest block can be parallelized. If it's pure as bearophile suggested, then it can be done, but as long as a unittest block is impure, then it can rely on global state - even inadvertently - (be it state directly in the program or state outside the program) and therefore not work when parallelized. So, I suppose that you could parallelize unittest blocks if they were marked as pure (though I'm not sure if that's currently a legal thing to do), but impure unittest blocks aren't guaranteed to be parallelizable.

I'm all for making it possible to parallelize unittest block execution, but as it stands, doing so automatically would be a bad idea. We could make it so that a unittest block could be marked as parallelizable, or we could even move towards making parallelizable the default and require that a unittest block be marked as unparallelizable, but we'd have to be very careful with that, as it will break code if we're not careful about how we do that transition.

I'm inclined to think that marking unittest blocks as pure to parallelize them is a good idea, because then the unittest blocks that are guaranteed to be parallelizable are run in parallel, whereas those that aren't wouldn't be. The primary downside would be the cases where the programmer knew that the tests could be parallelized but they weren't pure, since those unittest blocks wouldn't be parallelized.

- Jonathan M Davis
Apr 30 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/30/14, 2:25 PM, Jonathan M Davis via Digitalmars-d wrote:
 Sure, that helps, but it's trivial to write a unittest block which
 depends on a previous unittest block, and as soon as a unittest block
 uses an external resource such as a socket or file, then even if a
 unittest block doesn't directly depend on the end state of a
 previous unittest block, it still depends on external state which could
 be affected by other unittest blocks. So, ultimately, the language
 really doesn't ensure that running a unittest block can be
 parallelized. If it's pure as bearophile suggested, then it can be
 done, but as long as a unittest block is impure, then it can rely on
 global state - even inadvertently - (be it state directly in the program
 or state outside the program) and therefore not work when parallelized.
 So, I suppose that you could parallelize unittest blocks if they were
 marked as pure (though I'm not sure if that's currently a legal thing
 to do), but impure unittest blocks aren't guaranteed to be
 parallelizable.
Agreed. I think we should look into parallelizing all unittests. -- Andrei
Apr 30 2014
next sibling parent reply Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 30 Apr 2014 14:35:45 -0700
Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com>
wrote:
 Agreed. I think we should look into parallelizing all unittests. --
I'm all for parallelizing all unittest blocks that are pure, as doing so would be safe, but I think that we're making a big mistake if we try and insist that all unittest blocks be able to be run in parallel. Any that aren't pure are not guaranteed to be parallelizable, and any which access system resources or other global, mutable state stand a good chance of breaking.

If we make it so that the functions generated from unittest blocks have their purity inferred, then any unittest block which can safely be parallelized could be parallelized by the test runner based on its purity, and any impure unittest functions could then be safely run serially. And if you want to make sure that a unittest block is parallelizable, then you can just explicitly mark it as pure.

With that approach, we don't risk breaking existing unit tests, and it allows tests that need to not be run in parallel to work properly by guaranteeing that they're still run serially. And it even makes it so that many tests are automatically parallelizable without the programmer having to do anything special for it.

- Jonathan M Davis
Apr 30 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/30/14, 10:01 PM, Jonathan M Davis via Digitalmars-d wrote:
 I'm all for parallelizing all unittest blocks that are pure, as doing
 so would be safe, but I think that we're making a big mistake if we try
 and insist that all unittest blocks be able to be run in parallel. Any
 that aren't pure are not guaranteed to be parallelizable, and any which
 access system resources or other global, mutable state stand a good
 chance of breaking.
There are a number of assumptions here: (a) most unittests that can be effectively parallelized can be actually inferred (or declared) as pure; (b) most unittests that cannot be inferred as pure are likely to break; (c) it's a big deal if unittests break. I question all of these assumptions. In particular I consider unittests that depend on one another an effective antipattern that needs to be eradicated.

Andrei
Apr 30 2014
parent reply Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 30 Apr 2014 22:32:33 -0700
Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com>
wrote:

 On 4/30/14, 10:01 PM, Jonathan M Davis via Digitalmars-d wrote:
 I'm all for parallelizing all unittest blocks that are pure, as
 doing so would be safe, but I think that we're making a big mistake
 if we try and insist that all unittest blocks be able to be run in
 parallel. Any that aren't pure are not guaranteed to be
 parallelizable, and any which access system resources or other
 global, mutable state stand a good chance of breaking.
There are a number of assumptions here: (a) most unittests that can be effectively parallelized can be actually inferred (or declared) as pure; (b) most unittests that cannot be inferred as pure are likely to break; (c) it's a big deal if unittests break. I question all of these assumptions. In particular I consider unittests that depend on one another an effective antipattern that needs to be eradicated.
Even if they don't depend on each other, they can depend on the system. std.file's unit tests will break if we parallelize them, because it operates on files and directories, and many of those tests operate on the same temp directories. That can be fixed by changing the tests, but it will break the tests.

Other tests _can't_ be fixed if we force them to run in parallel. For instance, some of std.datetime's unit tests set the local time zone of the system in order to test that LocalTime works correctly. That sets it for the whole program, so all threads will be affected even if they're running other tests. Right now, this isn't a problem, because those tests set the timezone at their start and reset it at their end. But if they were made to run in parallel with any other tests involving LocalTime, there's a good chance that those tests would have random test failures. They simply can't be run in parallel due to a system resource that we can't make thread-local. So, regardless of how we want to mark up unittest blocks as parallelizable or not parallelizable (be it explicit, implicit, using pure, or using something else), we do need a way to make it so that a unittest block is not run in parallel with any other unittest block.

We can guarantee that pure functions can safely be run in parallel. We _cannot_ guarantee that impure functions can safely be run in parallel. I'm sure that many impure unittest functions could be safely run in parallel, but it would require that the programmer verify that if we don't want undefined behavior - just like programmers have to verify that @system code is actually @safe. Simply running all unittest blocks in parallel is akin to considering @system code @safe in a particular piece of code simply because by convention that code should be @safe.

pure allows us to detect guaranteed, safe parallelizability. If we want to define some other way to make it so a unittest block can be marked as parallelizable regardless of purity, then fine. But automatically parallelizing impure functions means that we're going to have undefined behavior for those unittest functions, and I really think that that is a bad idea - in addition to the fact that some unittest blocks legitimately cannot be run in parallel due to the use of system resources, so parallelizing them _will_ not only break them but make them impossible to write in a way that's not broken without adding mutexes to the unittest blocks to stop the test runner from running them in parallel. And IMHO, if we end up having to do that anywhere, we've done something very wrong with how unit tests work.

- Jonathan M Davis
Apr 30 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/30/14, 11:31 PM, Jonathan M Davis via Digitalmars-d wrote:
 On Wed, 30 Apr 2014 22:32:33 -0700
 Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com>
 wrote:
 There are a number of assumptions here: (a) most unittests that can
 be effectively parallelized can be actually inferred (or declared) as
 pure; (b) most unittests that cannot be inferred as pure are likely
 to break; (c) it's a big deal if unittests break. I question all of
 these assumptions. In particular I consider unittests that depend on
 one another an effective antipattern that needs to be eradicated.
Even if they don't depend on each other, they can depend on the system.
Understood, no need to repeat, thanks.
 std.file's unit tests will break if we parallelize them, because it
 operates on files and directories, and many of those tests operate on
 the same temp directories.
Yah, I remember even times when make unittest -j broke unittests because I've used the file name "deleteme" in multiple places. We need to fix those.
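One plausible fix, sketched with Phobos as it exists today: derive a unique scratch name per test from the process id plus a random suffix, instead of a shared "deleteme".

import std.conv : to;
import std.file : remove, tempDir, write;
import std.path : buildPath;
import std.process : thisProcessID;
import std.random : uniform;

unittest
{
    // the pid guards against concurrent processes (make -j);
    // the random suffix guards against concurrent threads in one process
    auto name = buildPath(tempDir,
        "deleteme." ~ thisProcessID.to!string ~ "." ~ uniform!uint.to!string);
    write(name, "data");
    scope(exit) remove(name);
    // ... exercise file-handling code on `name` ...
}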
 That can be fixed by changing the tests, but
 it will break the tests.
I'm not too worried about breaking tests. I have in mind that we'll display a banner at the beginning of unittesting explaining that tests are run in parallel and that to force serial execution they'd need to set this thing or that. In a way I don't see it as "breakage" in the traditional sense. Unittests are in a way supposed to break :o).
 Other tests _can't_ be fixed if we force them
 to run in parallel. For instance, some of std.datetime's unit tests set
 the local time zone of the system in order to test that LocalTime works
 correctly.
Sure. We could specify that tests are to be run serially within one specific module, or to use classic interlocking in the unittest code. I see it as a problem relatively easy to address.
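The "classic interlocking" option could be as simple as a module-level lock around the blocks that share a process-global resource; a sketch:

import core.sync.mutex : Mutex;

__gshared Mutex tzLock; // guards the process-wide local time zone

shared static this() { tzLock = new Mutex; }

unittest
{
    tzLock.lock();
    scope(exit) tzLock.unlock();
    // set the local time zone, test LocalTime, restore the zone
}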
 We can guarantee that pure functions can safely be run in parallel. We
 _cannot_ guarantee that impure functions can safely be run in parallel.
 I'm sure that many impure unittest functions could be safely run in
 parallel, but it would require that the programmer verify that if we
 don't want undefined behavior - just like programmers have to verify
 that @system code is actually @safe. Simply running all unittest blocks
 in parallel is akin to considering @system code @safe in a particular
 piece of code simply because by convention that code should be @safe.
I don't think undefined behavior is at stake here, and I find the simile invalid. Thread isolation is a done deal in D and we may as well take advantage of it. Worst that could happen is that a unittest sets a global and surprisingly the next one doesn't "see" it.

At any rate I think it's pointless to insist on limiting parallel running to pure - let me just say I understood the point (thanks) so there is no need to restate it, and that I think it doesn't take us to a good place.

Andrei
Apr 30 2014
parent Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 30 Apr 2014 23:56:53 -0700
Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com>
wrote:
 I don't think undefined behavior is at stake here, and I find the
 simile invalid. Thread isolation is a done deal in D and we may as
 well take advantage of it. Worst that could happen is that a unittest
 sets a global and surprisingly the next one doesn't "see" it.
 
 At any rate I think it's pointless to insist on limiting parallel 
 running to pure - let me just say I understood the point (thanks) so 
 there is no need to restate it, and that I think it doesn't take us to a 
 good place.
I'm only arguing for using pure on the grounds that it _guarantees_ that the unittest block is safely parallelizable. If we decide that that guarantee isn't necessary, then we decide that it isn't necessary, though I definitely worry that not having that guarantee will be problematic. I do agree though that D's thread-local by default helps quite a bit in ensuring that most tests will be runnable in parallel.

However, if we went with purity to indicate parallelizability, I could easily see doing it implicitly based on purity and allowing for a UDA or somesuch which marked a unittest block as "@trusted pure" so that it could be run in parallel. So, I don't think that going with pure would necessarily be too restrictive. It just would require that the programmer do some extra work to be able to treat a unittest block as safely parallelizable when the compiler couldn't guarantee that it was.

Ultimately, my biggest concern here is that it be possible to guarantee that a unittest block is not run in parallel with any other unittest block if that particular unittest requires it for any reason. Some folks seem to be arguing that such tests are always invalid, and I want to make sure that we don't ever consider that to be the case for unittest blocks in D. If we do parallel by default and allow for some kind of markup to make a unittest block serial, then that can work. I fully expect that switching to parallel by default would break a number of tests, which I do think is a problem (particularly since a number of those tests will be completely valid), but it could also be an acceptable one - especially if, for the most part, the code that it breaks is badly written code. Regardless, we will need to make sure that we message the change clearly in order to ensure that a minimal number of people end up with random test failures due to the change.

On a side note, regardless of whether we want to use purity to infer parallelizability, I think that it's very cool that we have the capability to do so if we so choose, whereas most other languages have no way of even coming close to being able to tell whether a function can be safely parallelized or not. The combination of attributes such as pure and compile-time inference is very cool indeed.

- Jonathan M Davis
May 01 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-04-30 23:35, Andrei Alexandrescu wrote:

 Agreed. I think we should look into parallelizing all unittests. -- Andrei
I recommend running the tests in random order as well. -- /Jacob Carlborg
May 01 2014
next sibling parent reply "w0rp" <devw0rp gmail.com> writes:
On Thursday, 1 May 2014 at 11:05:55 UTC, Jacob Carlborg wrote:
 On 2014-04-30 23:35, Andrei Alexandrescu wrote:

 Agreed. I think we should look into parallelizing all 
 unittests. -- Andrei
I recommend running the tests in random order as well.
This is a bad idea. Tests could fail only some of the time. Even if bugs are missed, I would prefer it if tests did exactly the same thing every time.
May 01 2014
next sibling parent reply Byron <byron.heads gmail.com> writes:
On Thu, 01 May 2014 11:44:11 +0000, w0rp wrote:

 On Thursday, 1 May 2014 at 11:05:55 UTC, Jacob Carlborg wrote:
 On 2014-04-30 23:35, Andrei Alexandrescu wrote:

 Agreed. I think we should look into parallelizing all unittests. --
 Andrei
I recommend running the tests in random order as well.
This is a bad idea. Tests could fail only some of the time. Even if bugs are missed, I would prefer it if tests did exactly the same thing every time.
Running tests in random order helps find hidden dependencies, but I wouldn't want it as a default. A lot of unittesting libraries offer this. If you don't run tests often it doesn't help much, but if you do TDD it can help.
May 01 2014
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 01 May 2014 09:26:39 -0400, Byron <byron.heads gmail.com> wrote:

 On Thu, 01 May 2014 11:44:11 +0000, w0rp wrote:

 On Thursday, 1 May 2014 at 11:05:55 UTC, Jacob Carlborg wrote:
 On 2014-04-30 23:35, Andrei Alexandrescu wrote:

 Agreed. I think we should look into parallelizing all unittests. --
 Andrei
I recommend running the tests in random order as well.
This is a bad idea. Tests could fail only some of the time. Even if bugs are missed, I would prefer it if tests did exactly the same thing every time.
Running tests in random order helps find hidden dependencies, but I wouldn't want it as a default. I lot of unittesting libraries offer this. If you don't run tests often it doesn't help much, but if you do TDD it can help.
Note the order of unit tests is defined by druntime. It can easily be modified.

-Steve
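Concretely, druntime exposes a hook for replacing the default runner; a minimal custom runner might look like this sketch (error reporting omitted):

import core.runtime : Runtime;

shared static this()
{
    Runtime.moduleUnitTester = function bool()
    {
        foreach (m; ModuleInfo) // iterate modules in whatever order you choose
        {
            if (m is null) continue;
            auto fp = m.unitTest; // null if the module has no unittests
            if (fp !is null)
                fp();
        }
        return true; // true: proceed to run main() afterwards
    };
}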
May 01 2014
prev sibling next sibling parent reply "Atila Neves" <atila.neves gmail.com> writes:
On Thursday, 1 May 2014 at 11:44:12 UTC, w0rp wrote:
 On Thursday, 1 May 2014 at 11:05:55 UTC, Jacob Carlborg wrote:
 On 2014-04-30 23:35, Andrei Alexandrescu wrote:

 Agreed. I think we should look into parallelizing all 
 unittests. -- Andrei
I recommend running the tests in random order as well.
This is a bad idea. Tests could fail only some of the time. Even if bugs are missed, I would prefer it if tests did exactly the same thing every time.
They _should_ do exactly the same thing every time. Which is why running in threads or at random is a great way to enforce that. Atila
May 01 2014
next sibling parent reply Xavier Bigand <flamaros.xavier gmail.com> writes:
Le 01/05/2014 16:01, Atila Neves a écrit :
 On Thursday, 1 May 2014 at 11:44:12 UTC, w0rp wrote:
 On Thursday, 1 May 2014 at 11:05:55 UTC, Jacob Carlborg wrote:
 On 2014-04-30 23:35, Andrei Alexandrescu wrote:

 Agreed. I think we should look into parallelizing all unittests. --
 Andrei
I recommend running the tests in random order as well.
This is a bad idea. Tests could fail only some of the time. Even if bugs are missed, I would prefer it if tests did exactly the same thing every time.
They _should_ do exactly the same thing every time. Which is why running in threads or at random is a great way to enforce that. Atila
+1
May 01 2014
parent reply "w0rp" <devw0rp gmail.com> writes:
On Thursday, 1 May 2014 at 17:04:53 UTC, Xavier Bigand wrote:
 Le 01/05/2014 16:01, Atila Neves a écrit :
 On Thursday, 1 May 2014 at 11:44:12 UTC, w0rp wrote:
 On Thursday, 1 May 2014 at 11:05:55 UTC, Jacob Carlborg wrote:
 On 2014-04-30 23:35, Andrei Alexandrescu wrote:

 Agreed. I think we should look into parallelizing all 
 unittests. --
 Andrei
I recommend running the tests in random order as well.
This is a bad idea. Tests could fail only some of the time. Even if bugs are missed, I would prefer it if tests did exactly the same thing every time.
They _should_ do exactly the same thing every time. Which is why running in threads or at random is a great way to enforce that. Atila
+1
Tests shouldn't be run in a random order all of the time - perhaps once in a while, manually. Having continuous integration randomly report build failures is crap. Either you should always see a build failure, or you shouldn't see it. You can only test things which are deterministic, at least as far as what you observe. Running tests in a random order should be something you do manually, only when you have some ability to figure out why the tests just failed.
May 01 2014
parent "Atila Neves" <atila.neves gmail.com> writes:
On Thursday, 1 May 2014 at 18:38:15 UTC, w0rp wrote:
 On Thursday, 1 May 2014 at 17:04:53 UTC, Xavier Bigand wrote:
 Le 01/05/2014 16:01, Atila Neves a écrit :
 On Thursday, 1 May 2014 at 11:44:12 UTC, w0rp wrote:
 On Thursday, 1 May 2014 at 11:05:55 UTC, Jacob Carlborg 
 wrote:
 On 2014-04-30 23:35, Andrei Alexandrescu wrote:

 Agreed. I think we should look into parallelizing all 
 unittests. --
 Andrei
I recommend running the tests in random order as well.
This is a bad idea. Tests could fail only some of the time. Even if bugs are missed, I would prefer it if tests did exactly the same thing every time.
They _should_ do exactly the same thing every time. Which is why running in threads or at random is a great way to enforce that. Atila
+1
Tests shouldn't be run in a random order all of the time, perhaps once in a while, manually. Having continuous integration randomly report build failures is crap. Either you should always see a build failure, or you shouldn't see it. You can only test things which are deterministic, at least as far as what you observe. Running tests in a random order should be something you do manually, only when you have some ability to figure out why the tests just failed.
In my experience when a test fails randomly because of ordering, a while loop on the shell running until failure is enough to reproduce it in a few seconds. But as others have mentioned, being able to use a seed to reproduce it exactly is superior. Atila
May 02 2014
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 01 May 2014 10:01:19 -0400, Atila Neves <atila.neves gmail.com>  
wrote:

 On Thursday, 1 May 2014 at 11:44:12 UTC, w0rp wrote:
 On Thursday, 1 May 2014 at 11:05:55 UTC, Jacob Carlborg wrote:
 On 2014-04-30 23:35, Andrei Alexandrescu wrote:

 Agreed. I think we should look into parallelizing all unittests. --  
 Andrei
I recommend running the tests in random order as well.
This is a bad idea. Tests could fail only some of the time. Even if bugs are missed, I would prefer it if tests did exactly the same thing every time.
They _should_ do exactly the same thing every time. Which is why running in threads or at random is a great way to enforce that.
But not a great way to debug it.

If your test failure depends on ordering, then the next run will be random too.

Proposal: a runtime parameter for pre-main consumption:

./myprog --rndunit[=seed]

To run unit tests randomly. It prints out, as its first order of business, the seed value before starting. That way, you can repeat the exact same ordering for debugging.

-Steve
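As a rough illustration, a minimal sketch of what such a runner could do, assuming the tests have already been collected as an array of delegates (the name runShuffled and the collection step are hypothetical, not part of druntime):

import std.random : Random, randomShuffle, unpredictableSeed;
import std.stdio : writefln;

void runShuffled(void delegate()[] tests, uint seed = unpredictableSeed)
{
    // print the seed first, so a failing order can be replayed exactly
    writefln("unittest order seed: %s", seed);
    auto rng = Random(seed);
    randomShuffle(tests, rng);
    foreach (test; tests)
        test();
}

Re-running with runShuffled(tests, savedSeed) then reproduces the failing order exactly.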
May 01 2014
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2014-05-01 19:12, Steven Schveighoffer wrote:

 But not a great way to debug it.

 If your test failure depends on ordering, then the next run will be
 random too.

 Proposal runtime parameter for pre-main consumption:

 ./myprog --rndunit[=seed]

 To run unit tests randomly. Prints out as first order of business the
 seed value before starting. That way, you can repeat the exact same
 ordering for debugging.
That's exactly what RSpec does. I think it works great. -- /Jacob Carlborg
May 01 2014
prev sibling parent Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 01/05/2014 18:12, Steven Schveighoffer wrote:
 On Thu, 01 May 2014 10:01:19 -0400, Atila Neves <atila.neves gmail.com>
 wrote:

 On Thursday, 1 May 2014 at 11:44:12 UTC, w0rp wrote:
 On Thursday, 1 May 2014 at 11:05:55 UTC, Jacob Carlborg wrote:
 On 2014-04-30 23:35, Andrei Alexandrescu wrote:

 Agreed. I think we should look into parallelizing all unittests. --
 Andrei
I recommend running the tests in random order as well.
This is a bad idea. Tests could fail only some of the time. Even if bugs are missed, I would prefer it if tests did exactly the same thing every time.
They _should_ do exactly the same thing every time. Which is why running in threads or at random is a great way to enforce that.
But not a great way to debug it. If your test failure depends on ordering, then the next run will be random too.
I agree with Steven here.

Actually, even if the failure does *not* depend on ordering, it can still be useful to run the tests in order when debugging: if there is a bug in a low-level component, that will likely trigger a failure in the tests for that low-level component, but also in the tests for higher-level components (the components that use the low-level component). As such, when debugging, you would want to run the low-level test first, since it will likely be easier to debug the issue there than with the higher-level test.

Sure, one could say that the solution to this should be mocking the low-level component in the high-level test, but mocking is not always desirable or practical. I can provide some concrete examples. -- Bruno Medeiros https://twitter.com/brunodomedeiros
May 06 2014
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/1/14, 4:44 AM, w0rp wrote:
 On Thursday, 1 May 2014 at 11:05:55 UTC, Jacob Carlborg wrote:
 On 2014-04-30 23:35, Andrei Alexandrescu wrote:

 Agreed. I think we should look into parallelizing all unittests. --
 Andrei
I recommend running the tests in random order as well.
This is a bad idea. Tests could fail only some of the time. Even if bugs are missed, I would prefer it if tests did exactly the same thing every time.
I do random testing all the time, and I print the seed of the prng upon startup. When something fails randomly, I take the seed and seed the prng with it to reproduce. -- Andrei
May 01 2014
prev sibling parent Xavier Bigand <flamaros.xavier gmail.com> writes:
Le 01/05/2014 13:44, w0rp a écrit :
 On Thursday, 1 May 2014 at 11:05:55 UTC, Jacob Carlborg wrote:
 On 2014-04-30 23:35, Andrei Alexandrescu wrote:

 Agreed. I think we should look into parallelizing all unittests. --
 Andrei
I recommend running the tests in random order as well.
This is a bad idea. Tests could fail only some of the time. Even if bugs are missed, I would prefer it if tests did exactly the same thing every time.
I am in favor of randomized order, because it can help to find real bugs.
May 01 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/1/14, 4:05 AM, Jacob Carlborg wrote:
 On 2014-04-30 23:35, Andrei Alexandrescu wrote:

 Agreed. I think we should look into parallelizing all unittests. --
 Andrei
I recommend running the tests in random order as well.
Great idea! -- Andrei
May 01 2014
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 01 May 2014 11:04:31 -0400, Andrei Alexandrescu
<SeeWebsiteForEmail erdani.org> wrote:

 On 5/1/14, 4:05 AM, Jacob Carlborg wrote:
 On 2014-04-30 23:35, Andrei Alexandrescu wrote:

 Agreed. I think we should look into parallelizing all unittests. --
 Andrei
I recommend running the tests in random order as well.
Great idea! -- Andrei
I think we can configure this at runtime.

Imagine you have multiple failing unit tests. You see the first failure. You find the issue, try to fix the problem, or instrument it, and now a DIFFERENT test fails. Now you focus on that one, and yet a different one fails. This is just going to equal frustration.

If you want to run them randomly, we can do that. If you want to run them in order, that also should be possible. In fact, while debugging, you need to run them in order, and serially.

-Steve
May 01 2014
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2014-04-30 23:25, Jonathan M Davis via Digitalmars-d wrote:

 Sure, that helps, but it's trivial to write a unittest block which
 depends on a previous unittest block
Therefore the tests should be run in random order. -- /Jacob Carlborg
May 01 2014
prev sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Apr 30, 2014 at 02:25:22PM -0700, Jonathan M Davis via Digitalmars-d
wrote:
[...]
 Sure, that helps, but it's trivial to write a unittest block which
 depends on a previous unittest block, and as soon as a unittest block
 uses an external resource such as a socket or file, then even if a
 unittest block doesn't directly depend on the end state of a previous
 unittest block, it still depends on external state which could be
 affected by other unittest blocks.
In this case I'd argue that the test was poorly-written. I can see multiple unittests using, say, the same temp filename for testing file I/O, in which case they shouldn't be parallelized; but if a unittest depends on a file created by a previous unittest, then something is very, very wrong with the unittest. [...]
 I'm inclined to think that marking unittest blocks as pure to
 parallelize them is a good idea, because then the unittest blocks that
 are guaranteed to be parallelizable are run in parallel, whereas those
 that aren't wouldn't be.
Agreed.
The primary downside would be the cases where the programmer knew
that they could be parallelized but they weren't pure, since those
unittest blocks wouldn't be parallelized.
[...] Is it a big loss to have *some* unittests non-parallelizable? (I don't know, do we have hard data on this front?) T -- The two rules of success: 1. Don't tell everything you know. -- YHL
Apr 30 2014
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 30 Apr 2014 13:50:10 -0400, Jonathan M Davis via Digitalmars-d  
<digitalmars-d puremagic.com> wrote:

 On Wed, 30 Apr 2014 08:59:42 -0700
 Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com>
 wrote:

 On 4/30/14, 8:54 AM, bearophile wrote:
 Andrei Alexandrescu:

 A coworker mentioned the idea that unittests could be run in
 parallel
In D we have strong purity to make more safe to run code in parallel: pure unittest {}
This doesn't follow. All unittests should be executable concurrently. -- Andrei
In general, I agree. In reality, there are times when having state across unit tests makes sense - especially when there's expensive setup required for the tests.
int a;

unittest
{
   // set up a;
}

unittest
{
   // use a;
}

==>

unittest
{
   int a;
   {
      // set up a;
   }
   {
      // use a;
   }
}

It makes no sense to do it the first way; you are not gaining anything.
 Honestly, the idea of running unit tests in parallel makes me very
 nervous. In general, across modules, I'd expect it to work, but there
 will be occasional cases where it will break.
Then you didn't write your unit tests correctly. True unit tests, anyway. In fact, the very quality that makes unit tests so valuable (that they are independent of other code) is ruined by sharing state across tests. If you are going to share state, it really is one unit test.
 Across the unittest
 blocks in a single module, I'd be _very_ worried about breakage. There
 is nothing whatsoever in the language which guarantees that running
 them in parallel will work or even makes sense. All that protects us is
 the convention that unit tests are usually independent of each other,
 and in my experience, it's common enough that they're not independent
 that I think that blindly enabling parallelization of unit tests across
 a single module is definitely a bad idea.
I think that if we add the assumption, the resulting fallout would be easy to fix. Note that we can't require unit tests to be pure -- non-pure functions need testing too :)

I can imagine that even if you could only parallelize 90% of unit tests, that would be an effective optimization for a large project. In such a case, the rare (and I mean rare to the point that I can't think of a single use case) need to deny parallelization could be marked.

-Steve
Apr 30 2014
parent reply Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 30 Apr 2014 20:33:06 -0400
Steven Schveighoffer via Digitalmars-d <digitalmars-d puremagic.com>
wrote:

 On Wed, 30 Apr 2014 13:50:10 -0400, Jonathan M Davis via
 Digitalmars-d <digitalmars-d puremagic.com> wrote:
 
 On Wed, 30 Apr 2014 08:59:42 -0700
 Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com>
 wrote:

 On 4/30/14, 8:54 AM, bearophile wrote:
 Andrei Alexandrescu:

 A coworker mentioned the idea that unittests could be run in
 parallel
In D we have strong purity to make more safe to run code in parallel: pure unittest {}
This doesn't follow. All unittests should be executable concurrently. -- Andrei
In general, I agree. In reality, there are times when having state across unit tests makes sense - especially when there's expensive setup required for the tests.
int a;

unittest
{
   // set up a;
}

unittest
{
   // use a;
}

==>

unittest
{
   int a;
   {
      // set up a;
   }
   {
      // use a;
   }
}

It makes no sense to do it the first way; you are not gaining anything.
It can make sense to do it the first way when it's more like

LargeDocumentOrDatabase foo;

unittest
{
   // set up foo;
}

unittest
{
   // test something using foo
}

unittest
{
   // do other tests using foo which then take advantage of changes made
   // by the previous test rather than doing all of those changes to
   // foo in order to set up this test
}

In general, I agree that tests shouldn't be done that way, and I don't think that I've ever done it personally, but I've seen it done, and for stuff that requires a fair bit of initialization, it can save time to have each test build on the state of the last. But even if we all agree that that sort of testing is a horrible idea, the language supports it right now, and automatically parallelizing unit tests will break any code that does that.
 Honestly, the idea of running unit tests in parallel makes me very
 nervous. In general, across modules, I'd expect it to work, but
 there will be occasional cases where it will break.
Then you didn't write your unit tests correctly. True unit tests, anyway. In fact, the very quality that makes unit tests so valuable (that they are independent of other code) is ruined by sharing state across tests. If you are going to share state, it really is one unit test.
All it takes is for tests in two separate modules with separate functionality to access the file system or sockets or some other system resource, and they could end up breaking because the other test is messing with the same resource. I'd expect that to be a relatively rare case, but it _can_ happen, so simply parallelizing tests across modules does risk test failures that would not have occurred otherwise.
 Across the unittest
 blocks in a single module, I'd be _very_ worried about breakage.
 There is nothing whatsoever in the language which guarantees that
 running them in parallel will work or even makes sense. All that
 protects us is the convention that unit tests are usually
 independent of each other, and in my experience, it's common enough
 that they're not independent that I think that blindly enabling
 parallelization of unit tests across a single module is definitely
 a bad idea.
I think that if we add the assumption, the resulting fallout would be easy to fix. Note that we can't require unit tests to be pure -- non-pure functions need testing too :)
Sure, they need testing. Just don't test them in parallel, because they're not guaranteed to work in parallel. That guarantee _does_ hold for pure functions, because they don't access global, mutable state. So, we can safely parallelize a unittest block that is pure, but we _can't_ safely parallelize one that isn't - not in a guaranteed way.
 I can imagine that even if you could only parallelize 90% of unit
 tests, that would be an effective optimization for a large project.
 In such a case, the rare (and I mean rare to the point of I can't
 think of a single use-case) need to deny parallelization could be
 marked.
std.file's unit tests would break immediately. It wouldn't surprise me if std.socket's unit tests broke. std.datetime's unit tests would probably break on Posix systems, because some of them temporarily set the local time zone - which sets it for the whole program, not just the current thread (those tests aren't done on Windows, because Windows only lets you set it for the whole OS, not just the program). Any tests which aren't pure risk breakage due to changes in whatever global, mutable state they're accessing.

I would strongly argue that automatically parallelizing any unittest block which isn't pure is a bad idea, because it's not guaranteed to work, and it _will_ result in bugs in at least some cases. If we make it so that unittest blocks have their purity inferred (and allow you to mark them as pure to enforce that they be pure if you want to require it), then any unittest blocks which can safely be parallelized will be known, and the test runner could then parallelize those unittest functions and _not_ parallelize the ones that it can't guarantee are going to work in parallel.

So, then we get safe unittest parallelization without having to insist that folks write their unit tests in a particular way or that they do or don't do particular things in a unit test. And maybe we can add some sort of UDA to tell the test runner that an impure test can be safely parallelized, but automatically parallelizing impure unittest functions would be akin to automatically treating @system functions as @safe just because we thought that only @safe code should be used in this particular context.

- Jonathan M Davis
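To make the distinction concrete, a minimal sketch of the two cases under discussion; only the first block could be auto-parallelized under purity-based inference:

pure unittest
{
    // touches no global mutable state, so a runner could
    // safely schedule it on any thread
    assert(1 + 1 == 2);
}

int counter; // module-level mutable state

unittest
{
    // impure: reads and writes shared module state,
    // so it is not guaranteed to work when run in parallel
    ++counter;
    assert(counter > 0);
}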
Apr 30 2014
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 1 May 2014 at 04:50:30 UTC, Jonathan M Davis via 
Digitalmars-d wrote:
 std.file's unit tests would break immediately. It wouldn't 
 surprise me
 if std.socket's unit tests broke. std.datetime's unit tests 
 would
 probably break on Posix systems, because some of them 
 temporarily set
 the local time zone - which sets it for the whole program, not 
 just the
 current thread (those tests aren't done on Windows, because 
 Windows only
 lets you set it for the whole OS, not just the program). Any 
 tests which
 aren't pure risk breakage due to changes in whatever global, 
 mutable
 state they're accessing.
We really should think about separating Phobos tests into unit tests and higher level ones (in separate top-level source folder). The fact that importing std.file in my code with `rdmd -unittest` may trigger file I/O makes me _extremely_ disgusted. How did we even get here? >_<
May 01 2014
parent reply Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 01 May 2014 07:26:59 +0000
Dicebot via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On Thursday, 1 May 2014 at 04:50:30 UTC, Jonathan M Davis via 
 Digitalmars-d wrote:
 std.file's unit tests would break immediately. It wouldn't 
 surprise me
 if std.socket's unit tests broke. std.datetime's unit tests 
 would
 probably break on Posix systems, because some of them 
 temporarily set
 the local time zone - which sets it for the whole program, not 
 just the
 current thread (those tests aren't done on Windows, because 
 Windows only
 lets you set it for the whole OS, not just the program). Any 
 tests which
 aren't pure risk breakage due to changes in whatever global, 
 mutable
 state they're accessing.
We really should think about separating Phobos tests into unit tests and higher level ones (in separate top-level source folder). The fact that importing std.file in my code with `rdmd -unittest` may trigger file I/O makes me _extremely_ disgusted. How did we even get here? >_<
Honestly, I see no problem with std.file's unit tests triggering I/O. That's what the module _does_. And it specifically uses the system's temp directory so that it doesn't screw with anything else on the system. Separating the tests out into some other set of tests wouldn't buy us anything IMHO. The tests need to be run regardless, and they need to be run with the same frequency regardless. Splitting those tests out would just make them harder for developers to run, because now they'd have to worry about running two sets of tests instead of just one. As far as I can see, splitting out tests that do I/O would be purely for ideological reasons and would be of no practical benefit. In fact, it would be _less_ practical if we were to do so. - Jonathan M Davis
May 01 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 1 May 2014 at 07:47:27 UTC, Jonathan M Davis via 
Digitalmars-d wrote:
 Honestly, I see no problem with std.file's unit tests 
 triggering I/O.
 That's what the module _does_. And it specifically uses the 
 system's
 temp directory so that it doesn't screw with anything else on 
 the
 system. Separating the tests out into some other set of tests 
 wouldn't
 buy us anything IMHO. The tests need to be run regardless, and 
 they
 need to be run with the same frequency regardless. Splitting 
 those
 tests out would just make them harder for developers to run, 
 because
 now they'd have to worry about running two sets of tests 
 instead of
 just one. As far as I can see, splitting out tests that do I/O 
 would be
 purely for ideological reasons and would be of no practical 
 benefit. In
 fact, it would be _less_ practical if we were to do so.

 - Jonathan M Davis
I have just recently gone through some of our internal projects removing all accidental I/O tests, for the very reason that /tmp was full and those tests were randomly failing when testing my own program that used the library. This _sucks_.

You can't do any test with I/O without verifying the environment (free space, concurrent access from other processes, file system access, etc.). And once you do it properly, such a beast no longer fits into the same module because of sheer size.

There is a very practical reason to separate tests: to be sure that you can always run an -unittest build to verify the basic sanity of your program, and that it will never spuriously fail or take an eternity to complete.
May 01 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/1/14, 1:34 AM, Dicebot wrote:
I have just recently gone through some of our internal projects removing all accidental I/O tests, for the very reason that /tmp was full
Well a bunch of stuff will not work on a full /tmp. Sorry, hard to elicit empathy with a full /tmp :o). -- Andrei
May 01 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 1 May 2014 at 14:55:50 UTC, Andrei Alexandrescu 
wrote:
 On 5/1/14, 1:34 AM, Dicebot wrote:
I have just recently gone through some of our internal projects removing all accidental I/O tests, for the very reason that /tmp was full
Well a bunch of stuff will not work on a full /tmp. Sorry, hard to elicit empathy with a full /tmp :o). -- Andrei
So you are OK with your unit tests failing randomly with no clear diagnostics? Including on automated test servers? Really? A bunch of stuff can go wrong. Higher-level tests verify their expectations of the environment (if written carefully), which is impractical to do in unit tests. Also, such reliance on the environment makes running in parallel impossible without explicit resource dependency tracking for any kind of bigger test suite.
May 01 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/1/14, 8:04 AM, Dicebot wrote:
 On Thursday, 1 May 2014 at 14:55:50 UTC, Andrei Alexandrescu wrote:
 On 5/1/14, 1:34 AM, Dicebot wrote:
I have just recently gone through some of our internal projects removing all accidental I/O tests, for the very reason that /tmp was full
Well a bunch of stuff will not work on a full /tmp. Sorry, hard to elicit empathy with a full /tmp :o). -- Andrei
So you are OK with your unit tests failing randomly with no clear diagnostics?
I'm OK with my unit tests failing on a machine with a full /tmp. The machine needs fixing. -- Andrei
May 01 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 1 May 2014 at 15:37:21 UTC, Andrei Alexandrescu 
wrote:
 On 5/1/14, 8:04 AM, Dicebot wrote:
 On Thursday, 1 May 2014 at 14:55:50 UTC, Andrei Alexandrescu 
 wrote:
 On 5/1/14, 1:34 AM, Dicebot wrote:
I have just recently gone through some of our internal projects removing all accidental I/O tests, for the very reason that /tmp was full
Well a bunch of stuff will not work on a full /tmp. Sorry, hard to elicit empathy with a full /tmp :o). -- Andrei
So you are OK with your unit tests failing randomly with no clear diagnostics?
I'm OK with my unit tests failing on a machine with a full /tmp. The machine needs fixing. -- Andrei
It got full because of tests (surprise!). Your actions?
May 01 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/1/14, 9:07 AM, Dicebot wrote:
 On Thursday, 1 May 2014 at 15:37:21 UTC, Andrei Alexandrescu wrote:
 On 5/1/14, 8:04 AM, Dicebot wrote:
 On Thursday, 1 May 2014 at 14:55:50 UTC, Andrei Alexandrescu wrote:
 On 5/1/14, 1:34 AM, Dicebot wrote:
I have just recently gone through some of our internal projects removing all accidental I/O tests, for the very reason that /tmp was full
Well a bunch of stuff will not work on a full /tmp. Sorry, hard to elicit empathy with a full /tmp :o). -- Andrei
So you are OK with your unit tests failing randomly with no clear diagnostics?
I'm OK with my unit tests failing on a machine with a full /tmp. The machine needs fixing. -- Andrei
It got full because of tests (surprise!). Your actions?
Fix the machine and reduce the output created by the unittests. It's a simple engineering problem. -- Andrei
May 01 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 1 May 2014 at 16:08:23 UTC, Andrei Alexandrescu 
wrote:
 It got full because of tests (surprise!). Your actions?
Fix the machine and reduce the output created by the unittests. It's a simple engineering problem. -- Andrei
You can't. You have no control over that machine; you don't even know for sure that the test failed because of a full /tmp/ - all you've got is a bug report that can't be reproduced on your machine. It is already not that simple, and it can get damn complicated once you get to something like network I/O.
May 05 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/5/14, 8:11 AM, Dicebot wrote:
 On Thursday, 1 May 2014 at 16:08:23 UTC, Andrei Alexandrescu wrote:
 It got full because of tests (surprise!). Your actions?
Fix the machine and reduce the output created by the unittests. It's a simple engineering problem. -- Andrei
You can't. You have no control over that machine; you don't even know for sure that the test failed because of a full /tmp/ - all you've got is a bug report that can't be reproduced on your machine. It is already not that simple, and it can get damn complicated once you get to something like network I/O.
I know; incidentally, the hhvm team had the same problem two weeks ago. They fixed it (without removing file I/O from unittests). It's fixable. That's it.

This segment started with your claim that unittests should do no file I/O because they may fail with a full /tmp/. I disagree with that, and with framing the full /tmp/ problem as a problem with the unittests doing file I/O.

Andrei
May 05 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Monday, 5 May 2014 at 15:36:19 UTC, Andrei Alexandrescu wrote:
 On 5/5/14, 8:11 AM, Dicebot wrote:
 On Thursday, 1 May 2014 at 16:08:23 UTC, Andrei Alexandrescu 
 wrote:
 It got full because of tests (surprise!). Your actions?
Fix the machine and reduce the output created by the unittests. It's a simple engineering problem. -- Andrei
You can't. You have no control over that machine; you don't even know for sure that the test failed because of a full /tmp/ - all you've got is a bug report that can't be reproduced on your machine. It is already not that simple, and it can get damn complicated once you get to something like network I/O.
I know; incidentally, the hhvm team had the same problem two weeks ago. They fixed it (without removing file I/O from unittests). It's fixable. That's it.
It is possible to write a unit test which provides graceful failure reporting for such issues, but once you get there, it becomes hard to see the actual tests behind the boilerplate of environment verification, and the actual application code behind the tests. Any tests that rely on I/O need some sort of commonly repeated initialize-verify-test-finalize pattern, one that is simply impractical to do with unit tests.
 This segment started with your claim that unittests should do 
 no file I/O because they may fail with a full /tmp/. I disagree 
 with that, and with framing the full /tmp/ problem as a problem 
 with the unittests doing file I/O.
It was just the simplest example. "Unittests should do no I/O because any sort of I/O can fail for reasons you don't control from the test suite" is an appropriate generalization of my statement.

A full /tmp is not a problem; there is nothing broken about a system with a full /tmp. The problem is test reporting that is unable to connect a failure with /tmp being full unless you do environment verification.
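For what it's worth, a sketch of the initialize-verify-test-finalize pattern mentioned above, wedged into a single unittest block (the file name is arbitrary; the point is how little of the block is the actual test):

unittest
{
    import std.file : exists, remove, tempDir, write, readText;
    import std.path : buildPath;

    // verify: fail loudly if the environment is unusable
    auto path = buildPath(tempDir, "io_fixture.txt");
    assert(!path.exists, "stale fixture left over from a previous run");

    // initialize
    write(path, "payload");

    // finalize: runs even if the assertion below fails
    scope(exit) remove(path);

    // the actual test
    assert(readText(path) == "payload");
}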
May 05 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/5/14, 8:55 AM, Dicebot wrote:
 It was just a most simple example. "Unittests should do no I/O because
 any sort of I/O can fail because of reasons you don't control from the
 test suite" is an appropriate generalization of my statement.

 Full /tmp is not a problem, there is nothing broken about system with
 full /tmp. Problem is test reporting that is unable to connect failure
 with /tmp being full unless you do environment verification.
Different strokes for different folks. -- Andrei
May 05 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Monday, 5 May 2014 at 16:33:42 UTC, Andrei Alexandrescu wrote:
 On 5/5/14, 8:55 AM, Dicebot wrote:
 It was just a most simple example. "Unittests should do no I/O 
 because
 any sort of I/O can fail because of reasons you don't control 
 from the
 test suite" is an appropriate generalization of my statement.

 Full /tmp is not a problem, there is nothing broken about 
 system with
 full /tmp. Problem is test reporting that is unable to connect 
 failure
 with /tmp being full unless you do environment verification.
Different strokes for different folks. -- Andrei
There is nothing subjective about it. It is a very well-defined practical goal: getting either reproducible or informative reports for test failures from machines you don't have routine access to, while still keeping the test sources maintainable (OK, this part is subjective).

It is a relatively simple engineering problem, but you discard the widely adopted solution for it (strict control of test requirements) without proposing any real alternative. "I will yell at someone when it breaks" is not really a solution.
May 05 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/5/14, 10:08 AM, Dicebot wrote:
 On Monday, 5 May 2014 at 16:33:42 UTC, Andrei Alexandrescu wrote:
 On 5/5/14, 8:55 AM, Dicebot wrote:
 It was just a most simple example. "Unittests should do no I/O because
 any sort of I/O can fail because of reasons you don't control from the
 test suite" is an appropriate generalization of my statement.

 Full /tmp is not a problem, there is nothing broken about system with
 full /tmp. Problem is test reporting that is unable to connect failure
 with /tmp being full unless you do environment verification.
Different strokes for different folks. -- Andrei
There is nothing subjective about it.
Of course there is. -- Andrei
May 05 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Monday, 5 May 2014 at 18:24:43 UTC, Andrei Alexandrescu wrote:
 On 5/5/14, 10:08 AM, Dicebot wrote:
 On Monday, 5 May 2014 at 16:33:42 UTC, Andrei Alexandrescu 
 wrote:
 On 5/5/14, 8:55 AM, Dicebot wrote:
 It was just a most simple example. "Unittests should do no 
 I/O because
 any sort of I/O can fail because of reasons you don't 
 control from the
 test suite" is an appropriate generalization of my statement.

 Full /tmp is not a problem, there is nothing broken about 
 system with
 full /tmp. Problem is test reporting that is unable to 
 connect failure
 with /tmp being full unless you do environment verification.
Different strokes for different folks. -- Andrei
There is nothing subjective about it.
Of course there is. -- Andrei
You are not helping your point look reasonable.
May 05 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/5/14, 11:25 AM, Dicebot wrote:
 On Monday, 5 May 2014 at 18:24:43 UTC, Andrei Alexandrescu wrote:
 On 5/5/14, 10:08 AM, Dicebot wrote:
 On Monday, 5 May 2014 at 16:33:42 UTC, Andrei Alexandrescu wrote:
 On 5/5/14, 8:55 AM, Dicebot wrote:
 It was just a most simple example. "Unittests should do no I/O because
 any sort of I/O can fail because of reasons you don't control from the
 test suite" is an appropriate generalization of my statement.

 Full /tmp is not a problem, there is nothing broken about system with
 full /tmp. Problem is test reporting that is unable to connect failure
 with /tmp being full unless you do environment verification.
Different strokes for different folks. -- Andrei
There is nothing subjective about it.
Of course there is. -- Andrei
You are not helping your point to look reasonable.
My understanding here is you're trying to make dogma out of engineering choices that may vary widely across projects and organizations. No thanks. Andrei
May 05 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Monday, 5 May 2014 at 18:29:40 UTC, Andrei Alexandrescu wrote:
 My understanding here is you're trying to make dogma out of 
 engineering choices that may vary widely across projects and 
 organizations. No thanks.

 Andrei
I am asking you to either suggest an alternative solution or to clarify why you don't consider it an important problem. A dogmatic approach that solves the issue is still better than ignoring it completely.

Right now I am afraid you will push for quick changes that will reduce the elegant simplicity of D's unittest system without providing a sound replacement that actually fits the more ambitious use cases (as the whole "parallel" thing implies).
May 05 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/5/14, 11:47 AM, Dicebot wrote:
 On Monday, 5 May 2014 at 18:29:40 UTC, Andrei Alexandrescu wrote:
 My understanding here is you're trying to make dogma out of
 engineering choices that may vary widely across projects and
 organizations. No thanks.

 Andrei
I am asking you to either suggest an alternative solution or to clarify why you don't consider it an important problem.
"Clean /tmp/ judiciously."
 Dogmatic approach that
 solves the issue is still better than ignoring it completely.
The problem with your stance, i.e.:
 "Unittests should do no I/O because any sort of I/O can fail because
 of reasons you don't control from the test suite" is an appropriate
 generalization of my statement.
is that it immediately generalizes into the unreasonable: "Unittests should do no $X because any sort of $X can fail because of reasons you don't control from the test suite". So that gets into machines not having any memory available, with full disks etc. Just make sure test machines are prepared for running unittests to the extent unittests are expecting them to. We're wasting time trying to frame this as a problem purely related to unittests alone.
 Right now I am afraid you will push for quick changes that will reduce
 elegant simplicity of D unittest system without providing a sound
 replacement that will actually fit into more ambitious use cases (as
 whole "parallel" thing implies).
If I had my way I'd make parallel the default and single-threaded opt-in, thus penalizing unittests that had issues to start with. But I understand the merits of not breaking backwards compatibility so probably we should start with opt-in parallel unittesting. Andrei
May 05 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Monday, 5 May 2014 at 18:58:37 UTC, Andrei Alexandrescu wrote:
 On 5/5/14, 11:47 AM, Dicebot wrote:
 On Monday, 5 May 2014 at 18:29:40 UTC, Andrei Alexandrescu 
 wrote:
 My understanding here is you're trying to make dogma out of
 engineering choices that may vary widely across projects and
 organizations. No thanks.

 Andrei
I am asking you to either suggest an alternative solution or to clarify why you don't consider it an important problem.
"Clean /tmp/ judiciously."
This is solution for "failing test" problem. Problem I speak about is "figuring out why test has failed".
 The problem with your stance, i.e.:

 "Unittests should do no I/O because any sort of I/O can fail 
 because
 of reasons you don't control from the test suite" is an 
 appropriate
 generalization of my statement.
is that it immediately generalizes into the unreasonable: "Unittests should do no $X because any sort of $X can fail because of reasons you don't control from the test suite". So that gets into machines not having any memory available, with full disks etc.
It is great that you have mentioned RAM here, as it nicely draws the border line. Being out of memory throws a specific Error, which is unlikely to be caught and which clearly identifies the problem. A disk I/O failure throws an Exception, which can easily be consumed somewhere inside the tested control flow, resulting in absolutely mysterious test failures. It is the borderline of Error vs Exception: a fatal problem incompatible with further execution versus a routine problem the application is expected to handle.
 Just make sure test machines are prepared for running unittests 
 to the extent unittests are expecting them to. We're wasting 
 time trying to frame this as a problem purely related to 
 unittests alone.
Again: you don't have control of the test machines for something like a language's standard library. It is not purely a unittest problem; it is a problem that is hard to solve while staying within the infrastructure of unittests.
May 06 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/6/14, 10:43 AM, Dicebot wrote:
 Disk I/O failure throws Exception which can be easily consumed somewhere
 inside tested control flow resulting in absolutely mysterious test failures.
If you're pointing out full /tmp/ should be nicely diagnosed by the unittest, I agree. -- Andrei
May 06 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Tuesday, 6 May 2014 at 18:13:01 UTC, Andrei Alexandrescu wrote:
 On 5/6/14, 10:43 AM, Dicebot wrote:
 Disk I/O failure throws Exception which can be easily consumed 
 somewhere
 inside tested control flow resulting in absolutely mysterious 
 test failures.
If you're pointing out full /tmp/ should be nicely diagnosed by the unittest, I agree. -- Andrei
Good, we have some common ground :)

It inevitably raises the next question though: how can a unittest diagnose it? Catching the exception is not always possible, as it can be consumed inside the tested function, resulting in different observable behavior when /tmp/ is full. I can't imagine anything better than verifying that /tmp/ is not full before running a bunch of tests. Will you agree with this one too?
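As a sketch of that pre-flight verification (POSIX-only; the helper name and the 16 MB threshold are arbitrary choices, not an established convention):

version (Posix)
{
    import core.sys.posix.sys.statvfs : statvfs, statvfs_t;

    // true if /tmp has at least `needed` bytes available
    bool tmpHasFreeSpace(ulong needed = 16 * 1024 * 1024)
    {
        statvfs_t st;
        if (statvfs("/tmp", &st) != 0)
            return false; // cannot even stat it: treat as unusable
        return cast(ulong) st.f_bavail * st.f_frsize >= needed;
    }

    unittest
    {
        assert(tmpHasFreeSpace(),
            "test environment unusable: /tmp is (nearly) full");
    }
}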
May 06 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/6/14, 11:27 AM, Dicebot wrote:
 On Tuesday, 6 May 2014 at 18:13:01 UTC, Andrei Alexandrescu wrote:
 On 5/6/14, 10:43 AM, Dicebot wrote:
 Disk I/O failure throws Exception which can be easily consumed somewhere
 inside tested control flow resulting in absolutely mysterious test
 failures.
If you're pointing out full /tmp/ should be nicely diagnosed by the unittest, I agree. -- Andrei
Good, we have some common ground :) It inevitably raises the next question though: how can a unittest diagnose it?
Fail with diagnostic. -- Andrei
May 06 2014
next sibling parent "Dicebot" <public dicebot.lv> writes:
On Tuesday, 6 May 2014 at 20:41:01 UTC, Andrei Alexandrescu wrote:
 Fail with diagnostic. -- Andrei
...and do that for every single test case which is affected. Which requires either a clear test execution order (including cross-module test dependencies) or shared boilerplate (which becomes messier the more of the environment needs to be tested). Something that is not nicely supported by the built-in construct.
May 07 2014
prev sibling parent "Dicebot" <public dicebot.lv> writes:
On Tuesday, 6 May 2014 at 20:41:01 UTC, Andrei Alexandrescu wrote:
 On 5/6/14, 11:27 AM, Dicebot wrote:
 On Tuesday, 6 May 2014 at 18:13:01 UTC, Andrei Alexandrescu 
 wrote:
 On 5/6/14, 10:43 AM, Dicebot wrote:
 Disk I/O failure throws Exception which can be easily 
 consumed somewhere
 inside tested control flow resulting in absolutely 
 mysterious test
 failures.
If you're pointing out full /tmp/ should be nicely diagnosed by the unittest, I agree. -- Andrei
Good, we have some common ground :) It inevitably raises the next question though: how can a unittest diagnose it?
Fail with diagnostic. -- Andrei
This is a funnily ambiguous statement :)
May 08 2014
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 01 May 2014 12:07:19 -0400, Dicebot <public dicebot.lv> wrote:

 On Thursday, 1 May 2014 at 15:37:21 UTC, Andrei Alexandrescu wrote:
 On 5/1/14, 8:04 AM, Dicebot wrote:
 On Thursday, 1 May 2014 at 14:55:50 UTC, Andrei Alexandrescu wrote:
 On 5/1/14, 1:34 AM, Dicebot wrote:
I have just recently gone through some of our internal projects removing all accidental I/O tests, for the very reason that /tmp was full
Well a bunch of stuff will not work on a full /tmp. Sorry, hard to elicit empathy with a full /tmp :o). -- Andrei
So you are OK with your unit tests failing randomly with no clear diagnostics?
I'm OK with my unit tests failing on a machine with a full /tmp. The machine needs fixing. -- Andrei
It got full because of tests (surprise!). Your actions?
It would be nice to have a uniform mechanism to get a unique system-dependent file location for each specific unit test. The file should automatically delete itself at the end of the test. -Steve
May 01 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/1/14, 10:09 AM, Steven Schveighoffer wrote:
 On Thu, 01 May 2014 12:07:19 -0400, Dicebot <public dicebot.lv> wrote:

 On Thursday, 1 May 2014 at 15:37:21 UTC, Andrei Alexandrescu wrote:
 On 5/1/14, 8:04 AM, Dicebot wrote:
 On Thursday, 1 May 2014 at 14:55:50 UTC, Andrei Alexandrescu wrote:
 On 5/1/14, 1:34 AM, Dicebot wrote:
I have just recently gone through some of our internal projects removing all accidental I/O tests, for the very reason that /tmp was full
Well a bunch of stuff will not work on a full /tmp. Sorry, hard to elicit empathy with a full /tmp :o). -- Andrei
So you are OK with your unit tests failing randomly with no clear diagnostics?
I'm OK with my unit tests failing on a machine with a full /tmp. The machine needs fixing. -- Andrei
It got full because of tests (surprise!). Your actions?
It would be nice to have a uniform mechanism to get a unique system-dependent file location for each specific unit test. The file should automatically delete itself at the end of the test.
Looks like /tmp (%TEMP% or C:\TEMP in Windows) in conjunction with the likes of mkstemp is what you're looking for :o). Andrei
May 01 2014
next sibling parent reply "Brad Anderson" <eco gnuk.net> writes:
On Thursday, 1 May 2014 at 17:24:58 UTC, Andrei Alexandrescu 
wrote:
 On 5/1/14, 10:09 AM, Steven Schveighoffer wrote:
 On Thu, 01 May 2014 12:07:19 -0400, Dicebot 
 <public dicebot.lv> wrote:

 On Thursday, 1 May 2014 at 15:37:21 UTC, Andrei Alexandrescu 
 wrote:
 On 5/1/14, 8:04 AM, Dicebot wrote:
 On Thursday, 1 May 2014 at 14:55:50 UTC, Andrei 
 Alexandrescu wrote:
 On 5/1/14, 1:34 AM, Dicebot wrote:
I have just recently gone through some of our internal projects removing all accidental I/O tests, for the very reason that /tmp was full
Well a bunch of stuff will not work on a full /tmp. Sorry, hard to elicit empathy with a full /tmp :o). -- Andrei
So you are OK with your unit tests failing randomly with no clear diagnostics?
I'm OK with my unit tests failing on a machine with a full /tmp. The machine needs fixing. -- Andrei
It got full because of tests (surprise!). Your actions?
It would be nice to have a uniform mechanism to get a unique system-dependent file location for each specific unit test. The file should automatically delete itself at the end of the test.
Looks like /tmp (%TEMP% or C:\TEMP in Windows) in conjunction with the likes of mkstemp is what you're looking for :o). Andrei
It hasn't been C:\TEMP for almost 13 years (since before Windows XP, which is now also end-of-life). Use GetTempPath.

http://msdn.microsoft.com/en-us/library/windows/desktop/aa364992(v=vs.85).aspx
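In D this is already wrapped: std.file.tempDir asks the OS for the temp directory (GetTempPath on Windows; on Posix it checks the usual environment variables and falls back to /tmp). For example:

import std.file : tempDir;
import std.path : buildPath;
import std.stdio : writeln;

void main()
{
    // a scratch location that respects the platform's temp directory
    writeln(buildPath(tempDir, "myapp_scratch.dat"));
}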
May 01 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/1/14, 10:32 AM, Brad Anderson wrote:
 It hasn't been C:\TEMP for almost 13 years
About the time when I switched :o). -- Andrei
May 01 2014
parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 1 May 2014 18:40, "Andrei Alexandrescu via Digitalmars-d" <
digitalmars-d puremagic.com> wrote:
 On 5/1/14, 10:32 AM, Brad Anderson wrote:
 It hasn't been C:\TEMP for almost 13 years
About the time when I switched :o). -- Andrei
Amen to that! (Me too)
May 05 2014
prev sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 01 May 2014 13:25:00 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 5/1/14, 10:09 AM, Steven Schveighoffer wrote:
 On Thu, 01 May 2014 12:07:19 -0400, Dicebot <public dicebot.lv> wrote:

 On Thursday, 1 May 2014 at 15:37:21 UTC, Andrei Alexandrescu wrote:
 On 5/1/14, 8:04 AM, Dicebot wrote:
 On Thursday, 1 May 2014 at 14:55:50 UTC, Andrei Alexandrescu wrote:
 On 5/1/14, 1:34 AM, Dicebot wrote:
I have just recently gone through some of our internal projects removing all accidental I/O tests, for the very reason that /tmp was full
Well a bunch of stuff will not work on a full /tmp. Sorry, hard to elicit empathy with a full /tmp :o). -- Andrei
So you are OK with your unit tests failing randomly with no clear diagnostics?
I'm OK with my unit tests failing on a machine with a full /tmp. The machine needs fixing. -- Andrei
It got full because of tests (surprise!). Your actions?
It would be nice to have a uniform mechanism to get a unique system-dependent file location for each specific unit test. The file should automatically delete itself at the end of the test.
Looks like /tmp (%TEMP% or C:\TEMP in Windows) in conjunction with the likes of mkstemp is what you're looking for :o).
No, I'm looking for unittest_getTempFile(Line = __LINE__, File = __FILE__)(), which handles all the magic of opening a temporary file, allowing me to use it for the unit test, and then closing and deleting it at the end, when the test passes. -Steve
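A sketch of what such a helper could look like; the names here are hypothetical, not an existing druntime API. __FILE__ and __LINE__ as default template arguments give each call site its own file name, and the destructor handles the cleanup:

import std.conv : to;
import std.file : exists, remove, tempDir;
import std.path : baseName, buildPath;

struct TempTestFile
{
    string name;

    ~this()
    {
        // delete the file when the handle goes out of scope
        if (name.length && name.exists)
            remove(name);
    }
}

TempTestFile unittestGetTempFile(size_t line = __LINE__, string file = __FILE__)()
{
    auto name = buildPath(tempDir,
        file.baseName ~ "_" ~ line.to!string ~ ".tmp");
    return TempTestFile(name);
}

unittest
{
    import std.file : write, readText;

    auto f = unittestGetTempFile(); // unique per file and line
    write(f.name, "data");
    assert(readText(f.name) == "data");
} // f's destructor removes the file here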
May 01 2014
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 01 May 2014 00:49:53 -0400, Jonathan M Davis via Digitalmars-d  
<digitalmars-d puremagic.com> wrote:

 On Wed, 30 Apr 2014 20:33:06 -0400
 Steven Schveighoffer via Digitalmars-d <digitalmars-d puremagic.com>
 wrote:

 On Wed, 30 Apr 2014 13:50:10 -0400, Jonathan M Davis via
 Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On Wed, 30 Apr 2014 08:59:42 -0700
 Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com>
 wrote:

 On 4/30/14, 8:54 AM, bearophile wrote:
 Andrei Alexandrescu:

 A coworker mentioned the idea that unittests could be run in
 parallel
In D we have strong purity to make more safe to run code in parallel: pure unittest {}
This doesn't follow. All unittests should be executable concurrently. -- Andrei
In general, I agree. In reality, there are times when having state across unit tests makes sense - especially when there's expensive setup required for the tests.
int a;

unittest
{
   // set up a;
}

unittest
{
   // use a;
}

==>

unittest
{
   int a;
   {
      // set up a;
   }
   {
      // use a;
   }
}

It makes no sense to do it the first way; you are not gaining anything.
It can make sense to do it the first way when it's more like

LargeDocumentOrDatabase foo;

unittest
{
   // set up foo;
}

unittest
{
   // test something using foo
}

unittest
{
   // do other tests using foo which then take advantage of changes made
   // by the previous test rather than doing all of those changes to
   // foo in order to set up this test
}

In general, I agree that tests shouldn't be done that way, and I don't think that I've ever done it personally, but I've seen it done, and for stuff that requires a fair bit of initialization, it can save time to have each test build on the state of the last. But even if we all agree that that sort of testing is a horrible idea, the language supports it right now, and automatically parallelizing unit tests will break any code that does that.
I recommend optimizing using a function, i.e.:

version(unittest)
{
   LargeDocumentOrDatabase foo;

   auto getFoo() { /* check to see if foo is set up, return it */ }
}

I understand what you are saying. I think the largest problem with parallelizing unit tests is that people haven't been careful to make sure that's possible. Now they should, or face the consequences.

The point I was making, however, is that within a module, you can choose whether you want parallel or serial unit tests. If you want parallel, separate them into multiple unittest blocks. If you want serial, put them in one. For the super-rare cases where it needs to be serial, put them in one. It's not hard.
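Filled in, that helper might look like the following sketch, where LargeDocumentOrDatabase stands in for whatever expensive fixture is involved:

version (unittest)
{
    class LargeDocumentOrDatabase
    {
        this() { /* expensive setup happens once, here */ }
    }

    private LargeDocumentOrDatabase foo;

    LargeDocumentOrDatabase getFoo()
    {
        // lazy: the first test to call this pays the setup cost
        if (foo is null)
            foo = new LargeDocumentOrDatabase;
        return foo;
    }
}

unittest
{
    auto doc = getFoo();
    assert(doc !is null);
}

Note that under parallel execution getFoo would itself need synchronization, or the tests sharing it would have to stay serial.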
 Honestly, the idea of running unit tests in parallel makes me very
 nervous. In general, across modules, I'd expect it to work, but
 there will be occasional cases where it will break.
Then you didn't write your unit tests correctly. True unit tests, anyway. In fact, the very quality that makes unit tests so valuable (that they are independent of other code) is ruined by sharing state across tests. If you are going to share state, it really is one unit test.
All it takes is for tests in two separate modules with separate functionality to access the file system or sockets or some other system resource, and they could end up breaking because the other test is messing with the same resource. I'd expect that to be a relatively rare case, but it _can_ happen, so simply parallelizing tests across modules does risk test failures that would not have occurred otherwise.
Right, and with the knowledge that unit tests are being run in parallel, one can trivially change their design to fix the problem. I agree my assumptions were not what you were thinking of. I wasn't thinking of shared system resources. But this isn't too difficult to figure out. I do think there should be a way to mark a unit test as "don't parallelize this".
 Across the unittest
 blocks in a single module, I'd be _very_ worried about breakage.
 There is nothing whatsoever in the language which guarantees that
 running them in parallel will work or even makes sense. All that
 protects us is the convention that unit tests are usually
 independent of each other, and in my experience, it's common enough
 that they're not independent that I think that blindly enabling
 parallelization of unit tests across a single module is definitely
 a bad idea.
I think that if we add the assumption, the resulting fallout would be easy to fix. Note that we can't require unit tests to be pure -- non-pure functions need testing too :)
Sure, they need testing. Just don't test them in parallel, because they're not guaranteed to work in parallel. That guarantee _does_ hold for pure functions, because they don't access global, mutable state. So, we can safely parallelize a unittest block that is pure, but we _can't_ safely parallelize one that isn't - not in a guaranteed way.
A function may be impure, but run in a pure way.
 I can imagine that even if you could only parallelize 90% of unit
 tests, that would be an effective optimization for a large project.
 In such a case, the rare (and I mean rare to the point of I can't
 think of a single use-case) need to deny parallelization could be
 marked.
std.file's unit tests would break immediately. It wouldn't surprise me if std.socket's unit tests broke. std.datetime's unit tests would probably break on Posix systems, because some of them temporarily set the local time zone - which sets it for the whole program, not just the current thread (those tests aren't done on Windows, because Windows only lets you set it for the whole OS, not just the program). Any tests which aren't pure risk breakage due to changes in whatever global, mutable state they're accessing.
It depends on what the impure function is doing. Anything that's using the same temporary file should not be; that's easy. Anything that's using the same socket port should not be; that's easy. Anything that requires using the local time zone should be done in a single unit test. Most everything in std.datetime should use a defined time zone instead of local time. Unit tests should be as decoupled from system/global state as possible.
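For instance, a test pinned to a defined time zone holds in every locale (this assumes the std.datetime API, where UTC() returns the shared UTC time zone singleton):

unittest
{
    import std.datetime : DateTime, SysTime, UTC;

    auto t = SysTime(DateTime(2014, 5, 1, 12, 0, 0), UTC());
    assert(t.hour == 12); // true everywhere, unlike a LocalTime-based test
}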
 I would strongly argue that automatically parallelizing any unittest
 block which isn't pure is a bad idea, because it's not guaranteed to
 work, and it _will_ result in bugs in at least some cases. If we make it
 so that unittest blocks have their purity inferred (and allow you to
 mark them as pure to enforce that they be pure if you want to require
 it), then any unittest blocks which can safely be parallelized will be
 known, and the test runner could then parallelize those unittest
 functions and then _not_ parallelize the ones that it can't guarantee
 are going to work in parallel.
It would be at least a start, and I agree it would be an easy way to add some level of parallelism without breaking any existing tests. But I fear that there are many things that would be excluded incorrectly. Take, for example, std.datetime. The constructor for SysTime has this line in it:

_timezone = tz is null ? LocalTime() : tz;

All unit tests that pass in a specific tz (such as UTC) could be pure calls. But because of that line, they can't be!
So, then we get safe unittest parallelization without having to insist that folks write their unit tests in a particular way or that they do or don't do particular things in a unit test. And maybe we can add some sort of UDA to tell the test runner that an impure test can be safely parallelized, but automatically parallelizing impure unittest functions would be akin to automatically treating @system functions as @safe just because we thought that only @safe code should be used in this particular context.
As I pointed out to Andrei, there is an enhancement, with a pull request even, that would potentially add the ability to detect UDAs on unit tests, giving us the ability to name them or to mark them however we want, so the runtime can take appropriate action. https://issues.dlang.org/show_bug.cgi?id=10023 -Steve
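With that in place, a runner could inspect the attributes via compile-time reflection. A sketch, assuming UDAs on unittest blocks land: the @serial attribute is a hypothetical marker, and __traits(getUnitTests) requires compiling with -unittest:

enum serial; // hypothetical UDA: "do not run this test in parallel"

@serial unittest
{
    // touches a shared resource, so it must run alone
}

void runModuleTests(alias mod)()
{
    // usage from inside a module: runModuleTests!(mixin(__MODULE__))();
    foreach (test; __traits(getUnitTests, mod))
    {
        bool mustRunAlone = false;
        foreach (attr; __traits(getAttributes, test))
            static if (is(attr == serial))
                mustRunAlone = true;

        // a real runner would queue the test to a thread pool here;
        // tests with mustRunAlone set would instead be run serially
        test();
    }
}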
May 01 2014
parent Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 01 May 2014 10:42:54 -0400
Steven Schveighoffer via Digitalmars-d <digitalmars-d puremagic.com>
wrote:

 On Thu, 01 May 2014 00:49:53 -0400, Jonathan M Davis via
 Digitalmars-d <digitalmars-d puremagic.com> wrote:
 
 On Wed, 30 Apr 2014 20:33:06 -0400
 Steven Schveighoffer via Digitalmars-d <digitalmars-d puremagic.com>
 wrote:
 I do think there should be a way to mark a unit test as "don't
 parallelize this".
Regardless of what our exact solution is, a key thing is that we need to be able to have both tests which are run in parallel and tests which are run serially. Switching to parallel by default will break code, but that may be acceptable. And I'm somewhat concerned about automatically parallelizing unit tests which aren't pure, simply because it's still trivial to write unittest blocks that aren't safely parallelizable (even if most such examples typically aren't good practice), whereas they'd work just fine now. But ultimately, my main concern is that we not enforce that all unit tests be parallelized, because that precludes certain types of tests.
 A function may be impure, but run in a pure way.
True. The idea behind using purity is that it guarantees that the unittest blocks would be safely parallelizable. But even if we were to go with purity, that doesn't preclude having some way to mark a unittest as parallelizable in spite of its lack of purity. It just wouldn't be automatic.
 Anything that requires using the local time zone should be done in a
 single unit test. Most everything in std.datetime should use a
 defined time zone instead of local time.
Because LocalTime is the default timezone, most of the tests use it. In general, I think that that's fine and desirable, because LocalTime is what most everyone is going to be using. Where I think that it actually ends up being a problem (and will eventually necessitate that I rewrite a number of the tests - possibly most of them) is when tests end up making assumptions that can break in certain time zones. So, in the long run, I expect that far fewer tests will use LocalTime than is currently the case, but I don't think that I agree that it should be avoided on quite the level that you seem to. It is on my todo list, though, to go over std.datetime's unit tests and make them stop using LocalTime where that would result in the tests failing in some time zones.
 Take for example, std.datetime. The constructor for SysTime has this
 line in it:
 
 _timezone = tz is null ? LocalTime() : tz;
 
 All unit tests that pass in a specific tz (such as UTC) could be
 pure calls. But because of that line, they can't be!
Pretty much nothing involving SysTime is pure, because adjTime can't be pure, because LocalTime's conversion functions can't be pure, because they call the system's functions to do the conversions. So, very few of SysTime's unit tests could be parallelized based on purity. The constructor is just one of many places where SysTime can't be pure. So, it's an example of the types of tests that would have to be marked as explicitly parallelizable if we used purity as a means of determining automatic parallelizability.

- Jonathan M Davis
May 01 2014
prev sibling parent Xavier Bigand <flamaros.xavier gmail.com> writes:
Le 30/04/2014 19:50, Jonathan M Davis via Digitalmars-d a crit :
 On Wed, 30 Apr 2014 08:59:42 -0700
 Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com>
 wrote:

 On 4/30/14, 8:54 AM, bearophile wrote:
 Andrei Alexandrescu:

 A coworker mentioned the idea that unittests could be run in
 parallel
In D we have strong purity to make more safe to run code in parallel: pure unittest {}
This doesn't follow. All unittests should be executable concurrently. -- Andrei
In general, I agree. In reality, there are times when having state across unit tests makes sense - especially when there's expensive setup required for the tests. While it's not something that I generally like to do, I know that we have instances of that where I work. Also, if the unit tests have to deal with shared resources, they may very well be theoretically independent but would run afoul of each other if run at the same time - a prime example of this would be std.file, which has to operate on the file system. I fully expect that if std.file's unit tests were run in parallel, they would break. Unit tests involving sockets would be another type of test which would be at high risk of breaking, depending on what sockets they need. Honestly, the idea of running unit tests in parallel makes me very nervous. In general, across modules, I'd expect it to work, but there will be occasional cases where it will break. Across the unittest blocks in a single module, I'd be _very_ worried about breakage. There is nothing whatsoever in the language which guarantees that running them in parallel will work or even makes sense. All that protects us is the convention that unit tests are usually independent of each other, and in my experience, it's common enough that they're not independent that I think that blindly enabling parallelization of unit tests across a single module is definitely a bad idea. - Jonathan M Davis
I have had this kind of experience too. pure unittest name {} seems like a good idea, and it's more intuitive for unittests to behave the same way as other functions when their signatures are this similar.
Apr 30 2014
prev sibling next sibling parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Andrei Alexandrescu"  wrote in message 
news:ljr6ld$1mft$2 digitalmars.com...

 This doesn't follow. All unittests should be executable concurrently. -- 
 Andrei
That's like saying all makefiles should work with -j
Apr 30 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/30/14, 11:13 AM, Daniel Murphy wrote:
 "Andrei Alexandrescu"  wrote in message
 news:ljr6ld$1mft$2 digitalmars.com...

 This doesn't follow. All unittests should be executable concurrently.
 -- Andrei
That's like saying all makefiles should work with -j
They should. -- Andrei
Apr 30 2014
prev sibling next sibling parent Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 2014-04-30 at 10:50 -0700, Jonathan M Davis via Digitalmars-d
wrote:
[…]
 In general, I agree. In reality, there are times when having state
 across unit tests makes sense - especially when there's expensive setup
 required for the tests. While it's not something that I generally
 like to do, I know that we have instances of that where I work. Also, if
 the unit tests have to deal with shared resources, they may very well be
 theoretically independent but would run afoul of each other if run at
 the same time - a prime example of this would be std.file, which has to
 operate on the file system. I fully expect that if std.file's unit
 tests were run in parallel, they would break. Unit tests involving
 sockets would be another type of test which would be at high risk of
 breaking, depending on what sockets they need.
Surely if there is expensive setup you are doing an integration or system test, not a unit test. In a unit test all expensive setup should be mocked out.
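A minimal sketch of that approach, with made-up types: the unit under test depends on an interface, so the test substitutes a cheap in-memory fake for the expensive real resource.

interface Store
{
    string get(string key);
}

string greet(Store s)
{
    return "hello, " ~ s.get("name");
}

unittest
{
    // The fake replaces a database/network lookup; no expensive setup.
    class FakeStore : Store
    {
        string get(string key) { return "world"; }
    }

    assert(greet(new FakeStore) == "hello, world");
}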
 Honestly, the idea of running unit tests in parallel makes me very
 nervous. In general, across modules, I'd expect it to work, but there
 will be occasional cases where it will break. Across the unittest
 blocks in a single module, I'd be _very_ worried about breakage. There
 is nothing whatsoever in the language which guarantees that running
 them in parallel will work or even makes sense. All that protects us is
 the convention that unit tests are usually independent of each other,
 and in my experience, it's common enough that they're not independent
 that I think that blindly enabling parallelization of unit tests across
 a single module is definitely a bad idea.
All tests should be independent, therefore there should be no problem executing all tests at the same time and/or in any order. If tests have to be executed in a specific order then they are not separate tests and should be merged into a single test — which likely means they are integration or system tests not unit tests. -- Russel. ============================================================================= Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: russel winder.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Apr 30 2014
prev sibling parent reply Xavier Bigand <flamaros.xavier gmail.com> writes:
On 30/04/2014 17:59, Andrei Alexandrescu wrote:
 On 4/30/14, 8:54 AM, bearophile wrote:
 Andrei Alexandrescu:

 A coworker mentioned the idea that unittests could be run in parallel
In D we have strong purity to make more safe to run code in parallel: pure unittest {}
This doesn't follow. All unittests should be executable concurrently. -- Andrei
But sometimes unittests have to use shared data that needs to be initialized before they run. File system operations are generally a critical point: if many unittests are based on the same auto-generated file data, it's a good idea to run that generation once before all tests (each test can then do a file copy, which is fast on a copy-on-write file system, or the data can be used read-only by all tests). So for those kinds of situations some functions must be able to run before the unittests - isn't that the case for a module's static this() function?
Apr 30 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/30/14, 6:20 PM, Xavier Bigand wrote:
 On 30/04/2014 17:59, Andrei Alexandrescu wrote:
 On 4/30/14, 8:54 AM, bearophile wrote:
 Andrei Alexandrescu:

 A coworker mentioned the idea that unittests could be run in parallel
In D we have strong purity to make more safe to run code in parallel: pure unittest {}
This doesn't follow. All unittests should be executable concurrently. -- Andrei
But sometimes unittests have to use shared data that need to be initialized before them. File system operations are generally a critical point, if many unittest are based on same auto-generated file data it's a good idea to run this generation once before all tests (they eventually do a file copy that is fast with copy-on-write file system or those data can be used as read only by all tests). So for those kind of situation some functions have must be able to run before unittest, and I think that the case of static this() function of modules?
Yah, version(unittest) shared static this() { ... } covers that. -- Andrei
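For illustration, a sketch of that idiom (the file name is made up): the fixture is generated once, before any unittest runs, and only in unittest builds.

version (unittest)
{
    shared static this()
    {
        import std.file : write;

        // One-time setup shared by all tests in this binary.
        write("testdata.tmp", "shared input for all tests");
    }
}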
Apr 30 2014
parent Xavier Bigand <flamaros.xavier gmail.com> writes:
On 01/05/2014 03:54, Andrei Alexandrescu wrote:
 On 4/30/14, 6:20 PM, Xavier Bigand wrote:
 On 30/04/2014 17:59, Andrei Alexandrescu wrote:
 On 4/30/14, 8:54 AM, bearophile wrote:
 Andrei Alexandrescu:

 A coworker mentioned the idea that unittests could be run in parallel
In D we have strong purity to make more safe to run code in parallel: pure unittest {}
This doesn't follow. All unittests should be executable concurrently. -- Andrei
But sometimes unittests have to use shared data that need to be initialized before them. File system operations are generally a critical point, if many unittest are based on same auto-generated file data it's a good idea to run this generation once before all tests (they eventually do a file copy that is fast with copy-on-write file system or those data can be used as read only by all tests). So for those kind of situation some functions have must be able to run before unittest, and I think that the case of static this() function of modules?
Yah, version(unittest) shared static this() { ... } covers that. -- Andrei
Then I am pretty much OK with the parallelization of all unittests. The question of naming remains; I don't really know whether it has to be in the language or in Phobos like other test features (test logger, benchmark, ...).
Apr 30 2014
prev sibling next sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Wednesday, 30 April 2014 at 15:54:42 UTC, bearophile wrote:
 We've resisted named unittests but I think there's enough
 evidence to make the change.
Yes, the optional name for unittests is an improvement: unittest {} unittest foo {} I am very glad your coworker find such usability problems :-)
If we do "name" the unittests, then can we name them with strings? No need to polute namespace with ugly symbols. Also: //---- unittest "Sort: Non-Lvalue RA range" { ... } //---- vs //---- unittest SortNonLvalueRARange { ... } //----
Apr 30 2014
next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
monarch_dodra:

 If we do "name" the unittests, then can we name them with 
 strings? No need to polute namespace with ugly symbols.
Are UDAs enough?

@uname("foo") unittest {}

What I'd like is to tie one or more unittests to other entities, like all the unittests of a specific function.

Bye,
bearophile
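For reference, a sketch of how far UDAs already go here; the name struct below is user-defined, not a language feature:

struct name { string value; }

@name("foo") unittest
{
    assert(1 + 1 == 2);
}

// A custom runner could then recover the annotation along these lines:
//
//   foreach (test; __traits(getUnitTests, some.mod))
//       foreach (uda; __traits(getAttributes, test)) { /* ... */ }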
Apr 30 2014
next sibling parent Andrej Mitrovic via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 4/30/14, bearophile via Digitalmars-d <digitalmars-d puremagic.com> wrote:
 What I'd like is to tie one or more unittests to other entities,
 like all the unittests of a specific function.
This would also lead to a more stable documented unittest feature and the ability to declare documented unittests outside the scope of the target symbol. E.g. if you have a templated aggregate and a function inside it you may want to add a single documented unittest for the function /outside/ the aggregate, otherwise it will get compiled for every unique instance of that aggregate.
Apr 30 2014
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2014-04-30 21:12, bearophile wrote:

 Are UDAs enough?

  uname("foo") unittest {}

 What I'd like is to tie one or more unittests to other entities, like
 all the unittests of a specific function.
Something similar is done in RSpec, a BDD framework for Ruby. In D it might look like this:

@describe("foo")
{
    @it("does something useful")
    unittest
    {
    }

    @it("also does some other stuff")
    unittest
    {
    }
}

-- 
/Jacob Carlborg
May 01 2014
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/30/14, 11:53 AM, monarch_dodra wrote:
 On Wednesday, 30 April 2014 at 15:54:42 UTC, bearophile wrote:
 We've resisted named unittests but I think there's enough
 evidence to make the change.
Yes, the optional name for unittests is an improvement: unittest {} unittest foo {} I am very glad your coworker find such usability problems :-)
If we do "name" the unittests, then can we name them with strings? No need to polute namespace with ugly symbols. Also: //---- unittest "Sort: Non-Lvalue RA range" { ... } //---- vs //---- unittest SortNonLvalueRARange { ... } //----
I'd argue for regular identifiers instead of strings - they can be seen in stack traces, accessed with __FUNCTION__ etc. -- Andrei
Apr 30 2014
parent Johannes Pfau <nospam example.com> writes:
Am Wed, 30 Apr 2014 13:31:30 -0700
schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:

 I'd argue for regular identifiers instead of strings - they can be
 seen in stack traces, accessed with __FUNCTION__ etc. -- Andrei
If we actually want to make unittests work just like functions (__FUNCTION__, identifiers which are visible in stack traces) then we could also simply declare functions and mark them as @unittest:

@unittest void myUnittest()
{
}

This then allows for further improvements in the future: for example, a unit test could then return a result value (skipped, error, ...) or optionally receive parameters (like some kind of state from a unittest framework).
May 01 2014
prev sibling parent Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 30 Apr 2014 18:53:22 +0000
monarch_dodra via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On Wednesday, 30 April 2014 at 15:54:42 UTC, bearophile wrote:
 We've resisted named unittests but I think there's enough
 evidence to make the change.
Yes, the optional name for unittests is an improvement: unittest {} unittest foo {} I am very glad your coworker find such usability problems :-)
If we do "name" the unittests, then can we name them with strings? No need to polute namespace with ugly symbols. Also: //---- unittest "Sort: Non-Lvalue RA range" { ... } //---- vs //---- unittest SortNonLvalueRARange { ... } //----
It would be simple enough to avoid polluting the namespace. IIRC, right now, the unittest blocks get named after the line number that they're on. All we'd have to do is change it so that their name included the name given by the programmer rather than being the name given by the programmer. e.g. unittest(testFoo) { } results in a function called something like unittest_testFoo. - Jonathan M Davis
Apr 30 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/30/2014 8:54 AM, bearophile wrote:
 I'd also like some built-in way (or partially built-in) to use a module only as
 "main module" (to run its demos) or as module to be imported. This problem is
 solved in Python with the "if __name__ == "__main__":" idiom.
dmd foo.d -unittest -main
Apr 30 2014
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 On 4/30/2014 8:54 AM, bearophile wrote:
 I'd also like some built-in way (or partially built-in) to use 
 a module only as
 "main module" (to run its demos) or as module to be imported. 
 This problem is
 solved in Python with the "if __name__ == "__main__":" idiom.
dmd foo.d -unittest -main
I think you are greatly missing the point. The unittests are mainly for the developer, while the demo is for the user of the demo. The unittests validate the code, while the demo shows how to use the module (and surely the demo is not meant to run in parallel with the other unittests). Sometimes the demo part is more than a demo: it contains usable and useful code, with a command-line interface, to allow stand-alone usage of the module functionality.

Currently this is how I do it:

module foobar;

// Here functions and
// classes with their unittests

version (foobar_main) {
    void main() {
        // foobar module demo code here.
    }
}

I can't use just "version(main)", because the module could import other modules with their own demo sections, so I need something to tell them apart from each other. This could be simplified with a solution similar to the Python one:

version (__is_main) {
    void main() {
        // foobar module demo code here.
    }
}

Now if I have modules A and B, where B imports A, and both have the demo main, I can use this to compile A stand-alone with its demo:

dmd A.d

And I can use this to compile the demo section of B but not the demo of A:

dmd A.d B.d

This is just the basic idea, and perhaps people have suggested something better than this.

Bye,
bearophile
May 01 2014
next sibling parent "Dicebot" <public dicebot.lv> writes:
On Thursday, 1 May 2014 at 07:32:43 UTC, bearophile wrote:
 This is just the basic idea, and perhaps people suggested 
 something better than this.

 Bye,
 bearophile
Yeah, I sometimes have a commented-out main() for that purpose. Sounds like a useful generic addition.
May 01 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/1/2014 12:32 AM, bearophile wrote:
 This is just the basic idea, and perhaps people suggested something better than
 this.
You've already got it working with version, that's what version is for. Why add yet another way to do it?
May 01 2014
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 You've already got it working with version, that's what version 
 is for. Why add yet another way to do it?
Because I'd like something better. It's an idiom that I have used many times (around 15-20 times). I'd like the compiler (or build tool) to spare me from specifying twice what the main module is. Also, in the current way of doing it, in those modules I have to specify the module name twice (once at the top and once at the bottom), unless I use some compile-time synthesis of the version identifier from the current module name.

Bye,
bearophile
May 02 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/2/2014 4:02 AM, bearophile wrote:
 Walter Bright:

 You've already got it working with version, that's what version is for. Why
 add yet another way to do it?
Because I'd like something better. It's an idiom that I have used many times (around 15-20 times). I'd like the compiler (or build tool) to avoid me to specify two times what the main module is. Also, the current way to do it, in those modules I have to specify the module name two times (once at the top and once at the bottom, unless I use some compile-time syntheses of the version identifier from the current module name).
D has so many language features, we need a higher bar for adding new ones, especially ones that can be done straightforwardly with existing features.
May 04 2014
next sibling parent "Messenger" <dont shoot.me> writes:
On Monday, 5 May 2014 at 00:40:41 UTC, Walter Bright wrote:
 D has so many language features, we need a higher bar for 
 adding new ones, especially ones that can be done 
 straightforwardly with existing features.
Sure, but you'll have to agree that there comes a point where library solutions end up being so syntactically convoluted that they become difficult to visually parse. Bad-practice, nonsensical example:

version(ParallelUnittests)
const const(TypeTuple!(string, "name",
                       UnittestImpl!SomeT, "test",
                       bool, "result"))
testTempfileHammering(string name, alias fun, SomeT, Args...)(Args args)
pure @safe @(TestSuite.standard)
if ((name.length > 0)
    && __traits(parallelizable, fun)
    && !is(Args)
    && (Args.length > 2)
    && allSatisfy!(isSomeString, args))
{
    /* ... */
}
May 05 2014
prev sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 D has so many language features, we need a higher bar for 
 adding new ones, especially ones that can be done 
 straightforwardly with existing features.
If I am not wrong, all that is needed here is a boolean compile-time flag, like "__is_main_module". I think this is a small enough feature, and one that gives enough back in saved time, to deserve to be built in. I have needed this for four or five years and the need/desire isn't going away.

Bye,
bearophile
May 05 2014
parent reply Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 05 May 2014 10:00:54 +0000
bearophile via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Walter Bright:

 D has so many language features, we need a higher bar for
 adding new ones, especially ones that can be done
 straightforwardly with existing features.
If I am not wrong, all this is needed here is a boolean compile-time flag, like "__is_main_module". I think this is a small enough feature and gives enough back that saves time, to deserve to be a built-in feature. I have needed this for four or five years and the need/desire isn't going away.
As far as I can tell, adding a feature wouldn't add much over simply using a version block for defining your demos. Just because something is done in Python does not mean that it is appropriate for D or that it requires adding features to D in order to support it. Though I confess that I'm biased against it, because not only have I never needed the feature that you're looking for, but I'd actually consider it bad practice to organize code that way. It makes no sense to me to make it so that any arbitrary module can be the main module for the program. Such code should be kept separate IMHO. And I suspect that most folks who either haven't done much with Python and/or who don't particularly like Python would agree with me. Maybe even many of those who use Python would; I don't know.

Regardless, I'd strongly argue that this is a case where using user-defined versions is the obvious answer. It may not give you what you want, but it gives you what you need in order to make it so that a module has a main that's compiled in only when you want it to be. And D is already quite complicated. New features need to pass a high bar, and adding a feature just so that something is built in, rather than using an existing feature which solves the problem fairly simply, definitely does not pass that bar IMHO. I'm completely with Walter on this one.

- Jonathan M Davis
May 05 2014
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Jonathan M Davis:

 Just because something is done in
 python does not mean that it is appropriate for D or that it 
 requires adding features to D in order to support it.
I agree. On the other hand, I now have years of experience in both languages and I still have this need in D.
 It makes no sense to me to make it so that
 any arbitrary module can be the main module for the program.
This feature is mostly for single modules that you can download from archives, the web, etc. So it's for library code contained in single modules. In Python, code is usually short, so in a single module you can implement many data structures, data visualizations, data converters, etc. So it's quite handy for such modules to have a demo, or even an interactive demo. Or they can be used with command-line arguments (with getopt), like a sound file converter. And then you can also import this module from other modules to perform the same operation (like sound file conversion) from your code. So you can use it both as a program that does something and as a module for a larger system.
 Such code should be kept separate IMHO.
This means that you now have two modules, so to download them atomically you need some kind of packaging, like a zip. If your project is composed of many modules this is not a problem. But if you have a single-module project (and this happens often in Python), going from 1 to 2 files is not nice. I have written tens of reusable D modules, and some of them have a demo or are usable stand-alone when you have simpler needs.
 Maybe even many of those who use python would; I don't know.
In Python it is a very commonly used idiom. And there is not much in D that makes the same idiom less useful :-)

Bye,
bearophile
May 05 2014
parent reply Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 05 May 2014 11:26:29 +0000
bearophile via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Jonathan M Davis:
 Such code should be kept separate IMHO.
This means that you now have two modules, so to download them atomically you need some kind of packaging, like a zip. If your project is composed by many modules this is not a problem. But if you have a single module project (and this happens often in Python), going from 1 to 2 files is not nice. I have written tens of reusable D modules, and some of them have a demo or are usable stand-alone when you have simpler needs.
Honestly, I wouldn't even consider distributing something that was only a single module in size unless it were on the scale of std.datetime, which we've generally agreed is too large for a single module. So, a single module wouldn't have enough functionality to be worth distributing. And even if I were to distribute such a module, I'd let its documentation speak for itself and otherwise just expect the programmer to read the code.

Regardless, the version specifier makes it easy to have a version where main is defined for demos or whatever else you might want to do with it. So, I'd suggest just using that. I highly doubt that you'd be able to talk either Walter or Andrei into supporting a separate feature for this. At this point, we're trying to use what we already have to implement new things rather than adding new features to the language, no matter how minor they might seem. New language features are likely to be restricted to things where we really need them to be language features. And this doesn't fit that bill.

- Jonathan M Davis
May 05 2014
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Jonathan M Davis:

 Honestly, I wouldn't even consider distributing something that 
 was only a
 single module in size unless it were on the scale of 
 std.datetime, which we've
 generally agreed is too large for a single module.
 So, a single module
 wouldn't have enough functionality to be worth distributing.
This reasoning style is similar to the Groucho Marx quote: "I don't care to belong to any club that will have me as a member"

In the Python world online you can find thousands of single-module projects (a few of them are mine). I have plenty of single D modules that encapsulate a single functionality. In Haskell's cabal you can find many single modules that add functionality (plus larger projects like Diagrams). And I think D has to strongly encourage the creation of such an ecosystem of modules that you download and use in your programs. You can't have everything in the standard library, it's not wise to keep re-writing the same things (like 2D vectors - I have already seen them implemented ten different times in the D world), and there are plenty of useful things that can be contained in single modules, especially if such modules can import the functionality of one or more other modules.
 And even if I were to distribute such a module, I'd let its
 documentation speak for itself
 and otherwise just expect the programmer to read the code.
A demo and the documentation are both useful. And the documentation can't replace stand-alone functionality.
 Regardless, the version specifier makes it easy to have a 
 version where main is defined for demos or whatever else
 you might want to do with it.
Bye, bearophile
May 05 2014
parent reply "Meta" <jared771 gmail.com> writes:
However, the community is starting to standardize around Dub as 
the standard package manager. Dub makes downloading a package as 
easy as editing a JSON file (and it scales such that you can 
download a project of any size this way). Did Python have a 
proper package manager before this idiom arose?
May 05 2014
parent "bearophile" <bearophileHUGS lycos.com> writes:
Meta:

 However, the community is starting to standardize around Dub as 
 the standard package manager. Dub makes downloading a package 
 as easy as editing a JSON file (and it scales such that you can 
 download a project of any size this way).
Having package manager(s) in Python doesn't make single-module Python projects less popular or less appreciated. Most Python projects are very small, thanks both to the standard library and to code succinctness (which allows a small program to do a lot), and to the presence of a healthy ecosystem of third-party modules that you can import to avoid re-writing things already done by other people. All this should become more common in the D world :-)
 Did Python have a proper package manager before this idiom 
 arose?
Both are very old, and I am not sure, but I think the main module idiom predates it.

Bye,
bearophile
May 05 2014
prev sibling next sibling parent reply Johannes Pfau <nospam example.com> writes:
Am Wed, 30 Apr 2014 08:43:31 -0700
schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:

 However, this is too 
 coarse-grained - it would be great if each unittest could be pooled 
 across the thread pool. That's more difficult to implement.
I filed a pull request which allowed running unit tests individually (and in different threads*) two years ago, but didn't pursue this further: https://github.com/D-Programming-Language/dmd/pull/1131 https://github.com/D-Programming-Language/druntime/pull/308 To summarize: It provides a function pointer for every unit test to druntime or user code. This is actually easy to do. Naming tests requires changes in the parser, but I guess that shouldn't be difficult either. * Some time ago there was a discussion whether unit tests can rely on other tests being executed first / execution order. AFAIK some phobos tests require this. That of course won't work if you run the tests in different threads.
Apr 30 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/30/14, 8:54 AM, Johannes Pfau wrote:
 Am Wed, 30 Apr 2014 08:43:31 -0700
 schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:

 However, this is too
 coarse-grained - it would be great if each unittest could be pooled
 across the thread pool. That's more difficult to implement.
I filed a pull request which allowed running unit tests individually (and in different threads*) two years ago, but didn't pursue this further: https://github.com/D-Programming-Language/dmd/pull/1131 https://github.com/D-Programming-Language/druntime/pull/308 To summarize: It provides a function pointer for every unit test to druntime or user code. This is actually easy to do. Naming tests requires changes in the parser, but I guess that shouldn't be difficult either.
That's fantastic, would you be willing to reconsider that work?
 * Some time ago there was a discussion whether unit tests can rely on
    other tests being executed first / execution order. AFAIK some phobos
    tests require this. That of course won't work if you run the tests in
    different threads.
I think indeed a small number of unittests rely on order of execution. Those will still be runnable with a fork factor of 1. We'd need a way to specify that - either a flag or:

shared static this() { Runtime.unittestThreads = 1; }


Andrei
Apr 30 2014
next sibling parent reply Byron <byron.heads gmail.com> writes:
On Wed, 30 Apr 2014 09:02:54 -0700, Andrei Alexandrescu wrote:

 
 I think indeed a small number of unittests rely on order of execution.
 Those will be still runnable with a fork factor of 1. We'd need a way to
 specify that - either a flag or:
 
 shared static this() { Runtime.unittestThreads = 1; }
 
 
 Andrei
Named tests seem like a no-brainer to me. Maybe nested unittests?

unittest OrderTests {
  // setup for all child tests?

  unittest a { }
  unittest b { }
}

I also wonder if it's just better to extend/expose the unittest API for more advanced things like order of execution, test reporting, and parallel execution. And we can just support an external unittesting library to do all the advanced testing options.
Apr 30 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/30/14, 9:19 AM, Byron wrote:
 On Wed, 30 Apr 2014 09:02:54 -0700, Andrei Alexandrescu wrote:

 I think indeed a small number of unittests rely on order of execution.
 Those will be still runnable with a fork factor of 1. We'd need a way to
 specify that - either a flag or:

 shared static this() { Runtime.unittestThreads = 1; }


 Andrei
Named tested seems like a no brainier to me. Maybe nested unittests? unittest OrderTests { // setup for all child tests? unittest a { } unittest b { } }
I wouldn't want to get too excited about stuff without there being a need for it. We risk overcomplicating things (i.e. what happens inside loops, etc.).
 I also wonder if its just better to extend/expose the unittest API for
 more advanced things like order of execution, test reporting, and parallel
 execution. And we can just support an external unittesting library to do
 all the advanced testing options.
That would be pretty rad. Andrei
Apr 30 2014
parent Johannes Pfau <nospam example.com> writes:
Am Wed, 30 Apr 2014 09:28:18 -0700
schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:

 I also wonder if its just better to extend/expose the unittest API
 for more advanced things like order of execution, test reporting,
 and parallel execution. And we can just support an external
 unittesting library to do all the advanced testing options.  
That would be pretty rad.
We can kinda do that. I guess the main problem for a simple approach is that unittests are functions but advanced frameworks often have unittest classes/objects. We can't really emulate that on top of functions.

What we can easily do is parse UDAs on unittests and provide access to these UDAs. For example:

------------
module my.testlib;

struct Author
{
    string _name;
    string serialize() {return _name;} //Must be evaluated in CTFE
}
------------
module test;
import my.testlib;

@Author("The Author") unittest
{
    //Code goes here
}
------------

Then with the mentioned pull request we just add another field to the runtime unittest information struct: an associative array with string keys matching the qualified name of the UDA and, as values, the strings returned by serialize() (evaluated by CTFE). Then we have for the test runner:

------------
foreach( m; ModuleInfo )
{
    foreach(test; m.unitTests)
    {
        if("my.testlib.Author" in test.uda)
            writefln("Author: %s", test.uda["my.testlib.Author"]);
    }
}
------------

This is some more work to implement though, but it's additive so we can add this stuff later.
Apr 30 2014
prev sibling next sibling parent Xavier Bigand <flamaros.xavier gmail.com> writes:
On 30/04/2014 18:19, Byron wrote:
 On Wed, 30 Apr 2014 09:02:54 -0700, Andrei Alexandrescu wrote:

 I think indeed a small number of unittests rely on order of execution.
 Those will be still runnable with a fork factor of 1. We'd need a way to
 specify that - either a flag or:

 shared static this() { Runtime.unittestThreads = 1; }


 Andrei
Named tested seems like a no brainier to me. Maybe nested unittests? unittest OrderTests { // setup for all child tests? unittest a { } unittest b { } } I also wonder if its just better to extend/expose the unittest API for more advanced things like order of execution, test reporting, and parallel execution. And we can just support an external unittesting library to do all the advanced testing options.
I don't see the use case. I'd find it nice enough if IDEs were able to put unittests in a tree, using the module names for the hierarchy.
Apr 30 2014
prev sibling parent reply "Jason Spencer" <j8spencer gmail.com> writes:
On Wednesday, 30 April 2014 at 16:19:48 UTC, Byron wrote:
 On Wed, 30 Apr 2014 09:02:54 -0700, Andrei Alexandrescu wrote:

 
 I think indeed a small number of unittests rely on order of 
 execution.
Maybe nested unittests? unittest OrderTests { // setup for all child tests? unittest a { } unittest b { } }
I like my unit tests to be next to the element under test, and it seems like this nesting would impose some limits on that.

Another idea might be to use the level of the unit as an indicator of order dependencies. If UTs for B call/depend on A, then we would assign A to level 0, run its UTs first, and assign B to level 1. All 0's run before all 1's. Could we use a template arg on the UT to indicate level?

unittest!(0) UtA { /* test A */ }
unittest!(1) UtB { /* test B */ }

Or maybe some fancier compiler dependency analysis?
May 01 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/1/14, 10:41 AM, Jason Spencer wrote:
 On Wednesday, 30 April 2014 at 16:19:48 UTC, Byron wrote:
 On Wed, 30 Apr 2014 09:02:54 -0700, Andrei Alexandrescu wrote:

 I think indeed a small number of unittests rely on order of execution.
Maybe nested unittests? unittest OrderTests { // setup for all child tests? unittest a { } unittest b { } }
I like my unit tests to be next to the element under test, and it seems like this nesting would impose some limits on that. Another idea might be to use the level of the unit as an indicator of order dependencies. If UTs for B call/depend on A, then we would assign A to level 0, run it's UTs first, and assign B to level 1. All 0's run before all 1's. Could we use a template arg on the UT to indicate level? unittest!(0) UtA { // test A} unittest!{1} UtB { // test B} Or maybe some fancier compiler dependency analysis?
Well how complicated can we make it all? -- Andrei
May 01 2014
parent reply "Jason Spencer" <j8spencer gmail.com> writes:
On Thursday, 1 May 2014 at 17:57:05 UTC, Andrei Alexandrescu 
wrote:
 Well how complicated can we make it all? -- Andrei
As simple as possible, but no simpler :)

I've seen you favor this or that feature because it would make unit testing easier and more accessible, and eschew features that would cause folks to not bother. In truth, we could leave it how it is. But I surmise you started this thread to improve the feature and encourage more use of unittest. So we're looking for the sweet spot.

I don't think it's important to support the sharing of state between unit tests. But I do see value in being able to influence the order of test execution, largely for debugging reasons. It's important for a module's tests to be able to depend on other modules--otherwise, unittest is not very enticing. If they do and there's a failure, it's hugely helpful to know the failure is caused by the unit under test, and not the dependency(s). The common way to do that is to run the tests in reverse order of dependency--i.e. levelize the design and test from the bottom up. See "Large Scale C++ SW Design", Lakos, Chp. 3-4. I imagine there are other niche reasons for order, but for me, this is the driving reason.

So a middle ground seems possible: if order is important, it might be a workable approach to run unittests in reverse module dependency order by default. A careful programmer could arrange classes/functions in modules to take advantage of that order if it were important. Seems like we'd have the dependency information--building and traversing a tree shouldn't be that tough.... To preserve the order, you'd only be able to parallelize the UTs within one module at a time (unless there's a different flag or something.)

But it seems the key question is whether order can EVER be important for any reason. I for one would be willing to give up parallelization to get levelized tests. What are you seeing on your project? How do you allow tests to have dependencies and avoid order issues? Why is parallelization more important than that?
May 01 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/1/14, 2:28 PM, Jason Spencer wrote:
 On Thursday, 1 May 2014 at 17:57:05 UTC, Andrei Alexandrescu wrote:
 Well how complicated can we make it all? -- Andrei
As simple as possible, but no simpler :) I've seen you favor this or that feature because it would make unit testing easier and more accessible, and eschew features that would cause folks to not bother. In truth, we could leave it how it is. But I surmise you started this thread to improve the feature and encourage more use of unit test. So we're looking for the sweet spot. I don't think it's important to support the sharing of state between unit tests. But I do see value in being able to influence the order of test execution, largely for debugging reasons. It's important for a module's tests to be able to depend on other modules--otherwise, unittest is not very enticing. If it does and there's a failure, it's hugely helpful to know the failure is caused by the unit-under-test, and not the dependency(s). The common way to do that is to run the tests in reverse order of dependency--i.e. levelize the design and test from the bottom up. See "Large Scale C++ SW Design", Lakos, Chp. 3-4. I imagine there are other niche reasons for order, but for me, this is the driving reason. So it seems a nice middle ground. If order is important, it might be a workable approach to run unittest in the reverse module dependency order by default. A careful programmer could arrange those classes/functions in modules to take advantage of that order if it were important. Seems like we'd have the dependency information--building and traversing a tree shouldn't be that tough.... To preserve it, you'd only be able to parallelize the UTs within a module at a time (unless there's a different flag or something.) But it seems the key question is whether order can EVER be important for any reason. I for one would be willing to give up parallelization to get levelized tests. What are you seeing on your project? How do you allow tests to have dependencies and avoid order issues? Why is parallelization more important than that?
I'll be blunt. What you say is technically sound (which is probably why you believe it is notable) but seems to me an unnecessarily complex engineering contraption that in all likelihood has more misuses than good uses. I fully understand you may think I'm a complete chowderhead for saying this; in the past I've been in your place and others have been in mine, and it took me years to appreciate both positions. -- Andrei
May 01 2014
next sibling parent Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 01 May 2014 14:40:41 -0700
Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com>
wrote:

 On 5/1/14, 2:28 PM, Jason Spencer wrote:
 But it seems the key question is whether order can EVER be
 important for any reason. I for one would be willing to give up
 parallelization to get levelized tests. What are you seeing on
 your project? How do you allow tests to have dependencies and
 avoid order issues? Why is parallelization more important than
 that?
I'll be blunt. What you say is technically sound (which is probably why you believe it is notable) but seems to me an unnecessarily complex engineering contraption that in all likelihood has more misuses than good uses. I fully understand you may think I'm a complete chowderhead for saying this; in the past I've been in your place and others have been in mine, and it took me years to appreciate both positions. -- Andrei
It's my understanding that given how druntime is put together, it should be possible to override some of its behaviors such that you could control the order in which tests are run (the main thing lacking at this point is that you can currently only control it at module-level granularity), and that that's what existing third-party unit test frameworks for D do. So, I would think that we could make it so that the default test runner does things the sensible way that works for most everyone, and then anyone who really wants more control can choose to override the normal test runner to run the tests the way that they want to. That should be essentially the way that it is now.

The main question then is which features we think are sensible for everyone, and I think that based on this discussion, at this point, it's primarily:

1. Make it possible for druntime to access unit test functions individually.
2. Make it so that druntime runs unit test functions in parallel unless they're marked as needing to be run in serial (probably with a UDA for that purpose, as sketched below).
3. Make it so that we can name unittest blocks so that stack traces have better function names in them.

With those sorted out, we can look at further features like whether we want to be able to run unit tests by name (or whatever other nice features we can come up with), but we might as well start there rather than trying to come up with a comprehensive list of the features that D's unit testing facilities should have (especially since we really should be erring on the side of simple).

- Jonathan M Davis
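As a sketch of point 2, the opt-out could be as small as a marker UDA; the @serial name is invented, nothing like it exists yet:

enum serial; // hypothetical marker attribute

@serial unittest
{
    // touches shared state (files, sockets); must run alone
}

unittest
{
    // self-contained; safe for the parallel pool
}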
May 01 2014
prev sibling parent reply "Jason Spencer" <j8spencer gmail.com> writes:
On Thursday, 1 May 2014 at 21:40:38 UTC, Andrei Alexandrescu 
wrote:
 I'll be blunt. What you say is technically sound (which is 
 probably why you believe it is notable)...
Well, I suppose that's not the MOST insulting brush-off I could hope for, but it falls short of encouraging me to contribute ideas for the improvement of the language.

I'll just add this: I happened to introduce a colleague to the D webpage the other day, and ran across this in the overview: "D ... doesn't come with a VM, a religion, or an overriding philosophy. It's a practical language for practical programmers who need to get the job done quickly, reliably, and leave behind maintainable, easy to understand code." This business that only inherently parallel tests that never access disk, share setup, etc. are TRUE unit tests smacks much more of religion than pragmatism. Indeed, Phobos demonstrates that sometimes the practical thing to do is to violate these normally good rules.

Another overriding principle of D is that the easy thing to do should be the safe thing to do, and dangerous things should take some work. I don't see that reflected in the proposal to turn parallelism on by default. This seems like a time bomb waiting to go off on unsuspecting acolytes of the cult of inherently-parallel-tests-onlyism.

If we don't want to consider how we can accommodate both camps here, then I must at least support Jonathan's modest suggestion that parallel UTs require active engagement rather than being the default.
May 01 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 2 May 2014 at 03:04:39 UTC, Jason Spencer wrote:
 If we don't want to consider how we can accommodate both camps 
 here, then I must at least support Jonathan's modest suggestion 
 that parallel UTs require active engagement rather than being 
 the default.
Use chroot() and fork(). Solves all problems.
May 01 2014
parent reply "w0rp" <devw0rp gmail.com> writes:
On Friday, 2 May 2014 at 04:28:26 UTC, Ola Fosheim Grøstad wrote:
 On Friday, 2 May 2014 at 03:04:39 UTC, Jason Spencer wrote:
 If we don't want to consider how we can accommodate both camps 
 here, then I must at least support Jonathan's modest 
 suggestion that parallel UTs require active engagement rather 
 than being the default.
Use chroot() and fork(). Solves all problems.
You know, executing batches of tests in multiple processes could be a good compromise. You might still run into filesystem issues, but if you run a series of tests with a number of processes at the same time, you can at least guarantee that you won't run into shared memory issues.
May 01 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 2 May 2014 at 06:57:46 UTC, w0rp wrote:
 You know, executing batches of tests in multiple processes 
 could be a good compromise. You might still run into filesystem 
 issues, but if you run a series of tests with a number of 
 processes at the same time, you can at least guarantee that you 
 won't run into shared memory issues.
Using fork() would be good for multi-threaded unit testing or when testing global data structures (singletons). I don't get the desire for demanding that unit tests be "pure". That would miss the units that are most likely to blow up in an application. If you fork before opening any files it probably will work out OK.
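A rough POSIX-only sketch of that (not how druntime works today): each test runs in a forked child, so leaked globals and open files die with the child process.

version (Posix)
{
    import core.sys.posix.unistd : fork, _exit;
    import core.sys.posix.sys.wait : waitpid, WIFEXITED, WEXITSTATUS;

    bool runIsolated(void function() test)
    {
        auto pid = fork();
        if (pid == 0)
        {
            test();   // an uncaught assertion terminates the child non-zero
            _exit(0); // reached only on success
        }
        int status;
        waitpid(pid, &status, 0);
        return WIFEXITED(status) && WEXITSTATUS(status) == 0;
    }
}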
May 02 2014
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/1/14, 8:04 PM, Jason Spencer wrote:
 On Thursday, 1 May 2014 at 21:40:38 UTC, Andrei Alexandrescu wrote:
 I'll be blunt. What you say is technically sound (which is probably
 why you believe it is notable)...
Well, I suppose that's not the MOST insulting brush-off I could hope for, but it falls short of encouraging me to contribute ideas for the improvement of the language.
Sorry, and that's great. Thanks! -- Andrei
May 02 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/1/14, 8:04 PM, Jason Spencer wrote:
 On Thursday, 1 May 2014 at 21:40:38 UTC, Andrei Alexandrescu wrote:
 I'll be blunt. What you say is technically sound (which is probably
 why you believe it is notable)...
Well, I suppose that's not the MOST insulting brush-off I could hope for, but it falls short of encouraging me to contribute ideas for the improvement of the language.
I need to make an amendment to this because indeed it's more than 2 std deviations away from niceness: I have a long history of ideas with a poor complexity/usefulness ratio, and I now wish I'd received such a jolt. -- Andrei
May 02 2014
parent "Jason Spencer" <j8spencer gmail.com> writes:
On Friday, 2 May 2014 at 14:59:50 UTC, Andrei Alexandrescu wrote:
 I need to make an amend to this because indeed it's more than 2 
 std deviations away from niceness: I have a long history of 
 ideas with a poor complexity/usefulness ratio, and I now wish 
 I'd received such a jolt. -- Andrei
I appreciate that, and can accept it in the spirit of mentoring and helpfulness. What might work even better for me, though, is to forego the assumption that I need such a jolt or that you are the person, in this forum at least, to provide it, and simply address the merits or lack thereof of the suggestion as made.

If we can't agree that a method, direct or indirect, to control the order of UTs is appropriate, then we should opt for the status quo. By my reading of this thread, that leaves us with no consensus that UTs MUST be order-independent, but that being able to parallelize is a good thing. It seems we can:

1. leave defaults as they are and make parallelization an option, or
2. make it the language model and allow people to dissent with an option

I can agree with Andrei that you'd rather have a solid, well-defined language that works in most cases without too many buttons, switches, and levers. I'm just not sure that jibes with "easiest is safest" and "don't impose a model, provide a tool." To me, improving the performance of a non-performance-critical aspect does not weigh enough to counterbalance a safety risk and model imposition. How about others?

Test names seem pretty much agreed to. I think the idea of making everything available to druntime would let pretty much anyone do what they need.
May 02 2014
prev sibling next sibling parent Johannes Pfau <nospam example.com> writes:
Am Wed, 30 Apr 2014 09:02:54 -0700
schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:

 https://github.com/D-Programming-Language/dmd/pull/1131
 https://github.com/D-Programming-Language/druntime/pull/308

 To summarize: It provides a function pointer for every  unit test to
 druntime or user code. This is actually easy to do. Naming tests
 requires changes in the parser, but I guess that shouldn't be
 difficult either.  
That's fantastic, would you be willing to reconsider that work?
Sure, I'll have a look later today.
Apr 30 2014
prev sibling parent reply Johannes Pfau <nospam example.com> writes:
Am Wed, 30 Apr 2014 09:02:54 -0700
schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:

 To summarize: It provides a function pointer for every  unit test to
 druntime or user code. This is actually easy to do. Naming tests
 requires changes in the parser, but I guess that shouldn't be
 difficult either.  
That's fantastic, would you be willing to reconsider that work?
Are you still interested in this? I guess we could build a std.test phobos module to completely replace the current unittest implementation, see: http://forum.dlang.org/post/ljtbch$lg6$1 digitalmars.com This seems to be a better solution, especially considering extension possibilities.
May 01 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/1/14, 4:35 AM, Johannes Pfau wrote:
 Am Wed, 30 Apr 2014 09:02:54 -0700
 schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:

 To summarize: It provides a function pointer for every  unit test to
 druntime or user code. This is actually easy to do. Naming tests
 requires changes in the parser, but I guess that shouldn't be
 difficult either.
That's fantastic, would you be willing to reconsider that work?
Are you still interested in this?
Yes.
 I guess we could build a std.test phobos module to completely replace
 the current unittest implementation, see:
 http://forum.dlang.org/post/ljtbch$lg6$1 digitalmars.com

 This seems to be a better solution, especially considering extension
 possibilities.
I think anything that needs more than the user writing unittests and adding a line somewhere would be suboptimal. Basically we need all we have now, just run in parallel. Andrei
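A sketch of roughly that, using only today's hooks; ModuleInfo exposes a single unitTest pointer per module, so this parallelizes across modules only - per-unittest pooling needs the per-test pointers discussed above.

version (unittest)
shared static this()
{
    import core.runtime : Runtime;
    import std.parallelism : parallel;

    Runtime.moduleUnitTester = function bool()
    {
        ModuleInfo*[] mods;
        foreach (m; ModuleInfo)
            if (m.unitTest !is null)
                mods ~= m;

        foreach (m; parallel(mods))
        {
            auto fp = m.unitTest;
            fp(); // a failing assertion propagates out of the loop
        }
        return true; // then run main as usual
    };
}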
May 01 2014
prev sibling next sibling parent reply "QAston" <qaston gmail.com> writes:
On Wednesday, 30 April 2014 at 15:43:35 UTC, Andrei Alexandrescu 
wrote:
 Hello,


 A coworker mentioned the idea that unittests could be run in 
 parallel (using e.g. a thread pool). I've rigged things to run 
 in parallel unittests across modules, and that works well. 
 However, this is too coarse-grained - it would be great if each 
 unittest could be pooled across the thread pool. That's more 
 difficult to implement.

 This brings up the issue of naming unittests. It's becoming 
 increasingly obvious that anonymous unittests don't quite scale 
 - coworkers are increasingly talking about "the unittest at 
 line 2035 is failing" and such. With unittests executing in 
 multiple threads and issuing e.g. logging output, this is only 
 likely to become more exacerbated. We've resisted named 
 unittests but I think there's enough evidence to make the 
 change.

 Last but not least, virtually nobody I know runs unittests and 
 then main. This is quickly becoming an idiom:

 version(unittest) void main() {}
 else void main()
 {
    ...
 }

 I think it's time to change that. We could do it the 
 non-backward-compatible way by redefining -unittest to instruct 
 the compiler to not run main. Or we could define another flag 
 such as -unittest-only and then deprecate the existing one.

 Thoughts? Would anyone want to work on such stuff?


 Andrei
An existing library implementation: https://github.com/atilaneves/unit-threaded
Apr 30 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/30/14, 9:24 AM, QAston wrote:
 An existing library implementation:
 https://github.com/atilaneves/unit-threaded
Nice! The "Warning: With dmd 2.064.2 and the gold linker on Linux 64-bit this code crashes." is hardly motivating though :o). I think this project is a confluence of a couple others, such as logging and a collection of specialized assertions. But it's hard to tell without documentation, and the linked output https://github.com/atilaneves/unit-threaded/blob/master/unit_threaded/io.d does not exist. Andrei
Apr 30 2014
parent "Atila Neves" <atila.neves gmail.com> writes:
On Wednesday, 30 April 2014 at 16:32:19 UTC, Andrei Alexandrescu 
wrote:
 On 4/30/14, 9:24 AM, QAston wrote:
 An existing library implementation:
 https://github.com/atilaneves/unit-threaded
Nice! The "Warning: With dmd 2.064.2 and the gold linker on Linux 64-bit this code crashes." is hardly motivating though :o). I think this project is a confluence of a couple others, such as logging and a collection of specialized assertions. But it's hard to tell without documentation, and the linked output https://github.com/atilaneves/unit-threaded/blob/master/unit_threaded/io.d does not exist. Andrei
I'm thinking of removing the warning but I have no idea how many people are using dmd 2.064.2, and it does crash if used with ld.gold. It was a dmd bug that got fixed (I voted on it). I fixed the Markdown link. That's what happens when I move code around! If you want to see what the output is like you can check out https://travis-ci.org/atilaneves/cerealed or git clone it and run dub test. I think seeing failing output is just as interesting as well, so there's a failing example in there that can be executed. The README.md says how. When I first wrote this I tried using it on Phobos to see if I could run the unit tests a lot faster but didn't have a lot of luck. I think I ran out of memory trying to reflect on its modules but I can't remember. I should try that again. Atila
Apr 30 2014
prev sibling parent "Atila Neves" <atila.neves gmail.com> writes:
On Wednesday, 30 April 2014 at 16:24:16 UTC, QAston wrote:
 On Wednesday, 30 April 2014 at 15:43:35 UTC, Andrei 
 Alexandrescu wrote:
 Hello,


 A coworker mentioned the idea that unittests could be run in 
 parallel (using e.g. a thread pool). I've rigged things to run 
 in parallel unittests across modules, and that works well. 
 However, this is too coarse-grained - it would be great if 
 each unittest could be pooled across the thread pool. That's 
 more difficult to implement.

 This brings up the issue of naming unittests. It's becoming 
 increasingly obvious that anonymous unittests don't quite 
 scale - coworkers are increasingly talking about "the unittest 
 at line 2035 is failing" and such. With unittests executing in 
 multiple threads and issuing e.g. logging output, this is only 
 likely to become more exacerbated. We've resisted named 
 unittests but I think there's enough evidence to make the 
 change.

 Last but not least, virtually nobody I know runs unittests and 
 then main. This is quickly becoming an idiom:

 version(unittest) void main() {}
 else void main()
 {
   ...
 }

 I think it's time to change that. We could do it the 
 non-backward-compatible way by redefining -unittest to 
 instruct the compiler to not run main. Or we could define 
 another flag such as -unittest-only and then deprecate the 
 existing one.

 Thoughts? Would anyone want to work on such stuff?


 Andrei
An existing library implementation: https://github.com/atilaneves/unit-threaded
Beat me to it! :P The concurrency and naming aspects are exactly what drove me to write unit-threaded to begin with. I probably wouldn't have bothered if D already had the functionality I wanted. Atila
Apr 30 2014
prev sibling next sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Apr 30, 2014 at 08:43:31AM -0700, Andrei Alexandrescu via Digitalmars-d
wrote:
[...]
 Last but not least, virtually nobody I know runs unittests and then
 main.  This is quickly becoming an idiom:
 
 version(unittest) void main() {}
 else void main()
 {
    ...
 }
 
 I think it's time to change that. We could do it the
 non-backward-compatible way by redefining -unittest to instruct the
 compiler to not run main. Or we could define another flag such as
 -unittest-only and then deprecate the existing one.
Actually, I still run unittests before main. :) When I want to *not* run unittests, I just recompile with -release (and no -unittest).

The nice thing about unittests running before main is that during the code-compile-test cycle I can have the unittests run *and* manually test the program afterwards -- usually in this case I only run the program once before modifying the code and recompiling, so it would be needless work to have to compile the program twice (once for unittests, once for main).

An alternative, perhaps nicer, idea is to have a *runtime* switch to run unittests, recognized by druntime, perhaps something like:

    ./program --pragma-druntime-run-unittests

Or something similarly unlikely to clash with real options accepted by the program.


T

-- 
Genius may have its limitations, but stupidity is not thus handicapped. -- Elbert Hubbard
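A sketch of the runtime-switch idea with today's hooks, assuming a -unittest build; the flag name is made up. Druntime consults Runtime.moduleUnitTester before running main, so the override can decide at run time whether the tests execute.

version (unittest)
{
    import core.runtime : Runtime;

    bool flagGatedTester()
    {
        import std.algorithm : canFind;

        if (!Runtime.args.canFind("--run-unittests"))
            return true; // flag absent: skip tests, proceed to main

        foreach (m; ModuleInfo)
            if (auto fp = m.unitTest)
                fp(); // throws on the first failing assertion

        return true;
    }

    shared static this()
    {
        Runtime.moduleUnitTester = &flagGatedTester;
    }
}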
Apr 30 2014
prev sibling next sibling parent Orvid King via Digitalmars-d <digitalmars-d puremagic.com> writes:
As a note, I'm one of those that have used the main function in
addition to unittests. I use it in the unittest build mode of my JSON
serialization library, using the unittests to ensure I didn't break
anything, and then using main to run a performance test checking that
my changes actually did make things faster.
changes actually did make it faster.

On 4/30/14, H. S. Teoh via Digitalmars-d <digitalmars-d puremagic.com> wrote:
 On Wed, Apr 30, 2014 at 08:43:31AM -0700, Andrei Alexandrescu via
 Digitalmars-d wrote:
 [...]
 Last but not least, virtually nobody I know runs unittests and then
 main.  This is quickly becoming an idiom:

 version(unittest) void main() {}
 else void main()
 {
    ...
 }

 I think it's time to change that. We could do it the
 non-backward-compatible way by redefining -unittest to instruct the
 compiler to not run main. Or we could define another flag such as
 -unittest-only and then deprecate the existing one.
 [...]

 Actually, I still run unittests before main. :) When I want to *not*
 run unittests, I just recompile with -release (and no -unittest). The
 nice thing about unittests running before main is that during the
 code-compile-test cycle I can have the unittests run *and* manually
 test the program afterwards -- usually in this case I only run the
 program once before modifying the code and recompiling, so it would be
 needless work to have to compile the program twice (once for
 unittests, once for main).

 An alternative, perhaps nicer, idea is to have a *runtime* switch to
 run unittests, recognized by druntime, perhaps something like:

 	./program --pragma-druntime-run-unittests

 Or something similarly unlikely to clash with real options accepted by
 the program.

 T
Apr 30 2014
prev sibling next sibling parent reply "Dicebot" <public dicebot.lv> writes:
I believe only missing step right now is propagation of UDA's to 
RTInfo when demanded. Everything else can be done as Phobos 
solution.

And if requirement to have all modules transitively accessible 
from root one is acceptable it can be already done with 
http://dlang.org/traits.html#getUnitTests
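
For illustration, a minimal sketch of what that gives you (the module name is hypothetical, and the module must be transitively imported to be visible):

void runAllTests()
{
    import mymodule;  // hypothetical; must be reachable via imports

    // __traits(getUnitTests, m) yields an alias for every unittest
    // block declared in module m.
    foreach (test; __traits(getUnitTests, mymodule))
        test();  // each alias is callable like a regular function
}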

Simplicity of D unit tests is their best feature.
Apr 30 2014
next sibling parent reply "Atila Neves" <atila.neves gmail.com> writes:
On Wednesday, 30 April 2014 at 17:30:30 UTC, Dicebot wrote:
 I believe only missing step right now is propagation of UDA's 
 to RTInfo when demanded. Everything else can be done as Phobos 
 solution.

 And if requirement to have all modules transitively accessible 
 from root one is acceptable it can be already done with 
 http://dlang.org/traits.html#getUnitTests

 Simplicity of D unit tests is their best feature.
IMHO this best feature is only useful when writing a small script-like program. The hassle of using anything more heavy-duty is likely to make one not want to write tests. The unittest blocks are simple, and that's good. But I wouldn't use them for "real work" (and haven't).

When tests pass, it doesn't really matter if they were written using only assert, or what the output was like, or any of those things. But when they fail, I want to:

. Run the failing test(s) in isolation, selecting them on the command-line by name
. Have tests grouped in categories (I use packages) to run similar tests together
. Be able to enable debug output that is normally suppressed
. Know the name of the test to know which one is failing
. Have meaningful output from the failure without having to construct said meaningful output myself (assertEquals vs assert)

I don't know about anyone else, but I make my tests fail a lot.

I also added threading, hidden tests, and tests expected to fail to that list, but they are nice-to-have features. I can't do without the rest though. Also, I like pretty colours in the output for failure and success, but that might be just me :P

Atila
Apr 30 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 30 April 2014 at 18:04:43 UTC, Atila Neves wrote:
 On Wednesday, 30 April 2014 at 17:30:30 UTC, Dicebot wrote:
 I believe only missing step right now is propagation of UDA's 
 to RTInfo when demanded. Everything else can be done as Phobos 
 solution.

 And if requirement to have all modules transitively accessible 
 from root one is acceptable it can be already done with 
 http://dlang.org/traits.html#getUnitTests

 Simplicity of D unit tests is their best feature.
 IMHO this best feature is only useful when writing a small script-like
 program. The hassle of using anything more heavy-duty is likely to
 make one not want to write tests. The unittest blocks are simple, and
 that's good. But I wouldn't use them for "real work" (and haven't).

 When tests pass, it doesn't really matter if they were written using
 only assert, or what the output was like, or any of those things. But
 when they fail, I want to:

 . Run the failing test(s) in isolation, selecting them on the
 command-line by name
 . Have tests grouped in categories (I use packages) to run similar
 tests together
 . Be able to enable debug output that is normally suppressed
 . Know the name of the test to know which one is failing
 . Have meaningful output from the failure without having to construct
 said meaningful output myself (assertEquals vs assert)

 I don't know about anyone else, but I make my tests fail a lot.
I think this is the key difference. For me, a failing unit test is always an exceptional situation. And if a test group is complex enough to require categorization, then either my code is not procedural enough or the module is just too big and needs to be split.

There are of course always some tests with a complicated environment and/or I/O. But those are never unit tests, and thus belong to a completely different framework.
Apr 30 2014
parent reply "Átila Neves" <atila.neves gmail.com> writes:
On Wednesday, 30 April 2014 at 19:20:20 UTC, Dicebot wrote:
 On Wednesday, 30 April 2014 at 18:04:43 UTC, Atila Neves wrote:
 On Wednesday, 30 April 2014 at 17:30:30 UTC, Dicebot wrote:
 I believe only missing step right now is propagation of UDA's 
 to RTInfo when demanded. Everything else can be done as 
 Phobos solution.

 And if requirement to have all modules transitively 
 accessible from root one is acceptable it can be already done 
 with http://dlang.org/traits.html#getUnitTests

 Simplicity of D unit tests is their best feature.
 IMHO this best feature is only useful when writing a small script-like
 program. The hassle of using anything more heavy-duty is likely to
 make one not want to write tests. The unittest blocks are simple, and
 that's good. But I wouldn't use them for "real work" (and haven't).

 When tests pass, it doesn't really matter if they were written using
 only assert, or what the output was like, or any of those things. But
 when they fail, I want to:

 . Run the failing test(s) in isolation, selecting them on the
 command-line by name
 . Have tests grouped in categories (I use packages) to run similar
 tests together
 . Be able to enable debug output that is normally suppressed
 . Know the name of the test to know which one is failing
 . Have meaningful output from the failure without having to construct
 said meaningful output myself (assertEquals vs assert)

 I don't know about anyone else, but I make my tests fail a lot.
 I think this is the key difference. For me, a failing unit test is
 always an exceptional situation.
I TDD a lot. Tests failing are normal. Not only that, I refactor a lot as well. Which causes tests to fail. Fortunately I have tests failing to tell me I screwed up. Even if failing tests were exceptional, I still want everything I just mentioned.
 And if a test group is complex
 enough to require categorization, then either my code is not
 procedural enough or the module is just too big and needs to be
 split.
And when I split them I put them into a subcategory.
Apr 30 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 30 April 2014 at 21:09:51 UTC, Átila Neves wrote:
 I don't know about anyone else, but I make my tests fail a 
 lot.
 I think this is the key difference. For me, a failing unit test is
 always an exceptional situation.
I TDD a lot. Tests failing are normal. Not only that, I refactor a lot as well. Which causes tests to fail. Fortunately I have tests failing to tell me I screwed up.
I dream of a day when TDD crap will finally be discarded and fade into oblivion.
 Even if failing tests were exceptional, I still want everything 
 I just mentioned.
Probably. But will you still _need_ it? ;)
 And if a test group is complex
 enough to require categorization, then either my code is not
 procedural enough or the module is just too big and needs to be
 split.
And when I split them I put them into a subcategory.
This is somewhat redundant as they are already categorized by module / package.
May 01 2014
parent "Atila Neves" <atila.neves gmail.com> writes:
 I dream of a day when TDD crap will finally be discarded and
 fade into oblivion.
I think most people who don't like TDD don't fully understand it. At the same time, I think people who like TDD tend to abuse it. Either way, I like it, do it, and want my workflow to be the best it can be with it.
 Even if failing tests were exceptional, I still want 
 everything I just mentioned.
Probably. But will you still _need_ it? ;)
Yes. :P
 And if a test group is complex
 enough to require categorization, then either my code is not
 procedural enough or the module is just too big and needs to be
 split.
And when I split them I put them into a subcategory.
This is somewhat redundant as they are already categorized by module / package.
Which is why my library uses the fully qualified names of the tests to categorise them.

Atila
May 01 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-04-30 19:30, Dicebot wrote:
 I believe only missing step right now is propagation of UDA's to RTInfo
 when demanded. Everything else can be done as Phobos solution.
I don't see why this is necessary for this case.

-- 
/Jacob Carlborg
Apr 30 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 30 April 2014 at 20:20:26 UTC, Jacob Carlborg wrote:
 On 2014-04-30 19:30, Dicebot wrote:
 I believe only missing step right now is propagation of UDA's 
 to RTInfo
 when demanded. Everything else can be done as Phobos solution.
I don't see why this is necessary for this case.
It is not strictly necessary, but you can't reliably get all unit test blocks at compile time (modules must be transitively imported), and the current run-time reflection for tests is missing any data but the actual function pointers. I am personally perfectly satisfied with the "single root module imports all" approach, but it is likely to draw complaints if proposed as the "standard" way.
May 01 2014
parent Jacob Carlborg <doob me.com> writes:
On 2014-05-01 09:10, Dicebot wrote:

 It is not strictly necessary but you can't reliably get all unit test
 blocks during compile-time (must be transitively imported) and current
 run-time reflection for tests is missing any data but actual function
 pointers. I am personally perfectly satisfied with "single root module
 imports all" approach but it is likely to be complained if proposed as
 "standard" way.
The current runner wouldn't work. It needs to be replaced with one that uses __traits(getUnitTests).

-- 
/Jacob Carlborg
May 01 2014
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-04-30 17:43, Andrei Alexandrescu wrote:
 Hello,


 A coworker mentioned the idea that unittests could be run in parallel
 (using e.g. a thread pool). I've rigged things to run in parallel
 unittests across modules, and that works well. However, this is too
 coarse-grained - it would be great if each unittest could be pooled
 across the thread pool. That's more difficult to implement.
Can't we just collect all unit tests with __traits(getUnitTests) and put them through std.parallelism:

foreach (unitTest; unitTests.parallel)
    unitTest();
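
Filling in the collection step, a rough sketch of the whole idea (the module name is an assumption, and this is not an existing runner):

import std.parallelism : parallel;
import mymodule;  // hypothetical module whose tests we want to run

void runTestsInParallel()
{
    // Collect the unittest blocks as runtime function pointers...
    void function()[] unitTests;
    foreach (test; __traits(getUnitTests, mymodule))
        unitTests ~= &test;

    // ...then execute them across the default task pool.
    foreach (unitTest; unitTests.parallel)
        unitTest();
}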
 This brings up the issue of naming unittests. It's becoming increasingly
 obvious that anonymous unittests don't quite scale - coworkers are
 increasingly talking about "the unittest at line 2035 is failing" and
 such. With unittests executing in multiple threads and issuing e.g.
 logging output, this is only likely to become more exacerbated. We've
 resisted named unittests but I think there's enough evidence to make the
 change.
Named unit tests are already possible with the help of UDA's:

@name("foo bar") unittest
{
    assert(true);
}

I've tried several times here, in reviews, to get people to add some description to the unit tests. But so far no one has agreed.

I'm using something quite similar to RSpec from the Ruby world:

describe! "toMsec" in {
    it! "returns the time in milliseconds" in {
        assert(true);
    }
}

This uses the old syntax, with UDA's it becomes something like this:

@describe("toMsec")
{
    @it("returns the time in milliseconds") unittest
    {
        assert(true);
    }
}
 Last but not least, virtually nobody I know runs unittests and then
 main. This is quickly becoming an idiom:

 version(unittest) void main() {}
 else void main()
 {
     ...
 }
Or "dmd -unittest -main -run foo.d"
 I think it's time to change that. We could do it the
 non-backward-compatible way by redefining -unittest to instruct the
 compiler to not run main. Or we could define another flag such as
 -unittest-only and then deprecate the existing one.
Fine by me, I don't like that "main" is run after the unit tests.
 Thoughts? Would anyone want to work on such stuff?
Are you thinking of built-in support or an external library? -- /Jacob Carlborg
Apr 30 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/30/14, 1:19 PM, Jacob Carlborg wrote:
 On 2014-04-30 17:43, Andrei Alexandrescu wrote:
 Hello,


 A coworker mentioned the idea that unittests could be run in parallel
 (using e.g. a thread pool). I've rigged things to run in parallel
 unittests across modules, and that works well. However, this is too
 coarse-grained - it would be great if each unittest could be pooled
 across the thread pool. That's more difficult to implement.
 Can't we just collect all unit tests with __traits(getUnitTests) and
 put them through std.parallelism:

 foreach (unitTest; unitTests.parallel)
     unitTest();
I didn't know of that trait; I adapted code from druntime/src/test_runner.d.
 Named unit tests are already possible with the help of UDA's:

  @name("foo bar") unittest
 {
      assert(true);
 }

 I've tried several times here, in reviews, to get people to add some
 description to the unit tests. But so far no one has agreed.
Yah I think that's possible but I'd like the name to be part of the function name as well e.g. unittest__%s.
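
For reference, reading such a UDA is already doable with compile-time reflection - a minimal sketch (the name struct is hypothetical, mirroring Jacob's example):

struct name { string value; }

@name("foo bar") unittest
{
    assert(true);
}

void listNamedTests(alias mod)()
{
    import std.stdio : writeln;

    // Walk the unittest blocks of a module and print any name UDA found.
    foreach (test; __traits(getUnitTests, mod))
        foreach (attr; __traits(getAttributes, test))
            static if (is(typeof(attr) == name))
                writeln("unittest: ", attr.value);
}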
 I'm using something quite similar to RSpec from the Ruby world:

 describe! "toMsec" in {
      it! "returns the time in milliseconds" in {
          assert(true);
      }
 }

 This uses the old syntax, with UDA's it becomes something like this:

  @describe("toMsec")
 {
       @it("returns the time in milliseconds") unittest
      {
          assert(true);
      }
 }
That looks... interesting.
 Thoughts? Would anyone want to work on such stuff?
Are you thinking of built-in support or an external library?
Built in with possible help from druntime and/or std. Andrei
Apr 30 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 2014-04-30 22:41, Andrei Alexandrescu wrote:

 Yah I think that's possible but I'd like the name to be part of the
 function name as well e.g. unittest__%s.
Why is that necessary? To have the correct symbol name when debugging?
 I'm using something quite similar to RSpec from the Ruby world:

 describe! "toMsec" in {
      it! "returns the time in milliseconds" in {
          assert(true);
      }
 }

 This uses the old syntax, with UDA's it becomes something like this:

  @describe("toMsec")
 {
       @it("returns the time in milliseconds") unittest
      {
          assert(true);
      }
 }
That looks... interesting.
The Ruby syntax looks like this:

describe "toMsec" do
  it "returns the time in milliseconds" do
    assert true
  end

  context "when the time parameter is nil" do
    it "returns nil" do
      assert true
    end
  end
end

The interesting part about the Ruby implementation is that each it-block (the code between do/end) is executed in the context of an anonymous class instance. Each describe/context-block is turned into a class; nested blocks inherit from the outer block. In D the implementation would look like this:

class __toMsec
{
    void __someUniqueName123 () { }

    class __SomeUniqueName456 : __toMsec
    {
        void __someUniqueName789 () { }
    }
}

Each it-block (unit test) will be executed in a new instance of the closest surrounding class. This means you can have helper methods and instance variables shared across multiple tests, but each test will get a fresh copy of the data. Since the describe-blocks are implemented with classes that inherit, you can override helper methods in subclasses.

The unit test runner can also print out documentation, basically all the text in the "it" and "describe" parameters. Something like this:

https://coderwall-assets-0.s3.amazonaws.com/uploads/picture/file/1949/rspec_html_screen.png

-- 
/Jacob Carlborg
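
To make the fresh-instance rule above concrete, a small sketch (hypothetical runner code, not the Ruby library): mutations made by one test are invisible to the next, because each one runs in its own instance.

class ToMsecFixture
{
    int[] data;

    this() { data = [1, 2, 3]; }  // "setup", re-run for every test

    void testPop() { data = data[0 .. $ - 1]; assert(data.length == 2); }
    void testLen() { assert(data.length == 3); }  // unaffected by testPop
}

void runAll()
{
    // Each it-block gets a brand new instance of its surrounding class.
    (new ToMsecFixture).testPop();
    (new ToMsecFixture).testLen();
}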
May 01 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/1/14, 4:41 AM, Jacob Carlborg wrote:
 On 2014-04-30 22:41, Andrei Alexandrescu wrote:

 Yah I think that's possible but I'd like the name to be part of the
 function name as well e.g. unittest__%s.
Why is that necessary? To have the correct symbol name when debugging?
It's nice to have the name available in other tools (stack trace, debugger).
 The Ruby syntax looks like this:
[snip]
 The unit test runner can also print out a documentation, basically all
 text in the "it" and "describe" parameters. Something like this:
 https://coderwall-assets-0.s3.amazonaws.com/uploads/picture/file/1949/rspec_html_screen.png
That's all nice, but I feel we're going gung ho with overengineering already. If we give unittests names and then offer people a button "parallelize unittests" to push (don't even specify the number of threads! let the system figure it out depending on cores), that's a good step to a better world. Andrei
May 01 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 2014-05-01 17:15, Andrei Alexandrescu wrote:

 That's all nice, but I feel we're going gung ho with overengineering
 already. If we give unittests names and then offer people a button
 "parallelize unittests" to push (don't even specify the number of
 threads! let the system figure it out depending on cores), that's a good
 step to a better world.
Sure. But on the other hand, why should D not have a great unit testing framework built-in.

-- 
/Jacob Carlborg
May 01 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/1/14, 11:49 AM, Jacob Carlborg wrote:
 On 2014-05-01 17:15, Andrei Alexandrescu wrote:

 That's all nice, but I feel we're going gung ho with overengineering
 already. If we give unittests names and then offer people a button
 "parallelize unittests" to push (don't even specify the number of
 threads! let the system figure it out depending on cores), that's a good
 step to a better world.
Sure. But on the other hand, why should D not have a great unit testing framework built-in.
It should. My focus is to get (a) unittest names and (b) parallel testing into the language ASAP. Andrei
May 01 2014
next sibling parent Ary Borenszweig <ary esperanto.org.ar> writes:
On 5/1/14, 4:22 PM, Andrei Alexandrescu wrote:
 On 5/1/14, 11:49 AM, Jacob Carlborg wrote:
 On 2014-05-01 17:15, Andrei Alexandrescu wrote:

 That's all nice, but I feel we're going gung ho with overengineering
 already. If we give unittests names and then offer people a button
 "parallelize unittests" to push (don't even specify the number of
 threads! let the system figure it out depending on cores), that's a good
 step to a better world.
Sure. But on the other hand, why should D not have a great unit testing framework built-in.
It should. My focus is to get (a) unittest names and (b) parallel testing into the language ASAP. Andrei
What's the rush?
May 01 2014
prev sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 1 May 2014 at 19:22:36 UTC, Andrei Alexandrescu 
wrote:
 On 5/1/14, 11:49 AM, Jacob Carlborg wrote:
 On 2014-05-01 17:15, Andrei Alexandrescu wrote:

 That's all nice, but I feel we're going gung ho with 
 overengineering
 already. If we give unittests names and then offer people a 
 button
 "parallelize unittests" to push (don't even specify the 
 number of
 threads! let the system figure it out depending on cores), 
 that's a good
 step to a better world.
Sure. But on the other hand, why should D not have a great unit testing framework built-in.
It should. My focus is to get (a) unittest names and (b) parallel testing into the language ASAP. Andrei
It is the wrong approach. The proper one is to be able to define any sort of test running system in library code while still being 100% compatible with a naive `dmd -unittest`. We are almost there; the only step missing is transferring attributes to the runtime unittest block reflection.
May 05 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/5/14, 8:16 AM, Dicebot wrote:
 On Thursday, 1 May 2014 at 19:22:36 UTC, Andrei Alexandrescu wrote:
 On 5/1/14, 11:49 AM, Jacob Carlborg wrote:
 On 2014-05-01 17:15, Andrei Alexandrescu wrote:

 That's all nice, but I feel we're going gung ho with overengineering
 already. If we give unittests names and then offer people a button
 "parallelize unittests" to push (don't even specify the number of
 threads! let the system figure it out depending on cores), that's a
 good
 step to a better world.
Sure. But on the other hand, why should D not have a great unit testing framework built-in.
It should. My focus is to get (a) unittest names and (b) parallel testing into the language ASAP. Andrei
 It is the wrong approach. The proper one is to be able to define any
 sort of test running system in library code while still being 100%
 compatible with a naive `dmd -unittest`. We are almost there; the only
 step missing is transferring attributes to the runtime unittest block
 reflection.
Penalizing unittests that were bad in the first place is pretty attractive, but propagating attributes properly is even better. -- Andrei
May 05 2014
prev sibling parent reply Johannes Pfau <nospam example.com> writes:
Am Wed, 30 Apr 2014 22:19:24 +0200
schrieb Jacob Carlborg <doob me.com>:

 __traits(getUnitTests) 
I've always wondered, but never asked: Doesn't __traits(getUnitTests) usage suffer from the same problem std.benchmark had? That in order to make it work for all modules you a) have to explicitly mention every module b) use module constructors, leading to module constructor dependency hell (afaik the main reason we don't have a std.benchmark now) ?
May 01 2014
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 1 May 2014 at 08:51:47 UTC, Johannes Pfau wrote:
 Am Wed, 30 Apr 2014 22:19:24 +0200
 schrieb Jacob Carlborg <doob me.com>:

 __traits(getUnitTests)
I've always wondered, but never asked: Doesn't __traits(getUnitTests) usage suffer from the same problem std.benchmark had? That in order to make it work for all modules you a) have to explicitly mention every module b) use module constructors, leading to module constructor dependency hell (afaik the main reason we don't have a std.benchmark now) ?
You only need to make sure all modules are transitively imported from the initial one. Everything else can be done via recursive reflection with __traits(allMembers); module constructors are not needed either.
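
A rough sketch of that recursive reflection, under the stated assumption that everything is transitively imported (protection checks and other corner cases omitted):

void function()[] collectTests(alias scope_)()
{
    void function()[] tests;

    // Take the unittest blocks declared directly in this scope...
    foreach (test; __traits(getUnitTests, scope_))
        tests ~= &test;

    // ...then recurse into aggregates, which can hold their own tests.
    foreach (memberName; __traits(allMembers, scope_))
    {
        static if (__traits(compiles, __traits(getMember, scope_, memberName)))
        {
            alias member = __traits(getMember, scope_, memberName);
            static if (is(member == struct) || is(member == class))
                tests ~= collectTests!member();
        }
    }

    return tests;
}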
May 01 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 2014-05-01 11:37, Dicebot wrote:

 You only need to make sure all modules are transitively imported from
 the initial one.
The solution for that would be RMInfo [1], like RTInfo but for modules instead of types.

[1] https://github.com/D-Programming-Language/dmd/pull/2271

-- 
/Jacob Carlborg
May 01 2014
parent reply Johannes Pfau <nospam example.com> writes:
Am Thu, 01 May 2014 13:24:07 +0200
schrieb Jacob Carlborg <doob me.com>:

 On 2014-05-01 11:37, Dicebot wrote:
 
 You only need to make sure all modules are transitively imported
 from initial one.
The solution for that would be RMInfo [1], like RTInfo but for modules instead of types. [1] https://github.com/D-Programming-Language/dmd/pull/2271
solves the issue.

Now you don't have to import the modules anymore, which is a step forward, but let's say I want to use this to find all subclasses of a class.

Now I can inspect that code in CTFE, but how do I build a list of all subclasses? In the most generic way (with DLLs/shared libraries) this can only work if code can be executed at runtime, so we'd need a way to emit module constructors AFAICS. These should be able to avoid the usual constructor dependency rules though.
May 01 2014
parent Jacob Carlborg <doob me.com> writes:
On 2014-05-01 14:00, Johannes Pfau wrote:


 solves the issue.

 Now you don't have to import the modules anymore, which is a step
 forward, but let's say I want to use this to find all subclasses of a
 class.

 Now I can inspect that code in CTFE, but how do I build a list of all
 subclasses? In the most generic way (with DLLs/shared libraries) this
 can only work if code can be executed at runtime, so we'd need a way to
 emit module constructors AFAICS. These should be able to avoid the
 usual constructor dependency rules though.
RMInfo only helps with all the modules the compiler sees during a given compile run.

-- 
/Jacob Carlborg
May 01 2014
prev sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 1 May 2014 at 08:51:47 UTC, Johannes Pfau wrote:
 Am Wed, 30 Apr 2014 22:19:24 +0200
 schrieb Jacob Carlborg <doob me.com>:

 __traits(getUnitTests)
I've always wondered, but never asked: Doesn't __traits(getUnitTests) usage suffer from the same problem std.benchmark had? That in order to make it work for all modules you a) have to explicitly mention every module b) use module constructors, leading to module constructor dependency hell (afaik the main reason we don't have a std.benchmark now) ?
You only need to make sure all modules are transitively imported from the initial one. Everything else can be done via recursive reflection with __traits(allMembers); module constructors are not needed either.

One can also generate a special test entry module that imports all project modules, using the build system's help.
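
For example, the build step could emit something like this (module names hypothetical) - importing everything is all it takes to make the tests reachable:

module test_main;

// Generated by the build system: one import per project module.
import myapp.config;
import myapp.parser;
import myapp.util;

void main() {}  // compiled with -unittest, so the tests run before this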
May 01 2014
parent reply Johannes Pfau <nospam example.com> writes:
Am Thu, 01 May 2014 09:38:51 +0000
schrieb "Dicebot" <public dicebot.lv>:

 On Thursday, 1 May 2014 at 08:51:47 UTC, Johannes Pfau wrote:
 
 You only need to make sure all modules are transitively imported
 from the initial one. Everything else can be done via recursive
 reflection with __traits(allMembers); module constructors are not
 needed either.
 
 One can also generate a special test entry module that imports all
 project modules, using the build system's help.
Is there some Boost-licensed reflection code somewhere which shows how to do recursive reflection correctly? I know the basic idea, but stuff like dealing with protection of members would take some time to figure out.

Maybe someone familiar with the details could sketch up a quick std.test proof-of-concept which parses the following (with all necessary protection checks + detecting unittest blocks in classes/structs):

-----------
unittest
{
}

_unittest void namedTest()
{
}
----------

Andrei do you think having to explicitly import modules to be tested is an issue? Apart from this limitation such an approach sounds great to me.

There are quite some possibilities for how user frameworks could extend std.test, so it sounds like an interesting idea. (And std.benchmark could work in the same way)
May 01 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/1/14, 4:31 AM, Johannes Pfau wrote:
  Andrei do you think having to explicitly import modules to be tested
 is an issue?
Well it kinda is. All that's written on the package is "unittest", we should add no fine print to it. -- Andrei
May 01 2014
parent reply Johannes Pfau <nospam example.com> writes:
Am Thu, 01 May 2014 08:07:51 -0700
schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:

 On 5/1/14, 4:31 AM, Johannes Pfau wrote:
  Andrei do you think having to explicitly import modules to be
 tested is an issue?
Well it kinda is. All that's written on the package is "unittest", we should add no fine print to it. -- Andrei
It'd be possible to make it work transparently, but that's much more work. We might need to do that at some point, as it's also necessary for std.benchmark and similar code, but for now that's probably over-engineering.

Here's the revived pull request:
https://github.com/D-Programming-Language/dmd/pull/3518
https://github.com/D-Programming-Language/druntime/pull/782
May 01 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/1/14, 12:05 PM, Johannes Pfau wrote:
 Am Thu, 01 May 2014 08:07:51 -0700
 schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:

 On 5/1/14, 4:31 AM, Johannes Pfau wrote:
  Andrei do you think having to explicitly import modules to be
 tested is an issue?
Well it kinda is. All that's written on the package is "unittest", we should add no fine print to it. -- Andrei
 It'd be possible to make it work transparently, but that's much more
 work. We might need to do that at some point, as it's also necessary
 for std.benchmark and similar code, but for now that's probably
 over-engineering.

 Here's the revived pull request:
 https://github.com/D-Programming-Language/dmd/pull/3518
 https://github.com/D-Programming-Language/druntime/pull/782
I'm unclear what this work does even after having read the description. What does it require in addition to just sprinkling unittests around? -- Andrei
May 01 2014
parent reply Johannes Pfau <nospam example.com> writes:
Am Thu, 01 May 2014 12:26:07 -0700
schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:

 On 5/1/14, 12:05 PM, Johannes Pfau wrote:
 Here's the revived pull request:
 https://github.com/D-Programming-Language/dmd/pull/3518
 https://github.com/D-Programming-Language/druntime/pull/782
I'm unclear what this work does even after having read the description. What does it require in addition to just sprinkling unittests around? -- Andrei
Nothing. This just changes the way druntime internally handles the unit tests, but nothing changes for the user:

Right now we have one function per module, which calls all unit tests in that module, but the test runner does not have access to the individual unittest functions. Instead of exposing one function per module, this now exposes every single unittest function. So you can now run every test individually, or pass the function pointer to a different thread and run it there. Additionally this provides information about the source location of every unit test.

I thought the example is quite clear?

-------------
bool tester()
{
    import std.stdio;

    // iterate all modules
    foreach (info; ModuleInfo)
    {
        // iterate unit tests in modules
        foreach (test; info.unitTests)
        {
            // access unittest information
            writefln("ver=%s file=%s:%s disabled=%s func=%s",
                test.ver, test.file, test.line, test.disabled, test.func);
            // execute unittest
            test.func()();
        }
    }

    return true;
}

shared static this()
{
    Runtime.moduleUnitTester = &tester;
}
-------------

You customize the test runner just like you did before, by setting Runtime.moduleUnitTester in a module constructor. Now you still have to implement a custom test runner to run unittests in parallel, but that's trivial. We can of course add an implementation of this to druntime, but we can't use std.parallelism there.

What's a little more complicated is the versioning scheme, but that's an implementation detail. It ensures that we can change the exposed information (for example if we want to add a name field, or remove some field) without breaking anything.
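
For what it's worth, a rough sketch of that "trivial" parallel runner in user code (the field names follow the example above, i.e. the proposed API, not current druntime):

import core.runtime : Runtime;
import std.parallelism : parallel;

bool parallelTester()
{
    // Gather every individual unittest function first...
    void function()[] tests;
    foreach (info; ModuleInfo)
        foreach (test; info.unitTests)
            tests ~= test.func();

    // ...then execute them across the default task pool.
    foreach (test; tests.parallel)
        test();

    return true;
}

shared static this()
{
    Runtime.moduleUnitTester = &parallelTester;
}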
May 01 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/1/14, 12:47 PM, Johannes Pfau wrote:
 Am Thu, 01 May 2014 12:26:07 -0700
 schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:

 On 5/1/14, 12:05 PM, Johannes Pfau wrote:
 Here's the revived pull request:
 https://github.com/D-Programming-Language/dmd/pull/3518
 https://github.com/D-Programming-Language/druntime/pull/782
I'm unclear what this work does even after having read the description. What does it require in addition to just sprinkling unittests around? -- Andrei
Nothing. This just changes the way druntime internally handles the unit tests, but nothing changes for the user:
Great, thanks. Just making sure there's no unstated assumptions somewhere. Help with reviewing from the compiler and druntime folks would be appreciated! -- Andrei
May 01 2014
parent Johannes Pfau <nospam example.com> writes:
Am Thu, 01 May 2014 13:00:44 -0700
schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:

 Nothing. This just changes the way druntime internally handles the
 unit tests, but nothing changes for the user:
Great, thanks. Just making sure there's no unstated assumptions somewhere. Help with reviewing from the compiler and druntime folks would be appreciated! -- Andrei
I added a parallel test runner example: http://dpaste.dzfl.pl/69baabd83e68

Output:
--------
Test 2, Thread 7FE6F0EA4D00
Test 4, Thread 7FE6F0EA4E00
Test 7, Thread 7FE6F0EA4E00
Test 8, Thread 7FE6F0EA4E00
Test 9, Thread 7FE6F0EA4E00
Test 10, Thread 7FE6F0EA4E00
Test 1, Thread 7FE6F0EA4F00
Test 5, Thread 7FE6F0EA4B00
Test 3, Thread 7FE6F0EA4C00
Test 6, Thread 7FE6F0EA4D00
May 01 2014
prev sibling next sibling parent Andrej Mitrovic via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 4/30/14, Andrei Alexandrescu via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 This brings up the issue of naming unittests.
See also this ER where I discuss why I wanted this recently: https://issues.dlang.org/show_bug.cgi?id=12473
Apr 30 2014
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 30 Apr 2014 11:43:31 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 Hello,


 A coworker mentioned the idea that unittests could be run in parallel  
 (using e.g. a thread pool). I've rigged things to run in parallel  
 unittests across modules, and that works well. However, this is too  
 coarse-grained - it would be great if each unittest could be pooled  
 across the thread pool. That's more difficult to implement.
I am not sure, but are unit-test blocks one function each, or one function per module? If the latter, that would have to be changed.
 This brings up the issue of naming unittests. It's becoming increasingly  
 obvious that anonymous unittests don't quite scale - coworkers are  
 increasingly talking about "the unittest at line 2035 is failing" and  
 such. With unittests executing in multiple threads and issuing e.g.  
 logging output, this is only likely to become more exacerbated. We've  
 resisted named unittests but I think there's enough evidence to make the  
 change.
I would note this enhancement, which Walter agreed should be done at DConf '13 ;)

https://issues.dlang.org/show_bug.cgi?id=10023

Jacob Carlborg has tried to make this work, but the PR has not been pulled yet (I think it needs some updating at least, and there were some unresolved questions IIRC).
 Last but not least, virtually nobody I know runs unittests and then  
 main. This is quickly becoming an idiom:

 version(unittest) void main() {}
 else void main()
 {
     ...
 }

 I think it's time to change that. We could do it the  
 non-backward-compatible way by redefining -unittest to instruct the  
 compiler to not run main. Or we could define another flag such as  
 -unittest-only and then deprecate the existing one.
The runtime can intercept this parameter. I would like the decision of whether to run main to be made at runtime. We need no compiler modifications to effect this.
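
A rough sketch of what I mean, assuming (please verify) that returning false from the module unit tester makes druntime skip main, as it does for failing tests:

import core.runtime : Runtime;

shared static this()
{
    Runtime.moduleUnitTester = function bool()
    {
        import std.algorithm : canFind;

        // Decide at runtime, from the program's own arguments.
        bool runMain = !Runtime.args.canFind("--unittest-only");

        foreach (info; ModuleInfo)
            if (auto fp = info.unitTest)
                fp();  // run this module's tests

        return runMain;  // false: stop after the tests
    };
}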
 Thoughts? Would anyone want to work on such stuff?
I can probably take a look at changing the unittests to avoid main without a runtime parameter. I have a good grasp on how the pre-main runtime works, having rewritten the module constructor algorithm a while back.

I am hesitant to run all unit tests in parallel without an opt-out mechanism. The above enhancement being implemented would give us some ways to play around, though.

-Steve
Apr 30 2014
prev sibling next sibling parent reply Xavier Bigand <flamaros.xavier gmail.com> writes:
On 30/04/2014 17:43, Andrei Alexandrescu wrote:
 Hello,


 A coworker mentioned the idea that unittests could be run in parallel
 (using e.g. a thread pool). I've rigged things to run in parallel
 unittests across modules, and that works well. However, this is too
 coarse-grained - it would be great if each unittest could be pooled
 across the thread pool. That's more difficult to implement.
I think it's a great idea, mainly for TDD. I have experimented with it in Java, and when execution time grows, TDD rapidly loses its efficiency. Some Eclipse plug-ins are able to run tests in parallel, if I remember correctly.
 This brings up the issue of naming unittests. It's becoming increasingly
 obvious that anonymous unittests don't quite scale - coworkers are
 increasingly talking about "the unittest at line 2035 is failing" and
 such. With unittests executing in multiple threads and issuing e.g.
 logging output, this is only likely to become more exacerbated. We've
 resisted named unittests but I think there's enough evidence to make the
 change.
IMO naming is important for reporting tools (test status, benchmarks, ...). Unittests evolve with the rest of the code.
 Last but not least, virtually nobody I know runs unittests and then
 main. This is quickly becoming an idiom:

 version(unittest) void main() {}
 else void main()
 {
     ...
 }

 I think it's time to change that. We could do it the
 non-backward-compatible way by redefining -unittest to instruct the
 compiler to not run main. Or we could define another flag such as
 -unittest-only and then deprecate the existing one.

 Thoughts? Would anyone want to work on such stuff?


 Andrei
Apr 30 2014
parent Jeremy Powers via Digitalmars-d <digitalmars-d puremagic.com> writes:
 Last but not least, virtually nobody I know runs unittests and then
 main. This is quickly becoming an idiom:

 version(unittest) void main() {}
 else void main()
 {
     ...
 }

 I think it's time to change that.
The current system of running unit tests prior to main is, in my opinion, fundamentally broken. Logically, the unit tests are a build step - something you do after compile to ensure things are good. Tying them to running main means I cannot have a build that passes unit tests that is also a production build.

Granted, it is (as far as I know) impossible to actually compile a production version of code separately from the unittest code, and be able to run the one on the other. But it would be nice to move to something more in line with unittest-as-build-step, rather than -as-different-build.

On named tests, I heartily support this. Especially if it comes with the ability to selectively run one test - such is incredibly useful for large projects, to quickly iterate on broken bits.
May 01 2014
prev sibling next sibling parent "Jesse Phillips" <Jesse.K.Phillips+D gmail.com> writes:
On Wednesday, 30 April 2014 at 15:43:35 UTC, Andrei Alexandrescu 
wrote:
 This brings up the issue of naming unittests. It's becoming 
 increasingly obvious that anonymous unittests don't quite scale
A message structured like this would be awesome:

Unittest Failed foo.d:345
Providing null input throws exception
 Last but not least, virtually nobody I know runs unittests and 
 then main. This is quickly becoming an idiom:

 version(unittest) void main() {}
 else void main()
 {
    ...
 }

 I think it's time to change that. We could do it the 
 non-backward-compatible way by redefining -unittest to instruct 
 the compiler to not run main. Or we could define another flag 
 such as -unittest-only and then deprecate the existing one.
I would like to see -unittest redefined.
Apr 30 2014
prev sibling next sibling parent "NVolcz" <volcz kth.se> writes:
The D unittest feature has been a mixed bag from the beginning 
for me.
When a codebase starts to consider parallelizing the unittests,
it has in many cases become very expensive to make this
change. If order of execution were not guaranteed, this would force
coders to make a better long-term investment from the beginning.
A nice side effect from having undefined order is that 
programmers are forced to think about state. See 
http://googletesting.blogspot.se/2013/03/testing-on-toilet-testing-state-vs.html

Another link I would like to drop here which is only mildly
relevant. I wish that more developers became aware of the Tests
vs. Checks discussion.
http://www.satisfice.com/blog/archives/856
May 01 2014
prev sibling parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 30/04/2014 16:43, Andrei Alexandrescu wrote:
 Hello,


 A coworker mentioned the idea that unittests could be run in parallel
 (using e.g. a thread pool).
There has been a lot of disagreement in this discussion about whether "unittest" blocks should run in parallel or not. Not everyone is agreeing with Andrei's idea in the first place. I am another in such a position.

True, like Dicebot, Russel, and others mentioned, a Unit Test should be a procedure with no side-effects (or side-effects independent from the other Unit Tests), and as such, able to run in parallel. Otherwise they are an Integration Test.

But before we continue the discussion, we are missing a more basic assumption here: Do we want D to have a Unit-Testing facility, or a Testing facility?? In other words, do we want to be able to write automated tests that are Integration tests or just Unit tests? Because if we go with this option of making D unittest blocks run in parallel, we kill the option of them supporting Integration Tests. I don't think this is good.

Unit testing frameworks in other languages (JUnit for Java, for example) provide full support for Integration tests, despite the "Unit" in their names. This is good. I think Integration tests are much more common in "real-world" applications than people give credit for.

Personally I find the distinction between Unit tests and Integration tests not very useful in practice. It is accurate, but not very useful. In my mental model I don't make a distinction. I write a test that tests a component, or part of a component. The component might be a low-level component that depends on little or no other components - then I have a Unit test. Or it might be a higher-level component that depends on other components (which might need to be mocked in the test) - then I have an Integration test. But they are not different enough that a different framework should be necessary to write each of them.

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros
May 06 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Tuesday, 6 May 2014 at 15:54:30 UTC, Bruno Medeiros wrote:
 But before we continue the discussion, we are missing a more 
 basic assumption here: Do we want D to have a Unit-Testing 
 facility, or a Testing facility?? In other words, do we want to 
 be able to write automated tests that are Integration tests or 
 just Unit tests? Because if we go with this option of making D 
 unittest blocks run in parallel, we kill the option of them 
 supporting Integration Tests. I don't think this is good.
These days I often find myself leaning towards writing mostly integration tests, with only a limited amount of unit tests. But writing a good integration test is very different from writing a good unit test and usually implies quite a lot of boilerplate. The truth is D does not currently have any higher-level facility at all. It has an _awesome_ unit test facility which often gets misused for writing sloppy integration tests.

I'd love to keep the existing facility as-is and think about providing good library augmentation for any sort of higher-level approach.

A key property of D unit tests is how easy it is to add those inline to an existing project - unconstrained simplicity. In a perfect world those are closer to contracts than to other types of tests. This provides a good basic sanity check for all modules you recursively import when run via `rdmd -unittest`.

A good integration test is very different. It has certain assumptions about initial system state and notifies the user if those are not met. It can take ages to run and can test real-world situations. It is not supposed to be run implicitly and frequently. You don't want to keep your integration tests inline because of the amount of boilerplate code those usually need.

I see no good in trying to unite those very different beasts, and my experience with existing test libraries has been very unpleasant in that regard.
May 06 2014
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-05-06 19:58, Dicebot wrote:

 These days I often find myself leaning towards writing mostly
 integration tests, with only a limited amount of unit tests. But
 writing a good integration test is very different from writing a good
 unit test and usually implies quite a lot of boilerplate. The truth is
 D does not currently have any higher-level facility at all. It has an
 _awesome_ unit test facility which often gets misused for writing
 sloppy integration tests.

 I'd love to keep the existing facility as-is and think about providing
 good library augmentation for any sort of higher-level approach.

 A key property of D unit tests is how easy it is to add those inline
 to an existing project - unconstrained simplicity. In a perfect world
 those are closer to contracts than to other types of tests. This
 provides a good basic sanity check for all modules you recursively
 import when run via `rdmd -unittest`.

 A good integration test is very different. It has certain assumptions
 about initial system state and notifies the user if those are not met.
 It can take ages to run and can test real-world situations. It is not
 supposed to be run implicitly and frequently. You don't want to keep
 your integration tests inline because of the amount of boilerplate
 code those usually need.

 I see no good in trying to unite those very different beasts, and my
 experience with existing test libraries has been very unpleasant in
 that regard.
I don't see why it would be bad to use "unittest" for integration tests, except for the misguided name. It's perfectly fine to place "unittest" in completely different modules and packages. They don't need to be placed inline. I see it as a good place to put code for testing. Then I don't have to come up with awkward names for regular functions. It's also a good place since D doesn't allow statements and expressions at module level. Sure, there are module constructors, but I don't think that's any better.

-- 
/Jacob Carlborg
May 06 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Tuesday, 6 May 2014 at 18:28:27 UTC, Jacob Carlborg wrote:
 I don't see why it would be bad to use "unittest" for integration
 tests, except for the misguided name. It's perfectly fine to place
 "unittest" in completely different modules and packages. They
 don't need to be placed inline.
Well, I am actually guilty of doing exactly that because it allows me to merge coverage analysis files :) But it is not an optimal situation once you consider something like parallel tests, as the compiler does not know which of those blocks are "true" unit tests. It also makes it difficult to define a common "idiomatic" way to organize testing of D projects.

I'd also love to see a test library that helps with defining integration test structure (named tests grouped by common environment requirements, doing automatic cleanup upon finishing the group/block) without resorting to custom classes AND without interfering with the simplicity of existing unittests.

I think it all can be done by keeping the existing single "unittest" keyword but using various annotations. Then integration tests can be done as a separate application that uses an imaginary Phobos integration test library to interpret those annotations and provide a more complex test structure. And running plain `rdmd -unittest` on the actual application modules will still continue to do the same good old thing.
May 06 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 06/05/14 20:39, Dicebot wrote:
 On Tuesday, 6 May 2014 at 18:28:27 UTC, Jacob Carlborg wrote:
 I don't see why it would be bad to use "unittest" for integration tests,
 except for the misguided name. It's perfectly fine to place "unittest" in
 completely different modules and packages. They don't need to be
 placed inline.
 Well, I am actually guilty of doing exactly that because it allows me
 to merge coverage analysis files :) But it is not an optimal situation
 once you consider something like parallel tests, as the compiler does
 not know which of those blocks are "true" unit tests. It also makes it
 difficult to define a common "idiomatic" way to organize testing of D
 projects.

 I'd also love to see a test library that helps with defining
 integration test structure (named tests grouped by common environment
 requirements, doing automatic cleanup upon finishing the group/block)
 without resorting to custom classes AND without interfering with the
 simplicity of existing unittests.

 I think it all can be done by keeping the existing single "unittest"
 keyword but using various annotations. Then integration tests can be
 done as a separate application that uses an imaginary Phobos
 integration test library to interpret those annotations and provide a
 more complex test structure. And running plain `rdmd -unittest` on the
 actual application modules will still continue to do the same good old
 thing.
So you're saying to use the "unittest" keyword but with a UDA? Something I already do, but for unit tests.

Well, my idea for a testing framework would work both for unit tests and other, higher levels of test:

@describe("toMsec")
{
    @it("returns the time in milliseconds") unittest
    {
        assert(true);
    }
}

-- 
/Jacob Carlborg
May 06 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 7 May 2014 at 06:34:44 UTC, Jacob Carlborg wrote:
 So you're saying to use the "unittest" keyword but with a UDA?
I think this is the most reasonable compromise that does not harm the existing system.
 Something I already do, but for unit tests. Well my idea for a 
 testing framework would work both for unit tests and other, 
 higher levels of test.

  @describe("toMsec")
 {
      @it("returns the time in milliseconds") unittest
     {
         assert(true);
     }
 }
Which is exactly why I'd like to defer the exact annotation to a library solution - the exact requirements for such a framework are very different. I'd want to see something like this instead:

@name("Network test 2")
@requires("Network test 1")
@cleanup!removeTemporaries
unittest
{
    // do stuff
}

Have never liked that fancy description syntax of "smart" testing frameworks.
May 07 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 07/05/14 16:05, Dicebot wrote:

 Have never liked that fancy description syntax of "smart" testing
 frameworks.
I hate plain unit test blocks with just a bunch of asserts. It's impossible to know what's being tested.

-- 
/Jacob Carlborg
May 07 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, May 07, 2014 at 04:55:25PM +0200, Jacob Carlborg via Digitalmars-d
wrote:
 On 07/05/14 16:05, Dicebot wrote:
 
Have never liked that fancy description syntax of "smart" testing
frameworks.
 I hate plain unit test blocks with just a bunch of asserts. It's
 impossible to know what's being tested.
[...]

Huh? Isn't that what unittest blocks are about? To verify that certain assumed conditions are actually true at runtime?

Verbal descriptions can be put in comments, if need be, can't they?


T

-- 
The right half of the brain controls the left half of the body. This means that only left-handed people are in their right mind. -- Manoj Srivastava
May 07 2014
next sibling parent reply "Atila Neves" <atila.neves gmail.com> writes:
On Wednesday, 7 May 2014 at 15:07:20 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 On Wed, May 07, 2014 at 04:55:25PM +0200, Jacob Carlborg via 
 Digitalmars-d wrote:
 On 07/05/14 16:05, Dicebot wrote:
 
Have never liked that fancy description syntax of "smart" 
testing
frameworks.
 I hate plain unit test blocks with just a bunch of asserts. It's
 impossible to know what's being tested.
 [...]

 Huh? Isn't that what unittest blocks are about? To verify that certain
 assumed conditions are actually true at runtime?

 Verbal descriptions can be put in comments, if need be, can't they?
They can. But those descriptions are not included in failing test output. What I think Jacob might be getting at as well is that assertEquals or the more RSpec-like "foo.should equal 3" is more readable than the raw asserts.

The context matters. In some frameworks that means using test names like testThatWhenIDoThisThenTheOtherThingActuallyHappens (which we'd get if we can have named unit tests), RSpec tries to be more readable but in the end it's all about:

1) Documenting what the code is supposed to do
2) Knowing what test failed and what it was testing

Atila
May 07 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 7 May 2014 at 16:09:28 UTC, Atila Neves wrote:
 They can. But those descriptions are not included in failing 
 test output. What I think Jacob might be getting at as well is 
 that assertEquals or the more RSpec-like "foo.should equal 3" 
 is more readable than the raw asserts.

 The context matters. In some frameworks that means using test 
 names like testThatWhenIDoThisThenTheOtherThingActuallyHappens 
 (which we'd get if we can have named unit tests), RSpec tries 
 to be more readable but in the end it's all about:

 1) Documenting what the code is supposed to do
 2) Knowing what test failed and what it was testing
You don't need artificial pseudo-syntax for that. assert!("==") plus named tests is good enough to get the context, and for detailed investigation you need the file and line number anyway. Stuff like RSpec is the extreme opposite of KISS.
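
A minimal sketch of that assert!("==") idea (the helper below is hypothetical, not an existing Phobos API): it reports both operands and the call site, unlike a plain assert.

import std.string : format;

void assertOp(string op, L, R)(L lhs, R rhs,
                               string file = __FILE__, size_t line = __LINE__)
{
    // Build the comparison from the template argument, e.g. lhs == rhs.
    if (!mixin("lhs " ~ op ~ " rhs"))
        throw new Exception(format("%s(%s): expected lhs %s rhs, got %s vs %s",
                                   file, line, op, lhs, rhs));
}

unittest
{
    assertOp!"=="(1 + 1, 2);     // passes silently
    // assertOp!"=="(2 + 2, 5);  // would report file, line and both values
}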
May 08 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 2014-05-08 14:56, Dicebot wrote:

 You don't need artificial pseudo-syntax for that.
 assert!("==") plus named tests is good enough to get the context, and for
 detailed investigation you need the file and line number anyway. Stuff like
 RSpec is the extreme opposite of KISS.
RSpec uses a syntax that makes it easier to read a test, to understand what it actually tests. I mean, what the h*ll does this unit test test:

https://github.com/D-Programming-Language/phobos/blob/master/std/numeric.d#L995

I'm mostly interested in the describe/it functionality, not the fancy asserts, although I don't mind them either.

@describe("foo")
{
    @it("should do something useful") unittest
    {
    }
}

It's not so much different from what you suggested with named unit tests.

-- 
/Jacob Carlborg
May 08 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 8 May 2014 at 18:54:30 UTC, Jacob Carlborg wrote:
 I mean, what the h*ll does this unit test test:

 https://github.com/D-Programming-Language/phobos/blob/master/std/numeric.d#L995
It is explained in the comments there. And it won't become simpler if you add some fancy syntax. It looks complicated because it _is_ complicated, not because the syntax is bad.
  @describe("foo")
This is redundant as D unittest blocks are associated with symbols they are placed next to.
 {
      @it("should do something useful") unittest {
This is essentially @name with an overly smart name and weird attribute placement.
 It's not so much different from what you suggested with named
 unit tests.
It introduces a bunch of artificial annotations for something that can be taken care of by a single attribute as a side effect. Not KISS.
May 09 2014
parent Jacob Carlborg <doob me.com> writes:
On 2014-05-09 13:57, Dicebot wrote:

 This is redundant as D unittest blocks are associated with symbols they
 are placed next to.
I prefer to keep my tests in a separate directory.
 It introduces a bunch of artificial annotations for something that can
 be taken care of by a single attribute as a side effect. Not KISS.
I just don't agree.

It's a bit hard to do in D, but in the Ruby version, for each "describe" block an anonymous class is created. Nested blocks inherit from the outer blocks. The "it" blocks are evaluated inside an instance of the closest "describe" block. This makes it very nice to set up data for the tests. You can have helper methods in the "describe" blocks, override methods in the outer blocks, and so on - very convenient.

As a bonus, you can run the tests with a special formatter. This will print all strings passed to the "describe" and "it" blocks in a structured way. Suddenly you can generate documentation of how your system is supposed to work.

-- 
/Jacob Carlborg
May 10 2014
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2014-05-07 17:05, H. S. Teoh via Digitalmars-d wrote:

 Huh? Isn't that what unittest blocks are about? To verify that certain
 assumed conditions are actually true at runtime?

 Verbal descriptions can be put in comments, if need be, can't they?
What Atila said.

-- 
/Jacob Carlborg
May 07 2014
prev sibling parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 06/05/2014 18:58, Dicebot wrote:
 On Tuesday, 6 May 2014 at 15:54:30 UTC, Bruno Medeiros wrote:
 But before we continue the discussion, we are missing a more basic
 assumption here: Do we want D to have a Unit-Testing facility, or a
 Testing facility?? In other words, do we want to be able to write
 automated tests that are Integration tests or just Unit tests? Because
 if we go with this option of making D unittest blocks run in parallel,
 we kill the option of them supporting Integration Tests. I don't think
 this is good.
These days I often find myself leaning towards writing mostly integration tests, with only a limited number of unit tests. But writing a good integration test is very different from writing a good unit test, and it usually implies quite a lot of boilerplate.

The truth is that D does not currently have any higher-level testing facility at all. It has an _awesome_ unit test facility which often gets misused for writing sloppy integration tests.
Indeed: I also find myself writing more integration tests than unit tests, at least by what I consider an integration test to be (in some cases the distinction between an integration and a unit test may not be very easy or clear, IMO).
 I'd love to keep existing facility as-is and think about providing good
 library augmentation for any sort of higher level approach.
The unittest block is enough right now to support integration tests. To support common test fixture setup (akin to Before and After in xUnit), perhaps some syntactic sugar could be added, although with D's language facilities (meta-programming, functional constructs, scope statements, etc.) we can already do pretty well with what exists.
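For instance, here is a minimal sketch of Before/After-style fixtures built from just a helper function and a scope statement (all names invented):

import std.file : exists, mkdirRecurse, rmdirRecurse, tempDir;
import std.path : buildPath;

// Runs a test body between setup and guaranteed teardown.
void withScratchDir(void delegate(string dir) test)
{
    auto dir = buildPath(tempDir, "itest-scratch");
    mkdirRecurse(dir);               // "Before": create the fixture
    scope (exit) rmdirRecurse(dir);  // "After": always clean up
    test(dir);
}

unittest
{
    withScratchDir((dir) {
        assert(dir.exists);
        // ... exercise file-system code against `dir` here ...
    });
}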
 Good integration test is very different. It has certain assumptions
 about initial system state and notifies user if those are not met. It
 can take ages to run and can test real-world situations. It is not
 supposed to be run implicitly and frequently. You don't want to keep
 your integration tests inline because of amount of boilerplate code
 those usually need.
They are somewhat different; I wouldn't say very different. I don't agree that integration tests usually take ages to run. Some of them can run fairly fast too, and are executed as frequently as unit tests. In DDT, for example, I always run unit tests at the same time as integration tests.

As I said, I don't find it useful to have a strict distinction between those. Rather, if I want to run a subset of tests, what I usually filter on is running only the tests of a certain plugin (DDT has 3 plugins), Java package, or Java class. Additionally, the parser tests can be run in what I call "Lite" mode, which, instead of running the full test suite, skips some of the heavyweight, parameterized tests to make the suite run faster (see the sketch below). Most of the cases are generated from templates; others are blind mass parse tests, such as parsing all source modules in Phobos. But what "Lite" mode cuts is not integration tests, but rather the input sets of parameterized tests.

As for keeping integration tests inline or not: yeah, you are likely to prefer putting them in a separate file. That doesn't mean we need a language construct other than the unittest block for that.
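A sketch of what such a "Lite" switch can look like in D (the version identifier, the stand-in parser, and the paths are invented):

version (FullSuite) enum heavy = true;
else                enum heavy = false;

bool parseOk(string source) { return true; }  // stand-in for a real parser

unittest
{
    // Cheap, template-generated-style cases always run.
    foreach (src; ["", "int x;", "void f() {}"])
        assert(parseOk(src));

    // Heavyweight parameterized input sets only run in the full suite,
    // e.g. blind mass parsing of every module in a source tree.
    static if (heavy)
    {
        import std.file : dirEntries, readText, SpanMode;
        foreach (entry; dirEntries("phobos/std", "*.d", SpanMode.depth))
            assert(parseOk(readText(entry.name)));
    }
}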
 I see no good in trying to unite those very different beasts and my
 experience with existing test libraries has been very unpleasant in that
 regard.
What test libraries/frameworks have you used?

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros
May 07 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 7 May 2014 at 14:34:41 UTC, Bruno Medeiros wrote:
 On 06/05/2014 18:58, Dicebot wrote:
 I see no good in trying to unite those very different beasts 
 and my
 experience with existing test libraries has been very 
 unpleasant in that
 regard.
What test libraries/frameworks have you used?
I have C/C++ origins, so it was mostly stuff like CppUnit, xUnit, and the Boost one, as far as I can remember.
May 08 2014
parent Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 2014-05-08 at 13:03 +0000, Dicebot via Digitalmars-d wrote:
[…]
 I have C/C++ origins, so it was mostly stuff like CppUnit, xUnit, 
 and the Boost one, as far as I can remember.
The current C++ test framework front runner is probably Phil Nash's Catch:

https://github.com/philsquared/Catch

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
May 08 2014