digitalmars.D - unit tests with name and verbose report
- Danesh Daroui (15/15) Aug 04 2022 Hi all,
- Paul Backus (5/11) Aug 04 2022 There are alternative test runners that support these features.
- IGotD- (9/15) Aug 04 2022 I have asked for this as well and it is a simple addition that
- jmh530 (2/8) Aug 04 2022 Just use version(newFeature) unittest {}
- jfondren (44/57) Aug 04 2022 Here's a trivial, complete script:
- Danesh Daroui (17/77) Aug 05 2022 Thank you all for your answers.
- jmh530 (14/28) Aug 05 2022 If you don't have a main function, then you have to compile with
Hi all,

These questions may be redundant. I searched the forum but didn't find any final conclusion, so I am asking them here. I have two questions:

1. Would it be possible to have "named" unit tests? Right now only anonymous unit tests are apparently supported in D, and when the tests are executed no detailed information is shown. I would like to see how many tests were executed, how many passed, how many failed, and the full names of the tests, both passed and failed.

2. Does D support (natively) AI and machine learning and reasoning techniques? I mean something like backtracking, triple stores, managing a knowledge base, etc.?

Thanks,
Dan
Aug 04 2022
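For context on the question above, here is a minimal sketch of D's built-in unittest blocks as they stand today; the module and function names are made up for illustration. Compiled with `-unittest`, the blocks run before `main`, and the default runner reports failures plus a per-module pass count, not per-test names.

```d
// example.d -- hypothetical module showing D's built-in, anonymous unittest
// blocks; build and run with: dmd -unittest -run example.d
module example;

int square(int x) { return x * x; }

unittest
{
    // Anonymous: there is no name to report, and the default runner prints
    // nothing per test on success, only a module-level summary.
    assert(square(3) == 9);
}

void main() {}
```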
On Thursday, 4 August 2022 at 17:08:39 UTC, Danesh Daroui wrote:

> 1. Would it be possible to have "named" unit tests? Right now only anonymous unit tests are apparently supported in D, and when the tests are executed no detailed information is shown. I would like to see how many tests were executed, how many passed, how many failed, and the full names of the tests, both passed and failed.

There are alternative test runners that support these features. The two most popular are [`unit-threaded`][1] and [`silly`][2].

[1]: https://code.dlang.org/packages/unit-threaded
[2]: https://code.dlang.org/packages/silly
Aug 04 2022
On Thursday, 4 August 2022 at 17:08:39 UTC, Danesh Daroui wrote:

> 1. Would it be possible to have "named" unit tests? [...]

I have asked for this as well, and it is a simple addition that doesn't break the syntax. Why is it important? Often you work on one isolated unittest and you just want to run that one, for speed and less output. Often you have several unittests in one file. I'm puzzled why the maintainers didn't want this early on, as it would speed up development.
Aug 04 2022
On Thursday, 4 August 2022 at 19:49:45 UTC, IGotD- wrote:

> [snip] Often you work on one isolated unittest and you just want to run that one, for speed and less output. Often you have several unittests in one file. I'm puzzled why the maintainers didn't want this early on, as it would speed up development.

Just use `version(newFeature) unittest {}`
Aug 04 2022
On Thursday, 4 August 2022 at 20:01:00 UTC, jmh530 wrote:

> Just use `version(newFeature) unittest {}`

Even if that works, it is not an excuse for not implementing named unittests. This is one of the things I dislike about the D project: low-hanging fruit and obviously helpful features that never get implemented.
Aug 05 2022
On Friday, 5 August 2022 at 16:01:26 UTC, IGotD- wrote:

> On Thursday, 4 August 2022 at 20:01:00 UTC, jmh530 wrote:
>> Just use `version(newFeature) unittest {}`
>
> Even if that works, it is not an excuse for not implementing named unittests.

What I discussed works, and I use it all the time. For instance, here:

https://github.com/libmir/mir-stat/blob/f61a82f741243c6054ad98e74a35a205b001931a/source/mir/stat/distribution/bernoulli.d#L36

And here is an example of the dub.sdl:

https://github.com/libmir/mir-stat/blob/f61a82f741243c6054ad98e74a35a205b001931a/dub.sdl#L10

You'll notice that I've commented out the line `//versions "mir_stat_test" "mir_stat_test_fp"`, where "mir_stat_test_fp" takes longer to run. When I'm developing new features, I just create a new "mir_stat_test_newFeature" version and comment out the "mir_stat_test" one. As you said, this lets me run these new UTs without running the rest of them. That's basically the feature that you want to use named unittests for. Before I commit, I replace "mir_stat_test_newFeature" with "mir_stat_test" and make sure tests pass for everything.

> This is one of the things I dislike about the D project: low-hanging fruit and obviously helpful features that never get implemented.

There's something to be said for orthogonal features...
Aug 05 2022
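To illustrate the pattern described in the post above, here is a minimal sketch; the module and the version identifier `newFeature` are hypothetical stand-ins for the mir_stat_test* identifiers in the linked dub.sdl.

```d
// feature_tests.d -- hypothetical module showing version-gated unittests.
module feature_tests;

int twice(int x) { return 2 * x; }

// Always compiled when -unittest is passed.
unittest
{
    assert(twice(2) == 4);
}

// Compiled only when the `newFeature` version identifier is also set, e.g.
//     dmd -unittest -version=newFeature -main -run feature_tests.d
// or via a `versions "newFeature"` directive in dub.sdl.
version (newFeature) unittest
{
    assert(twice(21) == 42);
}
```

Dropping `-version=newFeature` (or commenting out the corresponding `versions` line in the dub.sdl) compiles only the first block, which gives the "run just this subset of tests" workflow described above.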
On Thursday, 4 August 2022 at 17:08:39 UTC, Danesh Daroui wrote:

> 1. Would it be possible to have "named" unit tests? [...] I would like to see how many tests were executed, how many passed, how many failed, and the full names of the tests, both passed and failed.

Here's a trivial, complete script:

```d
#!/usr/bin/env dub
/++ dub.sdl:
dflags "-preview=shortenedMethods"
configuration "release" {
    targetType "executable"
}
configuration "unittest" {
    targetType "library"
    dependency "silly" version="~>1.1.1"
}
+/
int factorial(int n) => n <= 1 ? 1 : n * factorial(n - 1);

@("!5") unittest {
    assert(factorial(5) == 120);
}

@("!0 and !1") unittest {
    assert(factorial(0) == 1);
    assert(factorial(1) == 1);
}

version (unittest) {
} else {
    void main(string[] args) {
        import std.conv : to;
        import std.stdio : writeln;

        writeln(args[1].to!int.factorial);
    }
}
```

Usage:

```
$ ./fact.d 10
3628800
$ dub -q test --single fact.d
✓ fact !5
✓ fact !0 and !1

Summary: 2 passed, 0 failed in 0 ms
```

More elaborate unit testing with custom test runners is actually very nice in D, but it's slightly more work to set up.

> 2. Does D support (natively) AI and machine learning and reasoning techniques? I mean something like backtracking, triple stores, managing a knowledge base, etc.?

I'd start looking for that here: https://code.dlang.org/packages/mir
Aug 04 2022
On Thursday, 4 August 2022 at 20:18:08 UTC, jfondren wrote:

> Here's a trivial, complete script: [...] More elaborate unit testing with custom test runners is actually very nice in D, but it's slightly more work to set up.
>
> [...]
>
> I'd start looking for that here: https://code.dlang.org/packages/mir

Thank you all for your answers.

The unittest with version(unittest) didn't work for me; I got a link error from the compiler since it didn't find the main() function. Frankly, I didn't like having to use a macro (version(X) is a macro, right?) just to get such a simple feature. I think a report that shows which test is running and whether it passed or failed is an essential feature, and I am surprised it is not included in the compiler yet. To me, anonymous unittests are rather a "show off" of the language! :)

The "mir" package is a good one, but I was looking for machine reasoning algorithms with which you can implement an expert system based on rules and facts in a knowledge base. It seems that it is not implemented. Thanks. :)

Regards,
Dan
Aug 05 2022
On Friday, 5 August 2022 at 12:16:25 UTC, Danesh Daroui wrote:

> [...] The unittest with version(unittest) didn't work for me; I got a link error from the compiler since it didn't find the main() function.

If you don't have a main function, then you have to compile with `--main`. I don't use `version(unittest)` unless I really, really have to.

> Frankly, I didn't like having to use a macro (version(X) is a macro, right?) just to get such a simple feature. I think a report that shows which test is running and whether it passed or failed is an essential feature, and I am surprised it is not included in the compiler yet. To me, anonymous unittests are rather a "show off" of the language! :)

It's conditional compilation, not a macro. The version(X) approach also requires you to specify somewhere that the version is enabled. I use the dub.sdl file, but you can do it from the command line too.

> The "mir" package is a good one, but I was looking for machine reasoning algorithms with which you can implement an expert system based on rules and facts in a knowledge base. It seems that it is not implemented. Thanks. :)

There are some tools built on `mir`, but the ecosystem is not as well developed as in some other languages. Shigeki Karita [1] has done some work here, like `grain` or `tfd`. It's hard with less manpower. There are also tools to call functions from other languages, like Python or R, and you can also interface with C.

[1] https://github.com/ShigekiKarita
Aug 05 2022
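As a concrete illustration of the link-error/`--main` exchange above: a library-style module with no main() can still have its tests built and run by letting the compiler insert an empty entry point. The module below is a hypothetical example; `-main` is dmd's spelling of the flag (rdmd spells it `--main`).

```d
// no_main.d -- hypothetical module with unittests but no main().
// Building it with tests enabled normally fails at link time for lack of an
// entry point; dmd's -main switch adds an empty main so the tests can run:
//
//     dmd -unittest -main -run no_main.d
//
module no_main;

bool isEven(int x) { return (x & 1) == 0; }

unittest
{
    assert(isEven(4));
    assert(!isEven(7));
}
```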