
digitalmars.D - Examples block

Engine Machine <EM EM.com> writes:
We have unittest; what about an examples block?

Examples would be self-contained short programs: each block acts 
as a "main" function. One could run all the examples and print 
all their output consecutively. It would also allow for more 
robust testing, since it adds another layer.

It would provide better examples for the docs, too. Instead of 
using assert to check things, we could see a real program in 
action. The output of each example could easily be generated and 
appended to the end of the code.
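Rough sketch of what I have in mind (hypothetical syntax, since 
examples is not an actual keyword today):

examples
{
    // each block would be compiled as its own "main" and run by the tool
    import std.stdio;
    writeln("sum = ", 2 + 3);
}
// the doc generator could then append the captured output:
// Output:
// sum = 5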

Seems like a win-win! (Maybe examples is not the best keyword, 
but that is a side issue.)
Aug 20 2016
Dicebot <public dicebot.lv> writes:
On Saturday, 20 August 2016 at 20:39:13 UTC, Engine Machine wrote:
 We have unittest; what about an examples block?
https://dlang.org/spec/unittest.html#documented-unittests
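That is, a unittest with a /// doc comment placed right after a 
documented declaration is rendered as an Example section in the 
generated docs (and still runs under -unittest):

/// Returns x squared.
int square(int x) { return x * x; }

///
unittest
{
    // appears under square's documentation as a runnable Example
    assert(square(3) == 9);
}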
Aug 20 2016
Solomon E <default avatar.org> writes:
On Saturday, 20 August 2016 at 20:39:13 UTC, Engine Machine wrote:
 We have unittest; what about an examples block?

 Examples would be self-contained short programs: each block 
 acts as a "main" function. One could run all the examples and 
 print all their output consecutively. It would also allow for 
 more robust testing, since it adds another layer.

 It would provide better examples for the docs, too. Instead of 
 using assert to check things, we could see a real program in 
 action. The output of each example could easily be generated 
 and appended to the end of the code.

 Seems like a win-win! (Maybe examples is not the best keyword, 
 but that is a side issue.)
It seems like there could be a library function that uses 
compile-time reflection to collect all the functions in a module 
whose names start with "maintest" and calls each of them in a 
try block. The catch block would just print the error messages 
and incorrect return codes to stderr and count the total 
failures; finally, the number of tests run would be counted to 
return a success ratio. That would be trivial to write in a 
language with runtime reflection, like Python. Probably someone 
who knows D well could write it for D in a few minutes. It's too 
hot here (in the Seattle area) right now for me, with my limited 
D knowledge, to feel like trying it instead of kibitzing.

The point would be that when writing something that has one 
development-stage 'main', the other mains that could be used 
instead, or are planned to be used, or are just maintests, would 
be live all the time (as long as the function that calls them as 
tests is included in the real main or, as it probably should be, 
in a non-release version block or in a unittest block).

A well-featured interface would allow calling maintests with 
simulated command-line args specific to each one, or with a list 
of lists of strings crossed with a list of functions, args[][] 
times maintest*[], for more thorough testing. I've used a 
feature in NetBeans for Java (hobby project, not a pro dev here) 
that calls main for a configured build with stored args, but it 
required manual GUI interaction to switch between the builds.

Another good feature would be letting a maintest "success" be a 
user-specified return code for certain args, because when 
writing something that's supposed to return specific error 
codes, that functionality should be tested.

Hypothetical usage code:

unittest
{
    auto mainResults = Maintests([1: [["-x", "-y"], ["0"]],
                                  2: [["-x", "-A"], ["3"]]]);
    // -x and -A are incompatible switches, so return 3 as an arbitrary error code
    assert(mainResults.fails == 1
        && mainResults.tests["X"][2].success == false);
}

int maintestX(string[] args)
{
    // TODO
    return 0; // it's going to fail the second test, on purpose
}

int maintestY(string[] args)
{
    // TODO
    return 3 * (args[2] == "-A"); // laziest way to pass the current tests
}

Result: 4 tests run with each unittested release build. Doing 
that otherwise would require two additional versions of the 
module containing main to be compiled and run with specific args 
two times each, with the correct return values differing for 
extra difficulty, or else require

assert(maintestX(["-x", "-y"]) == 0);

etc. to be written out four times with variations. The number of 
variations to write out manually would grow with the number of 
test cases times the number of variations of main, and it still 
wouldn't ensure that every function that looks like a "maintest" 
is tested.

// ? void maintestZ() { } // would fail a test that specifies any return value other than 0
// ? int maintest() { throw new Exception("no catch"); } // would always fail

This might be generalized to

auto results = TestPattern!("maintest*", int[string[]])([["-a"]: 0]);

which might have some more general uses.

[I wrote the above, then I felt like, nah, I don't want to post 
things that sound like asking other people to do more work. Now 
that I've done something on it myself, I'm posting the above for 
documentation.]
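P.S. A rough sketch of the compile-time-reflection part, to the 
extent I can guess at it (runMaintests, the module name, and the 
zero-means-success policy are all made up here):

module runner;

import std.algorithm.searching : startsWith;
import std.stdio : stderr;

int maintestX(string[] args) { return 0; }
int maintestY(string[] args) { return args[$ - 1] == "-A" ? 3 : 0; }

// Walk the module's members at compile time and call every
// function whose name starts with "maintest"; report failures
// to stderr and return the success ratio.
double runMaintests(string[][] argSets)
{
    int fails, total;
    foreach (name; __traits(allMembers, runner)) // unrolled at compile time
    {
        static if (name.startsWith("maintest"))
        {
            foreach (args; argSets)
            {
                ++total;
                try
                {
                    int rc = __traits(getMember, runner, name)(args);
                    if (rc != 0)
                    {
                        ++fails;
                        stderr.writeln(name, " returned ", rc);
                    }
                }
                catch (Exception e)
                {
                    ++fails;
                    stderr.writeln(name, " threw: ", e.msg);
                }
            }
        }
    }
    return total ? cast(double)(total - fails) / total : 1.0;
}

void main()
{
    auto ratio = runMaintests([["prog", "-x"], ["prog", "-A"]]);
    stderr.writeln("success ratio: ", ratio);
}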
Aug 21 2016
rikki cattermole <rikki cattermole.co.nz> writes:
On 21/08/2016 9:02 PM, Solomon E wrote:
 On Saturday, 20 August 2016 at 20:39:13 UTC, Engine Machine wrote:
 We have unittest; what about an examples block?

 Examples would be self-contained short programs: each block 
 acts as a "main" function. One could run all the examples and 
 print all their output consecutively. It would also allow for 
 more robust testing, since it adds another layer.

 It would provide better examples for the docs, too. Instead of 
 using assert to check things, we could see a real program in 
 action. The output of each example could easily be generated 
 and appended to the end of the code.

 Seems like a win-win! (Maybe examples is not the best keyword, 
 but that is a side issue.)
 It seems like there could be a library function that uses 
 compile-time reflection to collect all the functions in a 
 module whose names start with "maintest" and calls each of them 
 in a try block.
....
Writing your own test runner isn't hard[0], although I'm not 
entirely sure this is even on the right continent in terms of 
purpose or scope. Either way, while there might be a need here, 
this is going about it the wrong way. After all, the 
documentation we currently generate is for libraries, not for 
the whole programs this is aimed at, and you're going to want 
something fairly custom, tailored to the documentation itself.

[0] https://github.com/dlang/druntime/blob/master/src/test_runner.d
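For illustration, the core of such a runner is small; a minimal 
sketch using the Runtime.moduleUnitTester hook from core.runtime 
(simplified, nowhere near the real test_runner.d):

import core.runtime : Runtime;
import std.stdio : writeln;

shared static this()
{
    // Replace druntime's default unittest runner with our own.
    Runtime.moduleUnitTester = function bool()
    {
        size_t run, failed;
        foreach (m; ModuleInfo) // every module compiled into the binary
        {
            if (m is null)
                continue;
            auto fp = m.unitTest; // null if the module has no unittests
            if (fp is null)
                continue;
            ++run;
            try
                fp();
            catch (Throwable t)
            {
                ++failed;
                writeln(m.name, ": ", t.msg);
            }
        }
        writeln(run - failed, "/", run, " modules passed");
        return failed == 0; // returning false aborts before main() runs
    };
}

void main() {}

Build with -unittest; the custom tester runs after module 
constructors and before main.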
Aug 21 2016
Solomon E <default avatar.org> writes:
On Sunday, 21 August 2016 at 09:02:09 UTC, Solomon E wrote:
 On Saturday, 20 August 2016 at 20:39:13 UTC, Engine Machine 
 wrote:
 We have unittest; what about an examples block?
....
 It seems like there could be a library function that uses 
 compile-time reflection to collect all the functions in a 
 module whose names start with "maintest" and calls each of them 
 in a try block. The catch block would just print the error 
 messages and incorrect return codes to stderr and count the 
 total failures; finally, the number of tests run would be 
 counted to return a success ratio.
[Replying to my own post] So here's what I wrote on the idea in 
D, as it cooled down this evening. It was an interesting 
exercise in D style. I kept getting blocked by suggested 
features not being available (static foreach, enum string[], 
etc.), so I decided to cut it down to something a little simpler 
than I was going for: no imports, string mixins, traits, or 
underscores, just the core D language. I think it shows there 
are enough features in D to build something like a test 
framework that multiplies the number of tests that can be run, 
at a small overhead in verbosity (i.e. I had to repeat the 
function refs and function names).

module patterntester;

struct PatternTestResults(returnT)
{
    ResultsArray!(returnT)[string] funCases;
    int failures;
    int successes;
    int exceptions;
    int tries;
    int funs;
    ulong cases;
}

struct ResultsArray(returnT)
{
    PatternTestResult!returnT[int] res;
}

enum SingleTestStatus { UNTESTED = 0, SUCCESS, FAILURE, EXCEPTION }

struct PatternTestResult(returnT)
{
    returnT returned;
    Exception exception;
    SingleTestStatus status;
}

struct TestIO(argsT, returnT)
{
    argsT arguments;
    returnT expect;

    this(argsT args, returnT ret)
    {
        arguments = args;
        expect = ret;
    }
}

immutable(Match!(argsT, returnT))[] FunctionGetter(string pattern, argsT, returnT)()
{
    Match!(argsT, returnT)[] matches;
    foreach (num, fun; maintests.contents)
    {
        string sym = maintests.names[num];
        if (pattern == sym[0 .. pattern.length])
            matches ~= Match!(argsT, returnT)(sym, fun);
    }
    return matches.idup;
}

struct Match(argsT, returnT)
{
    string funName;
    returnT function(argsT) funRef;

    this(string funStr, returnT function(argsT) funRefer)
    {
        funName = funStr;
        funRef = funRefer;
    }
}

PatternTestResults!returnT PatternTest(string pattern, argsT, returnT)
    (TestIO!(argsT, returnT)[int] tests)
{
    enum Match!(argsT, returnT)[] matches =
        FunctionGetter!(pattern, argsT, returnT)();
    PatternTestResults!returnT results;
    alias STS = SingleTestStatus;
    foreach (match; matches)
    {
        string funStr = match.funName;
        returnT function(argsT) funref = match.funRef;
        results.funCases[funStr] = ResultsArray!returnT();
        foreach (tnum, testPair; tests)
        {
            argsT args = testPair.arguments;
            returnT expect = testPair.expect;
            auto singleResult = PatternTestResult!returnT();
            try
            {
                auto exitcode = funref(args);
                if (exitcode != expect)
                {
                    ++results.failures;
                    singleResult.status = STS.FAILURE;
                }
                else
                {
                    ++results.successes;
                    singleResult.status = STS.SUCCESS;
                }
                singleResult.returned = exitcode;
            }
            catch (Exception ex)
            {
                ++results.failures;
                ++results.exceptions;
                singleResult.exception = ex;
                singleResult.status = STS.EXCEPTION;
            }
            finally
            {
                ++results.tries;
            }
            results.funCases[funStr].res[tnum] = singleResult;
        }
        ++results.funs;
    }
    results.cases = tests.length;
    assert(results.cases * results.funs == results.tries);
    return results;
}

struct Flist(argsT, returnT)
{
    alias funt = returnT function(argsT);
    funt[] contents;
    string[] names;
}

enum maintests = Flist!(string[], int)(
    [&maintestA, &maintestB, &maintestC, &maintestD],
    ["maintestA", "maintestB", "maintestC", "maintestD"]);

int maintestA(string[] args) { return 0; }
int maintestB(string[] args) { return 1; }
int maintestC(string[] args) { throw new Exception("meant throw"); }
int maintestD(string[] args) { return 3 * (args[$ - 1] == "-A"); }

unittest
{
    string df = "./a.out";
    alias fntype = TestIO!(string[], int);

    // first test set: one success
    auto aresult = PatternTest!("maintestA", string[], int)
        ([1: fntype([df, "a", "b"], 0)]);
    assert(aresult.failures == 0);
    assert(aresult.tries == 1);
    assert(aresult.exceptions == 0);
    assert(aresult.successes == 1);
    assert(aresult.funCases["maintestA"].res[1].returned == 0);
    assert(aresult.funCases["maintestA"].res[1].exception is null);
    alias STS = SingleTestStatus;
    assert(aresult.funCases["maintestA"].res[1].status == STS.SUCCESS);

    // second test set: one failure
    auto bresult = PatternTest!("maintestB", string[], int)
        ([1: fntype([df, "a", "b"], 0)]);
    assert(bresult.failures == 1);
    assert(bresult.tries == 1);
    assert(bresult.exceptions == 0);
    assert(bresult.successes == 0);
    assert(bresult.funCases["maintestB"].res[1].returned == 1);
    assert(bresult.funCases["maintestB"].res[1].exception is null);

    // third test set: one exception
    auto cresult = PatternTest!("maintestC", string[], int)
        ([1: fntype([df, "a", "b"], 0)]);
    assert(cresult.failures == 1);
    assert(cresult.tries == 1);
    assert(cresult.exceptions == 1);
    assert(cresult.successes == 0);
    assert(cresult.funCases["maintestC"].res[1].returned == int.init);
    assert(cresult.funCases["maintestC"].res[1].exception !is null);
    assert(cresult.funCases["maintestC"].res[1].exception.msg == "meant throw");

    // fourth test set: multiplied tests (4 functions x 3 cases = 12 tries)
    auto dresult = PatternTest!("maintest", string[], int)
        ([1: fntype([df, "a", "b"], 0),
          2: fntype([df, "c", "d"], 0),
          3: fntype([df, "-x", "-A"], 3)]);
    assert(dresult.failures == 7);
    assert(dresult.tries == 12);
    assert(dresult.exceptions == 3);
    assert(dresult.successes == 5);
}

void main(string[] args)
{
    assert(args[0] == "./d.out");
}
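(In case anyone wants to run it: judging from the args[0] assert 
at the end, it was presumably built and run as something like

dmd -unittest -ofd.out patterntester.d
./d.out

with the unittest blocks doing all the real work before main.)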
Aug 21 2016
ZombineDev <petar.p.kirov gmail.com> writes:
On Saturday, 20 August 2016 at 20:39:13 UTC, Engine Machine wrote:
 We have unittest; what about an examples block?

 Examples would be self-contained short programs: each block 
 acts as a "main" function. One could run all the examples and 
 print all their output consecutively. It would also allow for 
 more robust testing, since it adds another layer.

 It would provide better examples for the docs, too. Instead of 
 using assert to check things, we could see a real program in 
 action. The output of each example could easily be generated 
 and appended to the end of the code.

 Seems like a win-win! (Maybe examples is not the best keyword, 
 but that is a side issue.)
https://github.com/dlang/dlang.org/pull/1297
Aug 21 2016