digitalmars.D - Let's improve the dmd tester.
- Stefan Koch (19/19) Jun 25 2020 Lately,
- H. S. Teoh (14/17) Jun 25 2020 [...]
- Andrei Alexandrescu (3/16) Jun 25 2020 Not to mention there seems to be a contest among CI engines on which
- jmh530 (2/16) Jun 25 2020 Sounds like an entrepreneurial opportunity to me.
- Jacob Carlborg (57/78) Jun 25 2020 The tests which are located in test/unit have a separate test runner
- Seb (14/22) Jun 25 2020 You do realize that you can run all tests locally?
- Stefan Koch (5/22) Jun 25 2020 Yes that would be the first things which should be fixed.
- Nils Lankila (5/22) Jun 25 2020 That's generally not true as between two runs you change the
- Nils Lankila (6/27) Jun 25 2020 A status line (same line updated, no new line) would be nice...
- Walter Bright (12/21) Jun 25 2020 One of my issues with this is:
- Seb (16/24) Jun 26 2020 That's entirely up to `rdmd`. It picks the compiler binary in
Lately, I have been working quite a lot on DMD again (thanks to my employer, who is making use of my experience).

One thing that makes intrusive changes to DMD hard to do is the testing "script" which executes and evaluates the tests for DMD. It greets you with a wall of text in which passing and failing tests happily coexist. That means that spotting which test failed, for example in the output of a CI service where you can't grep, is a multi-second barrier in one's development process.

I do believe that fixing this will have a large positive impact on the stability and maintainability of dmd, which is important since it's still the main semantic engine behind D.

I am going to try to fix the tester, but help and suggestions are always welcome.

Greetings,
Stefan
Jun 25 2020
On Thu, Jun 25, 2020 at 02:39:18PM +0000, Stefan Koch via Digitalmars-d wrote:
[...]
> That means that spotting which test failed, for example in the output
> of a CI service where you can't grep, is a multi-second barrier in
> one's development process.
[...]

Seriously, why isn't there a CI service out there that allows you instant access to the text output *without* requiring you to use a browser on some over-complex, over-engineered JS-encumbered website? I mean, c'mon people, it's *plaintext* for crying out loud. We're not talking about 3D-accelerated FPS-in-a-browser here. Isn't there some URL that you can just curl to access the output instantly?

T

--
A mathematician learns more and more about less and less, until he knows everything about nothing; whereas a philosopher learns less and less about more and more, until he knows nothing about everything.
Jun 25 2020
On 6/25/20 12:29 PM, H. S. Teoh wrote:
> On Thu, Jun 25, 2020 at 02:39:18PM +0000, Stefan Koch via Digitalmars-d wrote:
> [...]
>> That means that spotting which test failed, for example in the output
>> of a CI service where you can't grep, is a multi-second barrier in
>> one's development process.
> [...]
>
> Seriously, why isn't there a CI service out there that allows you
> instant access to the text output *without* requiring you to use a
> browser on some over-complex, over-engineered JS-encumbered website?
> I mean, c'mon people, it's *plaintext* for crying out loud. We're not
> talking about 3D-accelerated FPS-in-a-browser here. Isn't there some
> URL that you can just curl to access the output instantly?

Not to mention there seems to be a contest among CI engines on which manages to make the error lines in a build more difficult to find.
Jun 25 2020
On Thursday, 25 June 2020 at 16:29:24 UTC, H. S. Teoh wrote:
> On Thu, Jun 25, 2020 at 02:39:18PM +0000, Stefan Koch via Digitalmars-d wrote:
> [...]
>> That means that spotting which test failed, for example in the output
>> of a CI service where you can't grep, is a multi-second barrier in
>> one's development process.
> [...]
>
> Seriously, why isn't there a CI service out there that allows you
> instant access to the text output *without* requiring you to use a
> browser on some over-complex, over-engineered JS-encumbered website?
> I mean, c'mon people, it's *plaintext* for crying out loud. We're not
> talking about 3D-accelerated FPS-in-a-browser here. Isn't there some
> URL that you can just curl to access the output instantly?
>
> T

Sounds like an entrepreneurial opportunity to me.
Jun 25 2020
On 2020-06-25 16:39, Stefan Koch wrote:
> Lately, I have been working quite a lot on DMD again (thanks to my
> employer, who is making use of my experience).
>
> One thing that makes intrusive changes to DMD hard to do is the testing
> "script" which executes and evaluates the tests for DMD. It greets you
> with a wall of text in which passing and failing tests happily coexist.
> That means that spotting which test failed, for example in the output
> of a CI service where you can't grep, is a multi-second barrier in
> one's development process.
>
> I do believe that fixing this will have a large positive impact on the
> stability and maintainability of dmd, which is important since it's
> still the main semantic engine behind D.
>
> I am going to try to fix the tester, but help and suggestions are
> always welcome.

The tests located in test/unit have a separate test runner (invoked by the main one) which only prints a summary when all tests are successful. You can invoke them explicitly using `test/run.d -u`:

$ ./test/run.d -u
unit_test_runner is already up-to-date
24 tests, 0 failures

It has several advantages over the other test runner:

* It compiles all tests into one executable, which results in faster execution times.

* Since all tests are compiled into one executable, it's much more scalable than the main test runner. One can freely create directories and files to organize the tests without slowing it down too much.

* Since the tests and the compiler run in the same executable, the tests have access to the compiler internals, making it much easier to test certain parts. For example, a test for the lexer only needs to invoke the lexer, not the whole compiler including code generation.

* All tests have a UDA which describes what each test does.

* It's possible to filter tests by passing only the files you would like to test:

$ ./test/run.d -u test/unit/deinitialization.d
unit_test_runner is already up-to-date
7 tests, 0 failures

* It's possible to filter tests by UDA:

$ ./test/run.d -u --filter Expression.deinitialize
unit_test_runner is already up-to-date
1 tests, 0 failures

* When a test fails it prints a nice report, including a stack trace, the UDAs attached to the test and a summary of the failing tests:

$ ./test/run.d -u test/unit/deinitialization.d
unit_test_runner is already up-to-date

Failures:

1) global.deinitialize
core.exception.AssertError /Users/doob/development/d/dlang/dmd/test/unit/deinitialization.d(28): unittest failure
----------------
??:? _d_unittestp [0x10f0687a1]
/Users/doob/development/d/dlang/dmd/test/unit/deinitialization.d:28 void deinitialization.__unittest_L4_C1() [0x10edf4380]
/Users/doob/development/d/dlang/dmd/test/unit/deinitialization.d:3 core.runtime.UnitTestResult runner.unitTestRunner() [0x10ee02b57]
??:? runModuleUnitTests [0x10f069182]
??:? void rt.dmain2._d_run_main2(char[][], ulong, extern (C) int function(char[][])*).runAll() [0x10f08436c]
??:? void rt.dmain2._d_run_main2(char[][], ulong, extern (C) int function(char[][])*).tryExec(scope void delegate()) [0x10f0842f8]
??:? _d_run_main2 [0x10f08425c]
??:? _d_run_main [0x10f083fd9]
__main.d:1 main [0x10edf1e1d]
??:? start [0x7fff5f1473d4]

7 tests, 1 failures

Failed tests:
/Users/doob/development/d/dlang/dmd/test/unit/deinitialization.d:28

I recommend converting all tests in "test/compilable" and "test/fail_compilation" to the style of the tests in "test/unit".

--
/Jacob Carlborg
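[Editor's sketch] For readers who haven't looked at test/unit, here is a small, self-contained D sketch of the mechanism described above: unittest blocks tagged with a string UDA, discovered via compile-time reflection and filtered by name. It is not dmd's actual runner; the module name, the test names and the runFiltered helper are made up for illustration.

// Self-contained sketch of the idea behind test/unit: unittest blocks
// tagged with a string UDA, discovered via compile-time reflection and
// filtered by name. NOT dmd's actual runner; names below are made up.
module uda_filter_sketch;

import std.algorithm.searching : canFind;
import std.stdio : writefln;

@("Expression.deinitialize")
unittest
{
    assert(1 + 1 == 2);
}

@("Lexer.identifier")
unittest
{
    assert("foo".length == 3);
}

/// Runs every tagged test whose UDA contains `filter` (all tests if empty).
void runFiltered(alias mod)(string filter)
{
    size_t run, failed;

    static foreach (test; __traits(getUnitTests, mod))
    {{
        enum name = __traits(getAttributes, test)[0];
        if (filter.length == 0 || canFind(name, filter))
        {
            ++run;
            try
                test();
            catch (Throwable t)   // an AssertError carries file/line info
            {
                ++failed;
                writefln("Failure: %s -- %s", name, t.msg);
            }
        }
    }}

    writefln("%s tests, %s failures", run, failed);
}

// Usage (compile with -unittest): runFiltered!uda_filter_sketch("Expression.deinitialize");
// dmd's real runner additionally registers itself through
// core.runtime.Runtime.extendedModuleUnitTester so the default druntime
// runner does not also execute every test.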
Jun 25 2020
On Thursday, 25 June 2020 at 14:39:18 UTC, Stefan Koch wrote:
> One thing that makes intrusive changes to DMD hard to do is the testing
> "script" which executes and evaluates the tests for DMD.

You do realize that you can run all tests locally? For example, `./test/run.d compilable` takes less than ten seconds on my machine. You can also execute only individual tests, e.g. `./test/run.d compilable/dtoh_enum.d`.

> It greets you with a wall of text in which passing and failing tests
> happily coexist.

The tester ensures that the log output of each test isn't mixed.

> That means that spotting which test failed, for example in the output
> of a CI service where you can't grep, is a multi-second barrier in
> one's development process.

So to summarize your "wishes":

a) print the error messages of failed tests at the end of the execution

BTW, for local testing you can just rerun the test script. It will only rerun the failing tests.

b) hide the messages for successful tests behind the verbose flag

Did I understand you correctly?
Jun 25 2020
On Thursday, 25 June 2020 at 18:15:57 UTC, Seb wrote:
> On Thursday, 25 June 2020 at 14:39:18 UTC, Stefan Koch wrote:
>> [...]
>
> You do realize that you can run all tests locally? For example,
> `./test/run.d compilable` takes less than ten seconds on my machine.
> You can also execute only individual tests, e.g.
> `./test/run.d compilable/dtoh_enum.d`.
>
>> [...]
>
> The tester ensures that the log output of each test isn't mixed.
>
>> [...]
>
> So to summarize your "wishes":
>
> a) print the error messages of failed tests at the end of the execution
>
> BTW, for local testing you can just rerun the test script. It will only
> rerun the failing tests.
>
> b) hide the messages for successful tests behind the verbose flag
>
> Did I understand you correctly?

Yes, those would be the first things to fix.

The point is that I cannot find everything by running locally; for example, I don't have a Mac, and I don't use Windows for development.
Jun 25 2020
On Thursday, 25 June 2020 at 18:15:57 UTC, Seb wrote:
> On Thursday, 25 June 2020 at 14:39:18 UTC, Stefan Koch wrote:
>> [...]
>
> You do realize that you can run all tests locally? For example,
> `./test/run.d compilable` takes less than ten seconds on my machine.
> You can also execute only individual tests, e.g.
> `./test/run.d compilable/dtoh_enum.d`.
>
>> [...]
>
> The tester ensures that the log output of each test isn't mixed.
>
>> [...]
>
> So to summarize your "wishes":
>
> a) print the error messages of failed tests at the end of the execution
>
> BTW, for local testing you can just rerun the test script. It will only
> rerun the failing tests.

That's generally not true, as between two runs you change the compiler. Failed tests are rerun when you fix a test, for example its TEST_OUTPUT section, which happens, but less frequently.

> b) hide the messages for successful tests behind the verbose flag
>
> Did I understand you correctly?
Jun 25 2020
On Thursday, 25 June 2020 at 18:15:57 UTC, Seb wrote:
> On Thursday, 25 June 2020 at 14:39:18 UTC, Stefan Koch wrote:
>> One thing that makes intrusive changes to DMD hard to do is the testing
>> "script" which executes and evaluates the tests for DMD.
>
> You do realize that you can run all tests locally? For example,
> `./test/run.d compilable` takes less than ten seconds on my machine.
> You can also execute only individual tests, e.g.
> `./test/run.d compilable/dtoh_enum.d`.
>
>> It greets you with a wall of text in which passing and failing tests
>> happily coexist.
>
> The tester ensures that the log output of each test isn't mixed.
>
>> That means that spotting which test failed, for example in the output
>> of a CI service where you can't grep, is a multi-second barrier in
>> one's development process.
>
> So to summarize your "wishes":
>
> a) print the error messages of failed tests at the end of the execution
>
> BTW, for local testing you can just rerun the test script. It will only
> rerun the failing tests.
>
> b) hide the messages for successful tests behind the verbose flag

A status line (same line updated, no new line) would be nice... in the form of

    %s test(s) run, %d test(s) failed, %d test(s) passed

and then the summary of failed tests at the end, to prevent the problem of interleaving.
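[Editor's sketch] A minimal sketch of what such a status line could look like, assuming plain terminal output; the function names (showStatus, showSummary) are made up, not part of run.d:

// Sketch of an in-place status line plus end-of-run summary.
// `\r` rewrites the same terminal line; failed test names are collected
// and only printed once everything has finished.
import std.stdio : stdout, write, writef, writefln;

void showStatus(size_t run, size_t failed, size_t passed)
{
    writef("\r%s test(s) run, %s test(s) failed, %s test(s) passed",
        run, failed, passed);
    stdout.flush();            // make sure the updated line is visible immediately
}

void showSummary(const string[] failedTests)
{
    write("\n");               // terminate the status line
    foreach (name; failedTests)
        writefln("FAILED: %s", name);
}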
Jun 25 2020
On 6/25/2020 11:15 AM, Seb wrote:
> On Thursday, 25 June 2020 at 14:39:18 UTC, Stefan Koch wrote:
>> One thing that makes intrusive changes to DMD hard to do is the testing
>> "script" which executes and evaluates the tests for DMD.
>
> You do realize that you can run all tests locally? For example,
> `./test/run.d compilable` takes less than ten seconds on my machine.
> You can also execute only individual tests, e.g.
> `./test/run.d compilable/dtoh_enum.d`.

One of my issues with this is:

1. how is the compiler version that is used to build test/run.d determined?
2. how is the compiler version that runs the test(s) determined?
3. how is the import path to druntime/phobos set for (1) and (2)?

Please don't answer here, add a PR to fix run.d and the corresponding README.md.

https://issues.dlang.org/show_bug.cgi?id=20979

I've added the keyword "TestSuite" for test suite issues:

https://issues.dlang.org/buglist.cgi?keywords=TestSuite&list_id=232009

Please add all test suite issues under that keyword.

And my previous grumpy rant:

https://digitalmars.com/d/archives/digitalmars/D/Serious_Problems_with_the_Test_Suite_339807.html
Jun 25 2020
On Friday, 26 June 2020 at 01:04:50 UTC, Walter Bright wrote:
> One of my issues with this is:
>
> 1. how is the compiler version that is used to build test/run.d
> determined?

That's entirely up to `rdmd`. It picks the compiler binary in your path, i.e. the host compiler you used to build dmd.

> 2. how is the compiler version that runs the test(s) determined?

There's _NO_ version determination. It uses the freshly built dmd binary, i.e. it only looks into the `generated` folder.

Details: https://github.com/dlang/dmd/blob/master/test/tools/paths.d

> 3. how is the import path to druntime/phobos set for (1) and (2)?

As for all compilation in D: by DMD's configuration file (dmd.conf on Posix or sc.ini on Windows). build.d ensures that a proper config file gets generated.

DMD's configuration lookup sequence is far from ideal, but I gave up on improving it (see e.g. [1]) as I gave up on DMD for anything substantial.

[1] https://github.com/dlang/dmd/pull/7915

> Please don't answer here, add a PR to fix run.d and the corresponding
> README.md.

Well, then I better not answer.
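[Editor's sketch] To illustrate the two lookups described above — the host compiler taken from the environment/PATH versus the compiler under test taken from the `generated` folder — here is a small sketch. It is not the actual test/tools/paths.d; the generated/<os>/<build>/<model> layout and the HOST_DMD fallback are assumptions modelled on the description above.

// Illustrative only -- not dmd's actual test/tools/paths.d.
// Shows the distinction made above: the host compiler comes from the
// environment (whatever rdmd would pick up), while the compiler under
// test is always the binary that build.d wrote into `generated`.
module paths_sketch;

import std.path : buildPath;
import std.process : environment;

version (Windows) enum os = "windows";
else version (OSX) enum os = "osx";
else              enum os = "linux";

/// Host compiler used to build run.d itself (assumption: HOST_DMD
/// override, otherwise whatever `dmd` is on PATH).
string hostCompiler()
{
    return environment.get("HOST_DMD", "dmd");
}

/// Compiler under test: the freshly built binary inside the repository's
/// `generated` folder (build/model layout assumed here).
string testedCompiler(string repoRoot, string build = "release", string model = "64")
{
    version (Windows) enum exe = "dmd.exe";
    else              enum exe = "dmd";
    return buildPath(repoRoot, "generated", os, build, model, exe);
}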
Jun 26 2020