digitalmars.D - Raising the bar on Phobos unittest coverage
- Walter Bright (80/80) Mar 17 2013 With the new -cov=nnn (not pulled yet) and -main, we can now add to the ...
- Jacob Carlborg (4/6) Mar 18 2013 What are these switches for?
- David Nadlinger (4/9) Mar 18 2013 Executing the program fails (non-zero exit code) if the coverage
- Jacob Carlborg (4/6) Mar 18 2013 Aha, I see. How do you get the percentage, with the standard -cov switch...
- bearophile (4/5) Mar 18 2013 If you use -cov it writes the coverage percentage at the end.
- Martin Nowak (4/7) Mar 18 2013 Can someone look into std.file's unittests. They use 60% (32s/50s) of
- Jonathan M Davis (8/19) Mar 18 2013 My first guess would be that you're running a different OS from Walter a...
- Walter Bright (5/10) Mar 18 2013 Code that is statically compiled out is not counted as executable code b...
- Jonathan M Davis (5/17) Mar 18 2013 Yeah, looking into it should be straigtforward. It just means taking the...
- monarch_dodra (44/61) Mar 18 2013 Is it though? It basically means that if you don't instantiate a
- Jonathan M Davis (4/78) Mar 18 2013 Good point. But it _is_ valuable to not have unused version blocks not c...
- Martin Nowak (8/16) Mar 18 2013 Sorry for the misunderstanding, but I really meant runtime as in time.
- Jonathan M Davis (10/31) Mar 18 2013 Yeah. The typical approach of testing each function individually is a bi...
- Jonas Drewsen (5/6) Mar 18 2013 Please note that std.net.curl needs to have PHOBOS_TEST_ALLOW_NET
- Walter Bright (3/6) Mar 18 2013 Perhaps that should be a -version=TEST_ALLOW_NET setting rather than an
- Jonas Drewsen (3/11) Mar 18 2013 Maybe.. I did it like that so I wouldn't have to recompile
- Jonathan M Davis (5/18) Mar 18 2013 The normal thing to do with unit tests when you want to have them enable...
- Walter Bright (4/7) Mar 18 2013 Also, a unittest executable is built and then run and then deleted. I se...
- Brad Roberts (10/18) Mar 18 2013 A super critical part of tests that involve networking.. make sure you'r...
- Walter Bright (2/11) Mar 18 2013 I agree. We need to comport ourselves as good net citizens.
- Joseph Rushton Wakeling (6/14) Jun 20 2013 I've submitted a pull request raising that to 91% :-)
- Andrei Alexandrescu (4/18) Jun 20 2013 Nice! Could you please submit an enhancement request?
- Joseph Rushton Wakeling (8/15) Jun 20 2013 Auto-testing failed, but I think not due to me :-(
- Jonathan M Davis (4/16) Jun 20 2013 It'll retry eventually (at least after changes have been merged into the...
- Joseph Rushton Wakeling (5/7) Jun 20 2013 Meh, no worries. I can always add a patch removing those superfluous
- Jacob Carlborg (6/11) Jun 21 2013 I thought that flag was supposed to make the test fail if the coverage
- Joseph Rushton Wakeling (10/13) Jun 21 2013 -cov=nnn tests aren't (AFAICS) implemented as part of make unittest, nor...
- Jacob Carlborg (8/17) Jun 21 2013 I would say that if the -cov=nnn flag works as I described you need to
- Walter Bright (4/11) Jun 21 2013 The main point of the bar is so there's an automated check for when it d...
- Joseph Rushton Wakeling (8/11) Jun 21 2013 I wasn't sure if you might allow some margin to allow for the fact that
- Jacob Carlborg (4/7) Jun 22 2013 Having both doesn't hurt.
- Joseph Rushton Wakeling (5/6) Jun 22 2013 Wasn't arguing against having the minimum coverage bar, I think it's a g...
- Jacob Carlborg (4/6) Jun 22 2013 It's per module and should be raised if the coverage is increased.
- Jonathan M Davis (4/10) Jun 20 2013 I believe that new commits in the pull (and possibly rebasing the pull) ...
- qznc (5/9) Jun 21 2013 Took that is a chance to become a contributor to Phobos and
- Jacob Carlborg (4/7) Jun 21 2013 A matching bugzilla is always a good idea. The changelog is built from t...
- Vladimir Panteleev (4/12) Jun 21 2013 Additional tests do not affect D users, though, so it's not as
With the new -cov=nnn (not pulled yet) and -main, we can now add to the build process a minimum bar for unit test coverage. The current state looks like this; the percentages are the current coverage amounts. It's not bad, but there's a lot of low hanging fruit ready for pull requests!

From now on, I'd like such unittest coverage (and justification for low coverage) to be part of the minimum standard for all new phobos modules.

$(DMD) -cov=83 -unittest -main -run std\stdio.d
$(DMD) -cov=100 -unittest -main -run std\stdiobase.d
$(DMD) -cov=95 -unittest -main -run std\string.d
$(DMD) -cov=71 -unittest -main -run std\format.d
$(DMD) -cov=83 -unittest -main -run std\file.d
$(DMD) -cov=86 -unittest -main -run std\range.d
$(DMD) -cov=95 -unittest -main -run std\array.d
$(DMD) -cov=100 -unittest -main -run std\functional.d
$(DMD) -cov=96 -unittest -main -run std\path.d
$(DMD) -cov=41 -unittest -main -run std\outbuffer.d
$(DMD) -cov=89 -unittest -main -run std\utf.d
$(DMD) -cov=93 -unittest -main -run std\csv.d
$(DMD) -cov=91 -unittest -main -run std\math.d
$(DMD) -cov=95 -unittest -main -run std\complex.d
$(DMD) -cov=70 -unittest -main -run std\numeric.d
$(DMD) -cov=94 -unittest -main -run std\bigint.d
$(DMD) -cov=100 -unittest -main -run std\metastrings.d
$(DMD) -cov=95 -unittest -main -run std\bitmanip.d
$(DMD) -cov=82 -unittest -main -run std\typecons.d
$(DMD) -cov=44 -unittest -main -run std\uni.d
$(DMD) -cov=91 -unittest -main -run std\base64.d
$(DMD) -cov=99 -unittest -main -run std\md5.d
$(DMD) -cov=0 -unittest -main -run std\ctype.d
$(DMD) -cov=100 -unittest -main -run std\ascii.d
$(DMD) -cov=0 -unittest -main -run std\demangle.d
$(DMD) -cov=57 -unittest -main -run std\uri.d
$(DMD) -cov=51 -unittest -main -run std\mmfile.d
$(DMD) -cov=95 -unittest -main -run std\getopt.d
$(DMD) -cov=92 -unittest -main -run std\signals.d
$(DMD) -cov=100 -unittest -main -run std\typetuple.d
$(DMD) -cov=85 -unittest -main -run std\traits.d
$(DMD) -cov=62 -unittest -main -run std\encoding.d
$(DMD) -cov=61 -unittest -main -run std\xml.d
$(DMD) -cov=79 -unittest -main -run std\random.d
$(DMD) -cov=92 -unittest -main -d -run std\exception.d
$(DMD) -cov=73 -unittest -main -run std\concurrency.d
$(DMD) -cov=95 -unittest -main -run std\datetime.d
$(DMD) -cov=96 -unittest -main -run std\uuid.d
$(DMD) -cov=100 -unittest -main -run std\digest\crc.d
$(DMD) -cov=55 -unittest -main -run std\digest\sha.d
$(DMD) -cov=100 -unittest -main -run std\digest\md.d
$(DMD) -cov=100 -unittest -main -run std\digest\ripemd.d
$(DMD) -cov=75 -unittest -main -run std\digest\digest.d
$(DMD) -cov=95 -unittest -main -run std\algorithm.d
$(DMD) -cov=83 -unittest -main -run std\variant.d
$(DMD) -cov=0 -unittest -main -run std\syserror.d
$(DMD) -cov=58 -unittest -main -run std\zlib.d
$(DMD) -cov=54 -unittest -main -run std\stream.d
$(DMD) -cov=53 -unittest -main -run std\socket.d
$(DMD) -cov=0 -unittest -main -run std\socketstream.d
$(DMD) -cov=88 -unittest -main -run std\container.d
$(DMD) -cov=90 -unittest -main -d -run std\conv.d
$(DMD) -cov=0 -unittest -main -run std\zip.d
$(DMD) -cov=92 -unittest -main -run std\cstream.d
$(DMD) -cov=77 -unittest -main -run std\regex.d
$(DMD) -cov=92 -unittest -main -run std\json.d
$(DMD) -cov=87 -unittest -main -run std\parallelism.d
$(DMD) -cov=50 -unittest -main -run std\mathspecial.d
$(DMD) -cov=71 -unittest -main -run std\process.d
$(DMD) -cov=0 -unittest -main -run crc32.d
$(DMD) -cov=70 -unittest -main -run std\net\isemail.d
$(DMD) -cov=2 -unittest -main -run std\net\curl.d
$(DMD) -cov=60 -unittest -main -run std\windows\registry.d
$(DMD) -cov=55 -unittest -main -run std\internal\uni.d
$(DMD) -cov=0 -unittest -main -run std\internal\uni_tab.d
$(DMD) -cov=0 -unittest -main -run std\internal\digest\sha_SSSE3.d
$(DMD) -cov=50 -unittest -main -run std\internal\math\biguintcore.d
$(DMD) -cov=75 -unittest -main -run std\internal\math\biguintnoasm.d
$(DMD) -cov=94 -unittest -main -run std\internal\math\gammafunction.d
$(DMD) -cov=92 -unittest -main -run std\internal\math\errorfunction.d
$(DMD) -cov=31 -unittest -main -run std\internal\windows\advapi32.d
$(DMD) -cov=58 -unittest -main -run etc\c\zlib.d
Mar 17 2013
On 2013-03-18 02:00, Walter Bright wrote:

> With the new -cov=nnn (not pulled yet) and -main, we can now add to the
> build process a minimum bar for unit test coverage.

What are these switches for?

--
/Jacob Carlborg
Mar 18 2013
On Monday, 18 March 2013 at 09:48:11 UTC, Jacob Carlborg wrote:

> On 2013-03-18 02:00, Walter Bright wrote:
>
>> With the new -cov=nnn (not pulled yet) and -main, we can now add to the
>> build process a minimum bar for unit test coverage.
>
> What are these switches for?

Executing the program fails (non-zero exit code) if the coverage is less than
the specified percentage.

David
Mar 18 2013
On 2013-03-18 11:09, David Nadlinger wrote:

> Executing the program fails (non-zero exit code) if the coverage is less
> than the specified percentage.

Aha, I see. How do you get the percentage, with the standard -cov switch?

--
/Jacob Carlborg
Mar 18 2013
Jacob Carlborg:

> How do you get the percentage, with the standard -cov switch?

If you use -cov it writes the coverage percentage at the end.

Bye,
bearophile
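For illustration, a hand-made sketch (not real output): compiling with -cov
produces a per-module .lst report in which each executable line is prefixed by
its execution count, and the overall percentage is appended at the end. The
module name and counts below are invented for the example:

    dmd -cov -unittest -main -run example.d

       |int twice(int x)
       |{
      2|    return x * 2;
       |}
       |
       |unittest
       |{
      1|    assert(twice(2) == 4);
      1|    assert(twice(3) == 6);
       |}
    example.d is 100% covered

With -cov=nnn the same run additionally exits with a non-zero status if that
final percentage falls below nnn.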
Mar 18 2013
On 03/18/2013 02:00 AM, Walter Bright wrote:

> From now on, I'd like such unittest coverage (and justification for low
> coverage) to be part of the minimum standard for all new phobos modules.

Great.

> $(DMD) -cov=83 -unittest -main -run std\file.d

Can someone look into std.file's unittests. They use 60% (32s/50s) of the
unittest RUNtime on my machine.
Mar 18 2013
On Monday, March 18, 2013 17:34:12 Martin Nowak wrote:

> On 03/18/2013 02:00 AM, Walter Bright wrote:
>
>> From now on, I'd like such unittest coverage (and justification for low
>> coverage) to be part of the minimum standard for all new phobos modules.
>
> Great.
>
>> $(DMD) -cov=83 -unittest -main -run std\file.d
>
> Can someone look into std.file's unittests. They use 60% (32s/50s) of the
> unittest RUNtime on my machine.

My first guess would be that you're running a different OS from Walter and
that OS-specific code counts as not being run when you run it on a different
OS. If that's the case, std.file is doomed to have lower code coverage because
of the larger-than-average percentage of it which is OS-specific. The coverage
should still be looked into and improved where possible, but Linux code is
clearly never going to run on Windows or vice versa.

- Jonathan M Davis
Mar 18 2013
On 3/18/2013 11:12 AM, Jonathan M Davis wrote:

> On Monday, March 18, 2013 17:34:12 Martin Nowak wrote:
>
>> Can someone look into std.file's unittests. They use 60% (32s/50s) of the
>> unittest RUNtime on my machine.
>
> My first guess would be that you're running a different OS from Walter and
> that OS-specific code counts as not being run when you run it on a
> different OS.

Code that is statically compiled out is not counted as executable code by the
coverage analyzer.

As for looking into it, examine the report generated - std-file.lst - it'll
tell you which lines were executed and which were not.
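A minimal sketch of what that means in practice (function name invented, not
actual std.file code): on a Posix build the Windows branch below is never
compiled in, so its lines do not appear as unexecuted executable lines in the
.lst report and do not drag the percentage down.

    version (Windows)
    {
        void removeReadOnlyAttr(string name)
        {
            // Windows-only work; compiled out entirely on Posix builds,
            // so invisible to the coverage analyzer there
        }
    }
    else version (Posix)
    {
        void removeReadOnlyAttr(string name)
        {
            // Posix-only work; this is the branch the analyzer counts on Linux
        }
    }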
Mar 18 2013
On Monday, March 18, 2013 12:09:47 Walter Bright wrote:

> On 3/18/2013 11:12 AM, Jonathan M Davis wrote:
>
>> On Monday, March 18, 2013 17:34:12 Martin Nowak wrote:
>>
>>> Can someone look into std.file's unittests. They use 60% (32s/50s) of
>>> the unittest RUNtime on my machine.
>>
>> My first guess would be that you're running a different OS from Walter
>> and that OS-specific code counts as not being run when you run it on a
>> different OS.
>
> Code that is statically compiled out is not counted as executable code by
> the coverage analyzer.

Then it was a bad guess. Good to know (and better behavior that way really).

> As for looking into it, examine the report generated - std-file.lst - it'll
> tell you which lines were executed and which were not.

Yeah, looking into it should be straightforward. It just means taking the time
to do it.

- Jonathan M Davis
Mar 18 2013
On Monday, 18 March 2013 at 20:30:32 UTC, Jonathan M Davis wrote:

> On Monday, March 18, 2013 12:09:47 Walter Bright wrote:
>
>> Code that is statically compiled out is not counted as executable code by
>> the coverage analyzer.
>
> Then it was a bad guess. Good to know (and better behavior that way
> really).

Is it though? It basically means that if you don't instantiate a template, it
statically doesn't get compiled, so it is not counted:

main.d:
//--------
 |void foo(T)()
 |{
 |    int i = 0;
 |    ++i;
 |    ++i;
 |    ++i;
 |    ++i;
 |    ++i;
 |    ++i;
 |    ++i;
 |    ++i;
 |    ++i;
 |    ++i;
 |    ++i;
 |    ++i;
 |    ++i;
 |    ++i;
 |    ++i;
 |    ++i;
 |    ++i;
 |    ++i;
 |    ++i;
 |    ++i;
 |}
 |
 |void main()
 |{
1|    int i;
 |}

main.d is 100% covered
//--------

main.d 100% covered my ass.

I also call BS on std.algorithm's 95% coverage. I looked at the .lst. It is
*roughly* covered, but not 95% covered.

We need to be able to differentiate code that is stripped out due to "version"
blocks (which legitimately doesn't count for the current build), and code that
simply didn't get compiled because it WASN'T COVERED.
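(For completeness, a sketch of the obvious workaround rather than a fix for
the underlying issue: explicitly instantiating the template from a unittest
makes its body part of the compiled, and therefore counted, code.)

    unittest
    {
        // Without this instantiation, foo's body is never compiled, so none
        // of its lines show up in the coverage report at all.
        foo!int();
    }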
Mar 18 2013
On Monday, March 18, 2013 22:13:36 monarch_dodra wrote:

> On Monday, 18 March 2013 at 20:30:32 UTC, Jonathan M Davis wrote:
>
>> Then it was a bad guess. Good to know (and better behavior that way
>> really).
>
> Is it though? It basically means that if you don't instantiate a template,
> it statically doesn't get compiled, so it is not counted:
>
> [...]
>
> We need to be able to differentiate code that is stripped out due to
> "version" blocks (which legitimately doesn't count for the current build),
> and code that simply didn't get compiled because it WASN'T COVERED.

Good point. But it _is_ valuable for unused version blocks not to count
negatively towards code coverage.

- Jonathan M Davis
Mar 18 2013
On 03/18/2013 09:30 PM, Jonathan M Davis wrote:

>>> Can someone look into std.file's unittests. They use 60% (32s/50s) of
>>> the unittest RUNtime on my machine.
>>
>> My first guess would be that you're running a different OS from Walter
>> and that OS-specific code counts as not being run when you run it on a
>> different OS.
>
> Code that is statically compiled out is not counted as executable code by
> the coverage analyzer.

Sorry for the misunderstanding, but I really meant runtime as in time. It's
great that we're striving for more unittest coverage but we should keep an eye
on the time it takes to run them.

So I ran the executables with /bin/time and std.file accounts for 60% of the
total runtime (not including compilation). Even though file IO is notoriously
slow this seems excessive.

https://github.com/D-Programming-Language/phobos/pull/653
Mar 18 2013
On Monday, March 18, 2013 23:44:15 Martin Nowak wrote:

> Sorry for the misunderstanding, but I really meant runtime as in time. It's
> great that we're striving for more unittest coverage but we should keep an
> eye on the time it takes to run them.
>
> So I ran the executables with /bin/time and std.file accounts for 60% of
> the total runtime (not including compilation). Even though file IO is
> notoriously slow this seems excessive.
>
> https://github.com/D-Programming-Language/phobos/pull/653

Yeah. The typical approach of testing each function individually is a bit
expensive with std.file. It really should be refactored so that it does
everything in only a few unittest blocks rather than a bunch of them, so that
it minimizes how much it repeats I/O operations. I've been meaning to go
through it and do that but haven't gotten around to it yet. Thanks for
reminding me though. I'd forgotten about it.

Still, 60% of the total runtime seems awfully high. I don't recall ever
getting the impression that anything like that was going on.

- Jonathan M Davis
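As a sketch of that consolidation idea (illustrative code, not std.file's
actual tests), one scratch directory and one teardown can serve a whole batch
of checks instead of each unittest doing its own file setup:

    unittest
    {
        import std.file : exists, getSize, mkdirRecurse, readText, remove,
            rmdirRecurse, tempDir, write;
        import std.path : buildPath;

        auto dir = buildPath(tempDir(), "std_file_scratch");
        mkdirRecurse(dir);
        scope(exit) rmdirRecurse(dir);  // one setup/teardown shared by many checks

        auto f = buildPath(dir, "a.txt");
        write(f, "hello");              // exercise write/read/exists/getSize/remove
        assert(readText(f) == "hello");
        assert(exists(f) && getSize(f) == 5);
        remove(f);
        assert(!exists(f));
    }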
Mar 18 2013
> $(DMD) -cov=2 -unittest -main -run std\net\curl.d

Please note that std.net.curl needs to have the PHOBOS_TEST_ALLOW_NET env
variable defined or else it will early out on most unittests. I guess that
will increase the coverage from 2 to something more sane.

-Jonas
Mar 18 2013
On 3/18/2013 2:09 PM, Jonas Drewsen wrote:

> Please note that std.net.curl needs to have the PHOBOS_TEST_ALLOW_NET env
> variable defined or else it will early out on most unittests. I guess that
> will increase the coverage from 2 to something more sane.

Perhaps that should be a -version=TEST_ALLOW_NET setting rather than an
environment variable.
Mar 18 2013
On Monday, 18 March 2013 at 22:06:55 UTC, Walter Bright wrote:

> On 3/18/2013 2:09 PM, Jonas Drewsen wrote:
>
>> Please note that std.net.curl needs to have the PHOBOS_TEST_ALLOW_NET env
>> variable defined or else it will early out on most unittests. I guess
>> that will increase the coverage from 2 to something more sane.
>
> Perhaps that should be a -version=TEST_ALLOW_NET setting rather than an
> environment variable.

Maybe. I did it like that so I wouldn't have to recompile every time I needed
to change that flag.
Mar 18 2013
On Monday, March 18, 2013 23:58:59 Jonas Drewsen wrote:

> On Monday, 18 March 2013 at 22:06:55 UTC, Walter Bright wrote:
>
>> Perhaps that should be a -version=TEST_ALLOW_NET setting rather than an
>> environment variable.
>
> Maybe. I did it like that so I wouldn't have to recompile every time I
> needed to change that flag.

The normal thing to do with unit tests when you want to have them enabled only
part of the time is to use a version block. And you have to recompile whenever
you make changes anyway, so I wouldn't expect it to be a big deal in general.

- Jonathan M Davis
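A minimal sketch of the two approaches under discussion (everything except the
PHOBOS_TEST_ALLOW_NET name is illustrative):

    // Version-block approach: the network tests only exist when compiled
    // with -version=TEST_ALLOW_NET, so they cost nothing otherwise.
    version (TEST_ALLOW_NET) unittest
    {
        // ...tests that touch the network...
    }

    // Environment-variable approach (roughly what std.net.curl does now):
    // the tests are always compiled but bail out early at run time.
    unittest
    {
        import std.process : environment;
        if (environment.get("PHOBOS_TEST_ALLOW_NET") is null)
            return; // skip unless the variable is set
        // ...tests that touch the network...
    }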
Mar 18 2013
On 3/18/2013 4:08 PM, Jonathan M Davis wrote:

> The normal thing to do with unit tests when you want to have them enabled
> only part of the time is to use a version block. And you have to recompile
> whenever you make changes anyway, so I wouldn't expect it to be a big deal
> in general.

Also, a unittest executable is built and then run and then deleted. I see no
point to keeping it around and running it in different ways, hence no real
value in passing parameters to it via an environment variable.
Mar 18 2013
A super critical part of tests that involve networking: make sure you're
pointing at resources that you own. Pointing at even a popular site that
you're sure can handle the load is just rude.

The auto-tester currently runs the full build/test cycle a little over 1200
times a day these days, or roughly once a minute. Multiply that by however
many remote calls it makes. And that's just the tester; it doesn't include the
ad-hoc test runs that people do. Nor does it count whatever auto-testing is
done for gdc or ldc.

On Mon, 18 Mar 2013, Jonas Drewsen wrote:

>> $(DMD) -cov=2 -unittest -main -run std\net\curl.d
>
> Please note that std.net.curl needs to have the PHOBOS_TEST_ALLOW_NET env
> variable defined or else it will early out on most unittests. I guess that
> will increase the coverage from 2 to something more sane.
>
> -Jonas
Mar 18 2013
On 3/18/2013 5:33 PM, Brad Roberts wrote:

> A super critical part of tests that involve networking: make sure you're
> pointing at resources that you own. Pointing at even a popular site that
> you're sure can handle the load is just rude.
>
> The auto-tester currently runs the full build/test cycle a little over 1200
> times a day these days, or roughly once a minute. Multiply that by however
> many remote calls it makes. And that's just the tester; it doesn't include
> the ad-hoc test runs that people do. Nor does it count whatever
> auto-testing is done for gdc or ldc.

I agree. We need to comport ourselves as good net citizens.
Mar 18 2013
On 03/18/2013 01:00 AM, Walter Bright wrote:

> With the new -cov=nnn (not pulled yet) and -main, we can now add to the
> build process a minimum bar for unit test coverage. The current state
> looks like this; the percentages are the current coverage amounts. It's
> not bad, but there's a lot of low hanging fruit ready for pull requests!
>
> $(DMD) -cov=79 -unittest -main -run std\random.d

I've submitted a pull request raising that to 91% :-)

Can I suggest tweaking the make files to print code coverage percentages when
running the Phobos unittests? Would be useful to check if there are any
unexpected drops, or just to have a quick glance at where it could be useful
to try and help out.
Jun 20 2013
On 6/20/13 5:19 PM, Joseph Rushton Wakeling wrote:

> On 03/18/2013 01:00 AM, Walter Bright wrote:
>
>> $(DMD) -cov=79 -unittest -main -run std\random.d
>
> I've submitted a pull request raising that to 91% :-)

Awes.

> Can I suggest tweaking the make files to print code coverage percentages
> when running the Phobos unittests? Would be useful to check if there are
> any unexpected drops, or just to have a quick glance at where it could be
> useful to try and help out.

Nice! Could you please submit an enhancement request?

Andrei
Jun 20 2013
On 06/20/2013 10:59 PM, Andrei Alexandrescu wrote:

>> I've submitted a pull request raising that to 91% :-)
>
> Awes.

Auto-testing failed, but I think not due to me :-(

    setting up remote to pull -> https://github.com/WebDrake/phobos.git
    fetching contents of https://github.com/WebDrake/phobos.git
    error: RPC failed; result=22, HTTP code = 405
    fatal: The remote end hung up unexpectedly

Does it re-try in the event of pull failures like this?

>> Can I suggest tweaking the make files to print code coverage percentages
>> when running the Phobos unittests? Would be useful to check if there are
>> any unexpected drops, or just to have a quick glance at where it could be
>> useful to try and help out.
>
> Nice! Could you please submit an enhancement request?

Done: http://d.puremagic.com/issues/show_bug.cgi?id=10430
Jun 20 2013
On Thursday, June 20, 2013 23:14:42 Joseph Rushton Wakeling wrote:

> On 06/20/2013 10:59 PM, Andrei Alexandrescu wrote:
>
>> Awes.
>
> Auto-testing failed, but I think not due to me :-(
>
>     setting up remote to pull -> https://github.com/WebDrake/phobos.git
>     fetching contents of https://github.com/WebDrake/phobos.git
>     error: RPC failed; result=22, HTTP code = 405
>     fatal: The remote end hung up unexpectedly
>
> Does it re-try in the event of pull failures like this?

It'll retry eventually (at least after changes have been merged into the main
repository), but if it's been failing, then it's at the bottom of the queue.

- Jonathan M Davis
Jun 20 2013
On 06/20/2013 11:41 PM, Jonathan M Davis wrote:

> It'll retry eventually (at least after changes have been merged into the
> main repository), but if it's been failing, then it's at the bottom of the
> queue.

Meh, no worries. I can always add a patch removing those superfluous
assert(false)s and take it back to the top, right? :-)

No, not right away. If by tomorrow afternoon it's still languishing I might
think about it. :-P
Jun 20 2013
On 2013-06-20 23:19, Joseph Rushton Wakeling wrote:

> I've submitted a pull request raising that to 91% :-)
>
> Can I suggest tweaking the make files to print code coverage percentages
> when running the Phobos unittests? Would be useful to check if there are
> any unexpected drops, or just to have a quick glance at where it could be
> useful to try and help out.

I thought that flag was supposed to make the test fail if the coverage gets
lower than specified? It would be nice to have the value printed to see if the
coverage is increased.

--
/Jacob Carlborg
Jun 21 2013
On 06/21/2013 09:33 AM, Jacob Carlborg wrote:

> I thought that flag was supposed to make the test fail if the coverage
> gets lower than specified? It would be nice to have the value printed to
> see if the coverage is increased.

-cov=nnn tests aren't (AFAICS) implemented as part of make unittest, nor is
plain -cov.

I think a minimum acceptable threshold is necessary but not sufficient -- say
your minimum code coverage is 85%; it's still most likely unacceptable if your
coverage drops (say) from 92% to 87%.

Anyway, the main benefit I see in printing the percentages isn't for testing
purposes (though it's handy) but in advertising the existence and usefulness
of code coverage analysis, and giving developers a nudge as to where and what
to work on :-)
Jun 21 2013
On 2013-06-21 11:26, Joseph Rushton Wakeling wrote:

> -cov=nnn tests aren't (AFAICS) implemented as part of make unittest, nor
> is plain -cov.

I thought that was the idea. Perhaps it's not finished yet.

> I think a minimum acceptable threshold is necessary but not sufficient --
> say your minimum code coverage is 85%; it's still most likely unacceptable
> if your coverage drops (say) from 92% to 87%.

I would say that if the -cov=nnn flag works as I described you need to update
the percentage as soon as you increase the coverage. In the above case the
flag should have said -cov=92 and not 85.

> Anyway, the main benefit I see in printing the percentages isn't for
> testing purposes (though it's handy) but in advertising the existence and
> usefulness of code coverage analysis, and giving developers a nudge as to
> where and what to work on :-)

I don't mind it :) It's better if it can be enforced.

--
/Jacob Carlborg
Jun 21 2013
On 6/21/2013 2:26 AM, Joseph Rushton Wakeling wrote:

> I think a minimum acceptable threshold is necessary but not sufficient --
> say your minimum code coverage is 85%; it's still most likely unacceptable
> if your coverage drops (say) from 92% to 87%.

If your minimum acceptable coverage is 92%, why list it as 85%?

> Anyway, the main benefit I see in printing the percentages isn't for
> testing purposes (though it's handy) but in advertising the existence and
> usefulness of code coverage analysis, and giving developers a nudge as to
> where and what to work on :-)

The main point of the bar is so there's an automated check for when it drops.
You don't have to manually look, which will never happen.
Jun 21 2013
On 06/21/2013 09:42 PM, Walter Bright wrote:

> If your minimum acceptable coverage is 92%, why list it as 85%?

I wasn't sure if you might allow some margin to allow for the fact that
introducing new functionality might introduce a drop in overall code coverage
(I found not all "failures" of code coverage are avoidable and not all of them
are real failures).

> The main point of the bar is so there's an automated check for when it
> drops. You don't have to manually look, which will never happen.

I just know that if every time I build Phobos I get a report that mentions
that std.somemodule has only 53% code coverage, I might start to feel an
obligation to do something about that. ;-)
Jun 21 2013
On 2013-06-22 00:29, Joseph Rushton Wakeling wrote:

> I just know that if every time I build Phobos I get a report that mentions
> that std.somemodule has only 53% code coverage, I might start to feel an
> obligation to do something about that. ;-)

Having both doesn't hurt.

--
/Jacob Carlborg
Jun 22 2013
On 06/22/2013 10:52 AM, Jacob Carlborg wrote:

> Having both doesn't hurt.

Wasn't arguing against having the minimum coverage bar, I think it's a good
idea. :-) I did misinterpret it a bit, though, as being a common minimum
applied to all modules, not a per-module bar.
Jun 22 2013
On 2013-06-22 13:01, Joseph Rushton Wakeling wrote:

> I did misinterpret it a bit, though, as being a common minimum applied to
> all modules, not a per-module bar.

It's per module and should be raised if the coverage is increased.

--
/Jacob Carlborg
Jun 22 2013
On Thursday, June 20, 2013 23:50:00 Joseph Rushton Wakeling wrote:

> On 06/20/2013 11:41 PM, Jonathan M Davis wrote:
>
>> It'll retry eventually (at least after changes have been merged into the
>> main repository), but if it's been failing, then it's at the bottom of
>> the queue.
>
> Meh, no worries. I can always add a patch removing those superfluous
> assert(false)s and take it back to the top, right? :-)

I believe that new commits in the pull (and possibly rebasing the pull) do
cause a pull request to go up in priority in the pull tester.

- Jonathan M Davis
Jun 20 2013
On Monday, 18 March 2013 at 01:00:43 UTC, Walter Bright wrote:

> The current state looks like this; the percentages are the current
> coverage amounts. It's not bad, but there's a lot of low hanging fruit
> ready for pull requests!
>
> $(DMD) -cov=57 -unittest -main -run std\uri.d

Took that as a chance to become a contributor to Phobos and submitted a pull
request [0]. Am I supposed to file something in Bugzilla or just wait for
someone to look at my request?

[0] https://github.com/D-Programming-Language/phobos/pull/1359
Jun 21 2013
On 2013-06-21 11:13, qznc wrote:

> Took that as a chance to become a contributor to Phobos and submitted a
> pull request [0]. Am I supposed to file something in Bugzilla or just wait
> for someone to look at my request?

A matching bugzilla is always a good idea. The changelog is built from that.

--
/Jacob Carlborg
Jun 21 2013
On Friday, 21 June 2013 at 10:11:44 UTC, Jacob Carlborg wrote:

> On 2013-06-21 11:13, qznc wrote:
>
>> Took that as a chance to become a contributor to Phobos and submitted a
>> pull request [0]. Am I supposed to file something in Bugzilla or just
>> wait for someone to look at my request?
>
> A matching bugzilla is always a good idea. The changelog is built from
> that.

Additional tests do not affect D users, though, so it's not as interesting to
have on the changelog as actual fixes or enhancements.
Jun 21 2013