
digitalmars.D - D-thrift package detects regressions since 2.061, where is the regression suite?

reply "glycerine" <noreply noreply.com> writes:
Grrr...

Apparently nobody has been testing the D - Apache Thrift bindings
since 2.061, and dmd has since accumulated multiple regressions
that affect the correctness of the Thrift implementation. I
emailed with David N. and he said that this was quite common for
each release of dmd, and that while he used to religiously
evaluate each new dmd release on the Thrift bindings, he had
simply not had the time to test each of the more recent releases
thoroughly.

Serialization is fundamental. It really isn't the kind of thing
that should ever be allowed to break, and it isn't something that
should be tested manually. It should be an automatic part of the
regression test suite.
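
To make it concrete, the automated check I have in mind is a
round-trip unittest along these lines (the struct and the
serialize/deserialize functions here are made-up stand-ins, not
the actual Thrift-generated API):

// Hypothetical stand-in for a Thrift-generated struct.
struct Person { string name; int id; }

// Trivial placeholder codec standing in for the real Thrift
// protocol; it exists only so the test below is self-contained.
ubyte[] serialize(Person p)
{
    import std.bitmanip : nativeToLittleEndian;
    return nativeToLittleEndian(p.id) ~ cast(ubyte[]) p.name.dup;
}

Person deserialize(const(ubyte)[] buf)
{
    import std.bitmanip : littleEndianToNative;
    ubyte[4] idBytes = buf[0 .. 4];
    return Person(cast(string) buf[4 .. $].idup,
                  littleEndianToNative!int(idBytes));
}

unittest
{
    // Round-trip check: what goes in must come back out unchanged.
    auto original = Person("glycerine", 42);
    auto decoded  = deserialize(serialize(original));
    assert(decoded == original, "serialization round-trip regressed");
}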

Where is the regression suite for D located, and how can I add to
it?

There used to be github issue tracking, but I don't see it any
more... is it hiding under their new interface perhaps...?

Thanks.

- glycerine
Aug 13 2013
"Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Tuesday, August 13, 2013 19:49:51 glycerine wrote:
 Grrr...
 
 Apparently nobody has been testing the D - Apache Thrift bindings
 since 2.061, and dmd has since accumulated multiple regressions
 that affect the correctness of the Thrift implementation. I
 emailed with David N. and he said that this was quite common for
 each release of dmd, and that while he used to religiously
 evaluate each new dmd release on the Thrift bindings, he had
 simply not had the time to test each of the more recent releases
 thoroughly.
 
 Serialization is fundamental. It really isn't the kind of thing
 that should ever be allowed to break, and it isn't something that
 should be tested manually. It should be an automatic part of the
 regression test suite.
 
 Where is the regression suite for D located, and how can I add to
 it?
We do not include 3rd party libraries or projects in any kind of regression suite, so if that's what you're looking for, you're out of luck. David or someone else working on the Thrift stuff would have had to set something up for the D Thrift bindings specifically.

We do have an autotester which checks that the current compiler and standard library pass their test suites, as well as a tester which checks pull requests, and that can be found here: http://d.puremagic.com/test-results/

If you report bugs in bugzilla, then when they are fixed, unit tests will be added for them so that they won't fail again. Our bugzilla can be found here: http://d.puremagic.com/issues
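
(For reference, this is roughly what such a checked-in test looks like: a small, self-contained program reduced from the bug report. The issue number and the bug itself here are hypothetical.)

// https://d.puremagic.com/issues/show_bug.cgi?id=NNNN (placeholder)
// Reduced test case: struct copies must preserve the array payload.
struct S { int[] data; }

void main()
{
    auto a = S([1, 2, 3]);
    auto b = a;                  // the hypothetical regression was here
    assert(b.data == [1, 2, 3]);
}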
 There used to be github issue tracking, but I don't see it any
 more... is it hiding under their new interface perhaps...?
We've never used github issue tracking for either the compiler or D's standard libraries. Maybe it was enabled at some point, but if so, it was ignored. I don't know what the D Thrift project does, though.

- Jonathan M Davis
Aug 13 2013
Denis Shelomovskij <verylonglogin.reg gmail.com> writes:
On 13.08.2013 21:49, glycerine wrote:
 Grrr...

 Apparently nobody has been testing the D - Apache Thrift bindings
 since 2.061, and dmd has since accumulated multiple regressions
 that affect the correctness of the Thrift implementation. I
 emailed with David N. and he said that this was quite common for
 each release of dmd, and that while he used to religiously
 evaluate each new dmd release on the Thrift bindings, he had
 simply not had the time to test each of the more recent releases
 thoroughly.

 Serialization is fundamental. It really isn't the kind of thing
 that should ever be allowed to break, and it isn't something that
 should be tested manually. It should be an automatic part of the
 regression test suite.

 Where is the regression suite for D located, and how can I add to
 it?

 There used to be github issue tracking, but I don't see it any
 more... is it hiding under their new interface perhaps...?

 Thanks.

 - glycerine
By the way, the ability to add custom projects to the D autotester has already been proposed, without any response:
http://forum.dlang.org/thread/kqm4ta$m7f$1 digitalmars.com

-- 
Денис В. Шеломовский
Denis V. Shelomovskij
Aug 23 2013
Walter Bright <newshound2 digitalmars.com> writes:
On 8/23/2013 10:34 AM, Denis Shelomovskij wrote:
 By the way, the ability to add custom projects to the D autotester
 has already been proposed, without any response:
 http://forum.dlang.org/thread/kqm4ta$m7f$1 digitalmars.com
The question comes up repeatedly, and I've answered it repeatedly, the latest on 8/20 in the thread "std.serialization: pre-voting review / discussion". Here's the message:

---------------------------------
On 8/18/2013 9:33 AM, David Nadlinger wrote:
 Having a system that regularly, automatically runs the test suites of
 several larger, well-known D projects, with the results being readily
 available to the DMD/druntime/Phobos teams, would certainly help. But
 it's also not ideal, since if a project starts to fail, the exact
 nature of the issue (regression in DMD or bug in the project, and if
 the former, a minimal test case) can often be hard to track down for
 somebody not already familiar with the code base.
That's exactly the problem. If these large projects are incorporated into the autotester, who is going to isolate and fix problems arising with them?

The test suite is designed to be a collection of already-isolated issues, so understanding what went wrong shouldn't be too difficult. Note that it is already noticeably harder to debug a Phobos unit test gone awry than the other tests. A full-blown project that nobody understands would fare far worse.

(And the other problem, of course, is that the test suite is designed to be runnable fairly quickly. Compiling some other large project and running its test suite would make the autotester much less useful as the turnaround time increases.)

Putting large projects into the autotester implies that development and support of those projects have been ceded to the core dev team, i.e. who is responsible for them has been badly blurred.
Aug 23 2013
"H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Aug 23, 2013 at 11:07:35AM -0700, Walter Bright wrote:
 On 8/23/2013 10:34 AM, Denis Shelomovskij wrote:
By the way, the ability to add custom projects to the D autotester
has already been proposed, without any response:
http://forum.dlang.org/thread/kqm4ta$m7f$1 digitalmars.com
The question comes up repeatedly, and I've answered it repeatedly, the latest on 8/20 in the thread "std.serialization: pre-voting review / discussion". Here's the message:

---------------------------------
On 8/18/2013 9:33 AM, David Nadlinger wrote:
 Having a system that regularly, automatically runs the test suites
 of several larger, well-known D projects with the results being
 readily available to the DMD/druntime/Phobos teams would certainly
 help. But it's also not ideal, since if a project starts to fail,
 the exact nature of the issue (regression in DMD or bug in the
 project, and if the former, a minimal test case) can often be hard
 to track down for somebody not already familiar with the code base.
That's exactly the problem. If these large projects are incorporated into the autotester, who is going to isolate and fix problems arising with them?

The test suite is designed to be a collection of already-isolated issues, so understanding what went wrong shouldn't be too difficult. Note that it is already noticeably harder to debug a Phobos unit test gone awry than the other tests. A full-blown project that nobody understands would fare far worse.

(And the other problem, of course, is that the test suite is designed to be runnable fairly quickly. Compiling some other large project and running its test suite would make the autotester much less useful as the turnaround time increases.)

Putting large projects into the autotester implies that development and support of those projects have been ceded to the core dev team, i.e. who is responsible for them has been badly blurred.
One idea that occurred to me is to put large external projects under a separate tester, not bound to the core dmd/druntime/phobos autotesting: an independent tester that regularly checks out git HEAD and compiles & tests said large projects (a rough sketch of such a tester's core loop is below). The devs can then monitor the status of these tests independently, and when something goes wrong, they can ping whoever is responsible for that project to investigate what might be the cause. If it's caused by the latest git commit(s), they can file regression bugs on the bugtracker; otherwise, they update their code to work with the new compiler.

If the responsible person doesn't respond, or there is no contact person, we could use this forum as a kind of community notice that something needs to be fixed somewhere. If nobody responds, the project is likely not worth keeping up with.

This way we don't slow down development / autotesting unnecessarily, and still let the community know when there might be potential problems with existing code. (It probably also helps the core devs stay aware of potential regressions without being held back from code changes.) If it's important, *somebody* will step up and file bugs and/or fix the issue. If nobody cares, then it's probably not worth fretting over.

T

-- 
Do not reason with the unreasonable; you lose by definition.
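
A rough sketch of such a tester's core loop, in D (the project list, directories, and build/test commands are invented for illustration, not an existing setup):

// Independent tester: assumes dmd has already been built from git HEAD
// and is first on PATH; then builds and tests each external project.
import std.process : executeShell;
import std.stdio : writefln;

void main()
{
    // Placeholder project names and commands.
    auto projects = [
        "thrift-d" : "cd work/thrift-d && make clean unittest",
        "vibe.d"   : "cd work/vibe.d && dub test",
    ];

    foreach (name, cmd; projects)
    {
        auto r = executeShell(cmd);
        if (r.status == 0)
            writefln("%s: OK", name);
        else
            // Either a dmd regression or project breakage; ping the
            // project's contact person to triage.
            writefln("%s: FAILED\n%s", name, r.output);
    }
}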
Aug 23 2013
Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 8/23/13, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:
 The devs can then monitor the
 status of these tests independently, and when something goes wrong, they
 can ping whoever is responsible for that project to investigate what
 might be the cause.
That's basically the same thing I said in that other thread. Considering that we have a close-knit D community, this sort of workflow could work.
Aug 23 2013
parent reply "David Nadlinger" <code klickverbot.at> writes:
On Friday, 23 August 2013 at 20:13:21 UTC, Andrej Mitrovic wrote:
 On 8/23/13, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:
 The devs can then monitor the status of these tests
 independently, and when something goes wrong, they can ping
 whoever is responsible for that project to investigate what
 might be the cause.
That's basically the same thing I said in that other thread. Considering that we have a close-knit D community, this sort of workflow could work.
Yep. I don't think anybody is suggesting to add external projects to the main CI system – at least I certainly wouldn't, being part of both the compiler and library writer camps.

But it could still be useful to have an automated "real world health check" that is publicly available, maybe even integrated with the official D package repository, if we are going to have one anytime soon (btw, did I miss any "official" announcement regarding code.dlang.org/dub?). This is not about shifting the responsibility for maintaining those libraries to the core team; it's about empowering the people working on the frontend/druntime/Phobos to know when they are breaking real-world code. I think it's pretty clear at this point that the regression test suite is nowhere near exhaustive even for the specified parts of the language, and then there are still the many holes in the spec to consider as well.

Maybe we can pull this off with an improved beta process where all D users are actually persuaded to participate. But seeing that Walter seems to avoid incorporating any of the ideas in that direction that have been brought up on dmd-beta previously (more public announcements, at least a rough tentative release schedule, meaningful file names/version info, …), I'm not sure what the plan is there.

David
Aug 23 2013
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Aug 24, 2013 at 01:18:14AM +0200, David Nadlinger wrote:
[...]
 Maybe we can pull this off with an improved beta process where all D
 users are actually persuaded to participate. But seeing that Walter
 seems to avoid incorporating any of the ideas in that direction that
 have been brought up on dmd-beta previously (more public
 announcements, at least a rough tentative release schedule, meaningful
 file names/version info, …), I'm not sure what the plan is there.
[...]

It could be as simple as announcing the availability of a beta release on the main D forum / newsgroup (i.e. here). I didn't even know dmd-beta existed until more than a year after I joined the D community, and now that I'm subscribed, I hardly ever hear anything from it. Posting the beta announcement here instead would, at the very least, reach a far wider audience.

(Or maybe post in both places; it wouldn't hurt. The whole point of a beta *release* is for the world to know about it, so that things can be tried out before the actual release. There's no point hiding the announcement in some obscure, isolated corner.)

T

-- 
Spaghetti code may be tangly, but lasagna code is just cheesy.
Aug 23 2013
David <d dav1d.de> writes:
On 23.08.2013 20:07, Walter Bright wrote:
 On 8/23/2013 10:34 AM, Denis Shelomovskij wrote:
 By the way, the ability to add custom projects to the D autotester
 has already been proposed, without any response:
 http://forum.dlang.org/thread/kqm4ta$m7f$1 digitalmars.com
The question comes up repeatedly, and I've answered it repeatedly, the latest on 8/20 in the thread "std.serialization: pre-voting review / discussion". Here's the message:

---------------------------------
On 8/18/2013 9:33 AM, David Nadlinger wrote:
 Having a system that regularly, automatically runs the test suites of
 several larger, well-known D projects, with the results being readily
 available to the DMD/druntime/Phobos teams, would certainly help. But
 it's also not ideal, since if a project starts to fail, the exact
 nature of the issue (regression in DMD or bug in the project, and if
 the former, a minimal test case) can often be hard to track down for
 somebody not already familiar with the code base.
That's exactly the problem. If these large projects are incorporated into the autotester, who is going to isolate and fix problems arising with them?

The test suite is designed to be a collection of already-isolated issues, so understanding what went wrong shouldn't be too difficult. Note that it is already noticeably harder to debug a Phobos unit test gone awry than the other tests. A full-blown project that nobody understands would fare far worse.

(And the other problem, of course, is that the test suite is designed to be runnable fairly quickly. Compiling some other large project and running its test suite would make the autotester much less useful as the turnaround time increases.)

Putting large projects into the autotester implies that development and support of those projects have been ceded to the core dev team, i.e. who is responsible for them has been badly blurred.
I find it funny how hard you try to get D "production ready" while making (in my opinion) bad decisions affecting the future of D, yet with every release I hit at least 3 regressions, one of which is usually a real WTF. At least run this test suite of 3rd party projects every now and then, and especially before a release. I personally don't mind these regressions (well, I do, but I can live with them, spending a day or two finding the root cause and fixing it there), but companies will.

While we're at it, the D release zip should also be checked: not being able to install a new release because the zip is fucked up doesn't speak well for D (especially when it takes forever to download dmd for Windows and you then find a shitton of binaries for other OSes in it as well).

</rant>
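
A check like that is easy to automate; here is a minimal sketch in D that verifies a release zip against a published SHA-256 and confirms every member expands cleanly (the file name and checksum are placeholders):

import std.digest : toHexString, LetterCase;
import std.digest.sha : sha256Of;
import std.file : read;
import std.stdio : writeln;
import std.zip : ZipArchive;

void main()
{
    auto bytes = cast(ubyte[]) read("dmd.2.063.2.zip");

    // 1. Checksum against the value published alongside the release.
    immutable expected = "put-the-published-sha256-here";
    auto actual = toHexString!(LetterCase.lower)(sha256Of(bytes));
    assert(actual == expected, "checksum mismatch: corrupted download?");

    // 2. Make sure the archive itself is well-formed and expandable.
    auto zip = new ZipArchive(bytes);
    foreach (name, member; zip.directory)
        zip.expand(member);    // throws if the member data is broken

    writeln("archive OK: ", zip.directory.length, " members");
}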
Aug 23 2013
Walter Bright <newshound2 digitalmars.com> writes:
On 8/23/2013 3:11 PM, David wrote:
 I find it funny how hard you try to get D "production ready" while
 making (in my opinion) bad decisions affecting the future of D, yet
 with every release I hit at least 3 regressions, one of which is
 usually a real WTF.
Please join us in the beta test program, then. The point of it is so that users can compile their projects and find problems before we do a release.
Aug 23 2013
Val Markovic <val markovic.io> writes:
On Fri, Aug 23, 2013 at 11:07 AM, Walter Bright
<newshound2 digitalmars.com>wrote:

 That's exactly the problem. If these large projects are incorporated into
 the autotester, who is going to isolate/fix problems arising with them?

 The test suite is designed to be a collection of already-isolated issues,
 so understanding what went wrong shouldn't be too difficult. Note that
 already it is noticeably much harder to debug a phobos unit test gone awry
 than the other tests. A full blown project that nobody understands would
 fare far worse.
AFAIR both Clang and GCC have entire third-party projects in their test suites. I know that at least SQLite is part of both, and that's a pretty big project. If I recall correctly, GCC releases are blocked on successfully compiling the Linux kernel, all of Firefox, and possibly Qt, and the third-party projects' tests need to finish without failures as well. My recollection is a bit vague here, though.

Now, whether they compile and run all the tests for these projects on every commit, or just make sure nothing has broken before making a new release, I don't know. But I do know that it's at least the latter.
Aug 23 2013
Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 23/08/13 20:38, H. S. Teoh wrote:
 One idea that occurred to me is to put large external projects under a
 separate tester, not bound to the core dmd/druntime/phobos autotesting,
 but an independent tester that regularly checks out git HEAD and
 compiles & tests said large projects.
I proposed something along these lines shortly after DConf:
http://forum.dlang.org/thread/mailman.47.1369319426.13711.digitalmars-d puremagic.com

I thought it could be useful both as a stability tester _and_ as a means to evaluate the prospective impact of deliberately breaking changes. It was quite well received as an idea, but is probably a big job to take on ... :-(
Aug 24 2013
parent reply "Dicebot" <public dicebot.lv> writes:
On Saturday, 24 August 2013 at 07:50:00 UTC, Joseph Rushton 
Wakeling wrote:
 I proposed something along these lines shortly after DConf:
 http://forum.dlang.org/thread/mailman.47.1369319426.13711.digitalmars-d puremagic.com

 I thought it could be useful both as a stability tester _and_ 
 as a means to evaluate the prospective impact of deliberately 
 breaking changes.

 It was quite well received as an idea but is probably a big job 
 to take on ... :-(
I do want to contribute one once I decide what I want to do about a more powerful server (such a suite is a bit too hard on my small VPS). Pretty sure something can be done this year, but it will take some time (months+).
Aug 25 2013
Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 25/08/13 21:57, Dicebot wrote:
 I do want to contribute one once I decide what I want to do about a
 more powerful server (such a suite is a bit too hard on my small VPS).
 Pretty sure something can be done this year, but it will take some
 time (months+).
Is this something where we can do some crowdfunding? Might be worth trying to invest in some server/cloud infrastructure for D projects like this that have collective benefit.
Aug 25 2013
parent "Dicebot" <public dicebot.lv> writes:
On Sunday, 25 August 2013 at 20:02:49 UTC, Joseph Rushton 
Wakeling wrote:
 On 25/08/13 21:57, Dicebot wrote:
 I do want to contribute one once I decide what I want to do
 about a more powerful server (such a suite is a bit too hard
 on my small VPS). Pretty sure something can be done this year,
 but it will take some time (months+).
Is this something where we can do some crowdfunding? Might be worth trying to invest in some server/cloud infrastructure for D projects like this that have collective benefit.
Well, I wanted to do the move anyway for my own needs, so it is just a planned side effect. It is not that costly; I simply don't have time to do the move properly.
Aug 25 2013