digitalmars.D.announce - Autotesting dub packages with dmd nightly
- Sebastiaan Koppe (7/7) Jul 16 2016 Just to let you guys know - and to be sure no one is doing the
- rikki cattermole (4/11) Jul 16 2016 If you add nightly can you add x last major releases?
- Sebastiaan Koppe (5/9) Jul 16 2016 You mean like a badge? That is possible. Of course dub shows all
- rikki cattermole (5/13) Jul 16 2016 Yeah badges. With caching that shouldn't be too much of an issue.
- Basile B. (6/13) Jul 16 2016 I think that everybody will agree that's an excellent ideas to
- Sebastiaan Koppe (3/8) Jul 16 2016 Besides installing often used dependencies in the build image, I
- Basile B. (4/12) Jul 17 2016 a new DUB property (in the package description) could solve this.
- Jack Stouffer (4/11) Jul 16 2016 Perhaps this code could also be used to find dub packages which
- Guillaume Piolat (3/10) Jul 17 2016 That would be really really great, especially if the unittests
- Jacob Carlborg (10/17) Jul 17 2016 Why not using something existing, like GitLab? Although GitLab is a
- Sebastiaan Koppe (4/13) Jul 17 2016 I don't have a good answer for this question. It might very well
- qznc (17/24) Jul 18 2016 Great! Maybe I can help you? Do you have a repository somewhere
- Sebastiaan Koppe (13/26) Jul 18 2016 Not yet. Let me first do some groundwork. It could take month
- qznc (2/5) Jul 18 2016 Hey, me too. Slow and steady wins the race. ;)
- Jacob Carlborg (4/6) Jul 18 2016 In that case, go with something that already exists.
- Edwin van Leeuwen (4/9) Jul 18 2016 I think Martin Nowak has some sort of automated setup for testing
- Jacob Carlborg (7/13) Jul 18 2016 No, it's not built for this but I don't see a reason why it wouldn't
- Jacob Carlborg (53/60) Jul 18 2016 Just as a test I setup a project on GitLab.com [1]. This is an example
- vladdeSV (2/9) Jul 19 2016 Nice :)
- Sebastiaan Koppe (44/44) Aug 06 2016 I have just finished a first iteration of dubster, a test runner
- Seb (28/36) Aug 06 2016 That are excellent news!
- Sebastiaan Koppe (24/42) Aug 07 2016 I was thinking about having people register for notifications
- Basile B. (27/30) Aug 06 2016 No endpoint but still possible in two steps. For example test
- Sebastiaan Koppe (6/15) Aug 07 2016 Yeah, I considered something like that myself. I find scraping to
- Seb (5/8) Aug 06 2016 Why don't you make a PR to the dub registry
- Martin Nowak (22/24) Aug 07 2016 I actually don't think this makes sense. You're not in the
- Sebastiaan Koppe (19/40) Aug 08 2016 Thanks for taking the time to respond.
- Martin Nowak (15/46) Aug 10 2016 We want better ranking of dub packages (mostly by download, but for sure
- Sebastiaan Koppe (24/33) Aug 10 2016 I was also thinking about integrating results from CI builds that
- Seb (19/22) Aug 10 2016 Thinking about it, you could also opt for integrating it with the
- Sebastiaan Koppe (19/20) Aug 22 2016 I finally got around implementing running dmd/druntime/phobos
- Seb (32/54) Aug 26 2016 That's awesome to know!
- Sebastiaan Koppe (21/50) Aug 27 2016 Not at all. Just need an api key from someone with administration
Just to let you guys know - and to be sure no one is doing the same - I decided to go ahead and *start* writing an autotester that will fetch dmd nightly and unittest each dub package. It will be using a classic master-worker architecture and will leverage docker containers. I am aiming really low at first, but will eventually add things like memory usage, history, notifications, etc.
Jul 16 2016
On 17/07/2016 8:34 AM, Sebastiaan Koppe wrote:
> Just to let you guys know - and to be sure no one is doing the same - I
> decided to go ahead and *start* writing an autotester that will fetch dmd
> nightly and unittest each dub package. It will be using a classic
> master-worker architecture and will leverage docker containers. I am
> aiming really low at first, but will eventually add things like memory
> usage, history, notifications, etc.

If you add nightly, can you add the x last major releases?

Also how about adding a 'button' for each one that says whether it passed or not, and for which version of dmd?
Jul 16 2016
On Sunday, 17 July 2016 at 04:28:54 UTC, rikki cattermole wrote:
> If you add nightly, can you add the x last major releases?

Yeah, especially for dub, nightly is not that important.

> Also how about adding a 'button' for each one that says whether it passed
> or not, and for which version of dmd?

You mean like a badge? That is possible. Of course dub shows all packages at once, so we would have to coordinate to avoid a flood of requests.
Jul 16 2016
On 17/07/2016 6:15 PM, Sebastiaan Koppe wrote:
> You mean like a badge? That is possible. Of course dub shows all packages
> at once, so we would have to coordinate to avoid a flood of requests.

Yeah, badges. With caching that shouldn't be too much of an issue. If you use redirection to the actual badge, it shouldn't eat too much bandwidth or CPU time. After all, the badge should be the same for a given dmd version + pass/fail.
Jul 16 2016
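The badge-redirect idea above is cache-friendly because a badge only depends on (dmd version, pass/fail), so there are only a handful of distinct images. A minimal sketch of the URL construction, assuming a shields.io-style static badge scheme (the URL layout and names are illustrative, not dubster's actual design):

```shell
# Map a (dmd version, status) pair to a static shields.io badge URL
# (hypothetical scheme; a server would 302-redirect badge requests here).
badge_url() {
  dmd_version="$1"; status="$2"
  case "$status" in
    passing) color=brightgreen ;;
    failing) color=red ;;
    *)       color=lightgrey ;;
  esac
  echo "https://img.shields.io/badge/dmd_${dmd_version}-${status}-${color}.svg"
}
badge_url 2.071.1 passing
```

Because every request for a given version/status pair redirects to the same static image, repeated requests are absorbed by HTTP caches instead of hitting the tester.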
On Saturday, 16 July 2016 at 20:34:49 UTC, Sebastiaan Koppe wrote:
> Just to let you guys know - and to be sure no one is doing the same - I
> decided to go ahead and *start* writing an autotester that will fetch dmd
> nightly and unittest each dub package. [...]

I think everybody will agree that's an excellent idea to discover regressions. How do you plan to handle libraries that are not purely written in D (i.e. those requiring a -L-lClib linker option)? There are probably other cases where a build failure won't be significant.
Jul 16 2016
On Sunday, 17 July 2016 at 04:47:40 UTC, Basile B. wrote:
> How do you plan to handle libraries that are not purely written in D (i.e.
> those requiring a -L-lClib linker option)? There are probably other cases
> where a build failure won't be significant.

Besides installing often-used dependencies in the build image, I don't know. Let's see how many there are and go from there.
Jul 16 2016
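One pragmatic way to realize the "install often-used dependencies in the build image" idea is to bake common C libraries into the worker's Docker image. A sketch (base image and Debian package names are guesses, chosen to cover a few of the linker errors reported later in the thread, e.g. zlib, X11, sodium, pq):

```dockerfile
# Hypothetical dubster worker image with common C link dependencies preinstalled.
FROM debian:jessie
RUN apt-get update && apt-get install -y --no-install-recommends \
        zlib1g-dev libx11-dev libsodium-dev libpq-dev libssl-dev \
    && rm -rf /var/lib/apt/lists/*
```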
On Sunday, 17 July 2016 at 06:19:16 UTC, Sebastiaan Koppe wrote:
> Besides installing often-used dependencies in the build image, I don't
> know. Let's see how many there are and go from there.

A new DUB property (in the package description) could solve this. That's hypothetical for now, but in case your project becomes somewhat "official", there is this solution.
Jul 17 2016
On Saturday, 16 July 2016 at 20:34:49 UTC, Sebastiaan Koppe wrote:
> [...] an autotester that will fetch dmd nightly and unittest each dub
> package. [...]

Perhaps this code could also be used to find dub packages which are not currently compiling and mark them on code.dlang.org? That is a feature people have been asking for for a while.
Jul 16 2016
On Saturday, 16 July 2016 at 20:34:49 UTC, Sebastiaan Koppe wrote:
> [...] an autotester that will fetch dmd nightly and unittest each dub
> package. [...]

That would be really, really great, especially if the unittests are also run in the "release" build type.
Jul 17 2016
On 2016-07-16 22:34, Sebastiaan Koppe wrote:
> [...] It will be using a classic master-worker architecture and will
> leverage docker containers. [...]

Why not use something existing, like GitLab? Although GitLab is a source code hosting system, its CI is excellent. It uses a master-worker architecture as well, with GitLab being the master and one or more runners. Runners are available for all major operating systems: macOS, Linux and Windows. For Linux a Docker runner is available. This way it could run tests on multiple platforms. The installation is straightforward with native packages.

--
/Jacob Carlborg
Jul 17 2016
On Sunday, 17 July 2016 at 13:17:45 UTC, Jacob Carlborg wrote:
> Why not use something existing, like GitLab? Although GitLab is a source
> code hosting system, its CI is excellent. [...]

I don't have a good answer for this question. It might very well be the case that going with GitLab (or similar) would be the better option.
Jul 17 2016
On Saturday, 16 July 2016 at 20:34:49 UTC, Sebastiaan Koppe wrote:
> [...] I am aiming really low at first, but will eventually add things like
> memory usage, history, notifications, etc.

Great! Maybe I can help you? Do you have a repository somewhere already?

I don't think nightlies are that important. Older releases, alpha/beta versions, LDC, and GDC seem more important to me. For example, I would like to know if a dub package which was last updated two years ago still works with the current dmd.

The hardest part is probably the work distribution. It should work across platforms, so we can (eventually) test Windows, Android, Raspberry Pi, etc.

I don't believe GitLab would be a good idea. It is not built for this and I find the CI parts quite minimal. I have some buildbot experience; it would fit, but it is a bitch to maintain. Maybe v0.9 will be better when it is finished, but development has happened at glacial speed in the last years (!).

If we build something custom, the question is: dogfooding or not? With (e.g.) Python we would have something working much quicker.
Jul 18 2016
On Monday, 18 July 2016 at 07:22:07 UTC, qznc wrote:
> Great! Maybe I can help you? Do you have a repository somewhere already?

Not yet. Let me first do some groundwork. It could take a month though.

> I don't think nightlies are that important. Older releases, alpha/beta
> versions, LDC, and GDC seem more important to me. For example, I would
> like to know if a dub package which was last updated two years ago still
> works with the current dmd.

Once the infrastructure is in place, everything is just a git commit hash. Well, kind of.

> The hardest part is probably the work distribution. It should work across
> platforms, so we can (eventually) test Windows, Android, Raspberry Pi, etc.

Like I said, I am aiming really low. On purpose. I have a wife and two kids and I need to keep the scope limited. The first step is writing something under 1kloc that works in 80% of the cases. If it would catch one regression per week/month before it ends up in a release, I would be quite happy. All the fancy stuff comes after that.

> If we build something custom, the question is: dogfooding or not? With
> (e.g.) Python we would have something working much quicker.

It should be written in D; that way everybody is a potential contributor.
Jul 18 2016
On Monday, 18 July 2016 at 09:55:04 UTC, Sebastiaan Koppe wrote:
> Not yet. Let me first do some groundwork. It could take a month though.
> I have a wife and two kids and I need to keep the scope limited.

Hey, me too. Slow and steady wins the race. ;)
Jul 18 2016
On 2016-07-18 11:55, Sebastiaan Koppe wrote:
> Like I said, I am aiming really low. On purpose. I have a wife and two
> kids and I need to keep the scope limited.

In that case, go with something that already exists.

--
/Jacob Carlborg
Jul 18 2016
On Monday, 18 July 2016 at 18:47:28 UTC, Jacob Carlborg wrote:
> In that case, go with something that already exists.

I think Martin Nowak has some sort of automated setup for testing a limited number of dub packages against each release, but I can't find the relevant post at the moment.
Jul 18 2016
On 2016-07-18 09:22, qznc wrote:
> The hardest part is probably the work distribution. It should work across
> platforms, so we can (eventually) test Windows, Android, Raspberry Pi, etc.

GitLab can handle this really easily.

> I don't believe GitLab would be a good idea. It is not built for this and
> I find the CI parts quite minimal.

No, it's not built for this, but I don't see a reason why it wouldn't work. I use GitLab extensively at work and it works great.

> I have some buildbot experience; it would fit, but it is a bitch to
> maintain.

GitLab is not :)

--
/Jacob Carlborg
Jul 18 2016
On 2016-07-16 22:34, Sebastiaan Koppe wrote:
> [...] an autotester that will fetch dmd nightly and unittest each dub
> package. [...]

Just as a test I set up a project on GitLab.com [1]. This is an example of a build [2]. The config for building looks like:

linux:
  script:
    - curl -L -o dvm https://github.com/jacob-carlborg/dvm/releases/download/v0.4.4/dvm-0.4.4-linux-debian7-x86_64
    - chmod +x dvm
    - ./dvm install dvm
    - source ~/.bashrc
    - dvm install 2.071.1
    - dvm use 2.071.1 -d
    - curl -o dub.tar.gz 'http://code.dlang.org/files/dub-1.0.0-linux-x86_64.tar.gz'
    - tar -xzf dub.tar.gz
    - ./dub test

"linux" is the name of the job and "script" is the commands that should be executed. The setup I was thinking about would work something like this:

1. Create one or more runners for each supported platform
2. Tag those in GitLab with the name of the platform
3. Create one job per platform that should be tested
4. Use the tags to specify which runner should be used, something like this [3]:

linux_x86_64:
  tags:
    - linux_x86_64
  script:
    - ./gitlab.sh

windows:
  tags:
    - windows
  script: gitlab.bat

osx:
  tags:
    - osx
  script: ./gitlab.sh

5. Update code.dlang.org to mirror the repository to GitLab when it finds an update on GitHub
6. Trigger a build using the GitLab API [4]:

curl -X POST \
  -F token=TOKEN \
  -F ref=master \
  https://gitlab.example.com/api/v3/projects/9/trigger/builds

One issue is how to get the .gitlab-ci.yml file into the repository. Since it's possible to self-host GitLab and the runners, you get full control of everything.
[1] https://gitlab.com/Carlborg/orange [2] https://gitlab.com/Carlborg/orange/builds/2450044 [3] https://gitlab.com/help/ci/yaml/README.md [4] https://gitlab.com/help/ci/triggers/README.md -- /Jacob Carlborg
Jul 18 2016
On Saturday, 16 July 2016 at 20:34:49 UTC, Sebastiaan Koppe wrote:
> [...] an autotester that will fetch dmd nightly and unittest each dub
> package. [...]

Nice :)
Jul 19 2016
I have just finished a first iteration of dubster, a test runner that runs `dub test` on each package for each dmd release. See https://github.com/skoppe/dubster

Please provide feedback as it will determine the direction/life of this tester. I am planning on adding a web ui/api next to look around in the data.

Today I gave it a spin and let it run on 488 packages on dub (about half). The component that runs `dub test` was run on a 2gb 2vcpu cloud instance. It compiled the packages with dmd 2.071.2-b2 and it took about 20 sec per package on average.

59 packages didn't build because of missing libraries. See the end for a full list of missing libraries.
13 packages caused dmd to use too much memory for the instance I was running it on.
112 packages had build errors, almost all of them with exit code 1, no segfaults.
35 packages had unittests that returned non-zero exit codes (due to exceptions and failing assertions).
213 packages passed their unittests.
The remaining 56 packages I couldn't categorize automatically so easily; I would have to take a deeper look at them.

Some issues along the way:
- code.dlang.org has an api but doesn't provide an endpoint to retrieve all packages/versions. For now I just scrape the site instead (thanks Adam for your dom implementation).
- Originally I was planning on running with nightlies, but the ones in the download section don't have a git commit hash associated with them. For now I just use digger to build the latest dmd releases on the worker nodes.
- Some packages, when running `dub test`, didn't terminate on their own.
- Linker errors (a lot of them Windows): aclui, advapi32, asound, blas, bzip2, comctl32, comdlg32, ev, fcgi, fdb_c, fmod, ftgl, gccjit, gdi32, git2, GL, gsl, gslcblas, gumbo, imm32, iup, iupcontrols, jack, Judy, kernel32, lapack, lapacke, leveldb, libco, libshp, lz32, mad, miniupnpc, mpr, mrss, mysqlclient, nanomsg, nanovg, netapi32, Netapi32, netcdf, nlopt, ole32, oleacc, oleaut32, OpenCL, powrprof, pq, rasapi32, rdkafka, rpcns4, Rpcrt4, rpcrt4, sapnwrfc, scrypt, secur32, setupapi, shell32, shlwapi, snappy, sodium, tarsnap, tcc, tcl, tcmalloc, tk, udis86, usb, user32, version, vfw32, wayland, webp, winhttp, wininet, winmm, winspool, Ws2_32, wtsapi32, X11, xcb, xkbcommon, zlib, zmq, zookeeper_mt
Aug 06 2016
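The buckets in the summary above suggest a simple classification of each `dub test` run by its exit code. A sketch of such a mapping (the codes dubster actually uses are not stated in the post; this mapping is an assumption, with 124 and 137 being the conventional exit codes for `timeout` kills and SIGKILL, e.g. out-of-memory kills):

```shell
# Classify a package's `dub test` run by exit code (hypothetical mapping).
categorize() {
  case "$1" in
    0)   echo "unittests passed" ;;
    1)   echo "build error" ;;
    124) echo "did not terminate (killed by timeout)" ;;
    137) echo "killed (e.g. out of memory)" ;;
    *)   echo "other failure (exit $1)" ;;
  esac
}
categorize 0    # a passing package
categorize 1    # a package with a build error
```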
On Saturday, 6 August 2016 at 19:06:34 UTC, Sebastiaan Koppe wrote:
> I have just finished a first iteration of dubster, a test runner that runs
> `dub test` on each package for each dmd release. See
> https://github.com/skoppe/dubster
>
> Please provide feedback as it will determine the direction/life of this
> tester. I am planning on adding a web ui/api next to look around in the
> data.

That's excellent news! Some random ideas:

1) Send the packages a notification about build errors (e.g. a GitHub comment) - this should probably be tweaked a bit, s.t. it doesn't spam too often for still-broken packages.

2) Allow easy, manual builds of special branches for the core team. E.g. let's say Walter develops the new scoped pointers feature (https://github.com/dlang/DIPs/pull/24), then it would be great to know how many packages would break by pulling in the branch (in comparison to the last release or current nightly). A similar "breakage by shipping" test might be very interesting for critical changes to druntime or phobos too.

3) Once you have the API:
a) (try to) get a shield badge (-> http://shields.io/)
b) make the data available to the dub-registry (-> https://github.com/dlang/dub-registry)

4) Assess the quality of the unittests. Probably the easiest is `dub test -b unittest-cov` and then summing up the total coverage of all generated .lst files. Running with coverage might increase your build times, though I would argue that it's worth it ;-)

5) Log your daily "broken" statistics - could be a good indicator of whether your hard work gets acknowledged.

6) Regarding linker errors - I can only redirect you to the open DUB issue (https://github.com/dlang/dub/issues/852) and DEP 5 (https://github.com/dlang/dub/wiki/DEP5).
Aug 06 2016
On Saturday, 6 August 2016 at 19:46:52 UTC, Seb wrote:
> That's excellent news!

Thanks.

> 1) Send the packages a notification about build errors (e.g. a GitHub
> comment) - this should probably be tweaked a bit, s.t. it doesn't spam too
> often for still-broken packages.

I was thinking about having people register for notifications themselves.

> 2) Allow easy, manual builds of special branches for the core team.

I need something similar for dev/testing purposes as well. Since I am using digger it is really easy to build whatever dmd + pull request is needed. The problem is controlling access.

> 3) Once you have the API:
> a) (try to) get a shield badge (-> http://shields.io/)

Nice find. Will use.

> b) make the data available to the dub-registry
> (-> https://github.com/dlang/dub-registry)

Sure.

> 4) Assess the quality of the unittests. Probably the easiest is `dub test
> -b unittest-cov` and then summing up the total coverage of all generated
> .lst files.

I am not sure this is a good idea. Besides the fact that coverage doesn't correlate with quality, it is outside the purpose of this tool (identifying dmd regressions and identifying broken packages).

> 5) Log your daily "broken" statistics - could be a good indicator of
> whether your hard work gets acknowledged.

I'd rather hear it from people than see it in the stats :)

> 6) Regarding linker errors - I can only redirect you to the open DUB issue
> (https://github.com/dlang/dub/issues/852) and DEP 5
> (https://github.com/dlang/dub/wiki/DEP5).

It is an open problem and I don't want to solve it. For now I think I will just install the most important libraries and accept that not all packages will be built.

On another note, I do think the dub package definition could use some extra fields, like compatible platforms and compatible dmd versions. Take vibe.d for instance: it is specifically built for certain dmd versions and it makes no sense for dubster to try to compile it with an unsupported version.

Also, it would allow dub itself to notify you of packages incompatible with the installed compiler. Same idea for platform.
Aug 07 2016
On Saturday, 6 August 2016 at 19:06:34 UTC, Sebastiaan Koppe wrote:
> code.dlang.org has an api but doesn't provide an endpoint to retrieve all
> packages/versions. For now I just scrape the site instead (thanks Adam for
> your dom implementation).

No endpoint, but it is still possible in two steps. For example, test this script:

°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
import std.stdio;
import std.net.curl;
import std.json;
import std.format;

void main()
{
    auto allRaw = get(`https://code.dlang.org/packages/index.json`);
    auto allJson = parseJSON(allRaw);
    enum latestFmtSpec = `https://code.dlang.org/api/packages/%s/latest`;
    foreach (p; allJson.array)
    {
        auto ver = get(latestFmtSpec.format(p.str));
        writeln(p.str, " ", ver);
    }
}
°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°

Getting the whole list via the json is faster than scraping. Getting each version will certainly be slower (even in parallel) because of the get() for each package, but it's cleaner since it uses the API.
Aug 06 2016
On Saturday, 6 August 2016 at 20:00:53 UTC, Basile B. wrote:
> No endpoint, but it is still possible in two steps.

Yeah, I considered something like that myself. I find scraping to be better for both sides though, and simpler. A good api would certainly be better.

On Saturday, 6 August 2016 at 20:08:47 UTC, Seb wrote:
> Why don't you make a PR to the dub registry
> (https://github.com/dlang/dub-registry) to get such an endpoint? Or at
> least open an issue ;-)

https://github.com/dlang/dub-registry/issues/171
Aug 07 2016
On Saturday, 6 August 2016 at 19:06:34 UTC, Sebastiaan Koppe wrote:
> code.dlang.org has an api but doesn't provide an endpoint to retrieve all
> packages/versions. For now I just scrape the site instead (thanks Adam for
> your dom implementation).

Why don't you make a PR to the dub registry (https://github.com/dlang/dub-registry) to get such an endpoint? Or at least open an issue ;-)
Aug 06 2016
On Saturday, 16 July 2016 at 20:34:49 UTC, Sebastiaan Koppe wrote:
> I am aiming really low at first, but will eventually add things like
> memory usage, history, notifications, etc.

I actually don't think this makes sense. You're not in the position to maintain 1K+ packages; it's the library owners that need to test their code. Just this short list I'm using for the project tester is hardly maintainable: https://github.com/MartinNowak/project_tester (uses Jenkins, no need to write yet another CI).

I've already thought about many different aspects of this, and here are the 2 things that are useful and might work out.

- Implement a tester that runs for every PR (just like the other testers) and tests the most popular/important dub packages. Once a day is not enough b/c we will feel responsible for breakages; we really need feedback before merging.
- Show test results of various CIs on code.dlang.org. Testing a dub package on Travis-CI is already a no-brainer. For example, the following .travis.yml would test a package against all dmd release channels.

```yaml
language: d
d: [dmd, dmd-beta, dmd-nightly]
```
Aug 07 2016
On Sunday, 7 August 2016 at 23:08:34 UTC, Martin Nowak wrote:
> I actually don't think this makes sense. You're not in the position to
> maintain 1K+ packages; it's the library owners that need to test their
> code.

Thanks for taking the time to respond.

I agree with you. Library owners should test their code themselves. But they don't. 24% of the packages don't build.

> Just this short list I'm using for the project tester is hardly
> maintainable.

I don't need to maintain anything besides linker errors. It is quite simple: I just run `dub test` and see what happens. If that doesn't work, I consider it a failed build.

> https://github.com/MartinNowak/project_tester (uses Jenkins, no need to
> write yet another CI).

I would argue mine is simpler to deploy and to have nodes join.

> Once a day is not enough b/c we will feel responsible for breakages; we
> really need feedback before merging.

It is just a matter of resources. I chose nightly since it seemed doable using just my own resources.

> - Show test results of various CIs on code.dlang.org. Testing a dub
> package on Travis-CI is already a no-brainer.

Yes, that is quite nice. But that only gets triggered when the repo is updated.

All in all I understand your reservations, and I highly appreciate your feedback. I understand I won't bring the end-all solution to testing, but I do hope to reach the goals that I have set forth for myself: 1) catching (some) regressions, 2) giving insight into bit rot on code.dlang.org, 3) having fun. It might take a couple of months before I reach them, or I might not at all.
Aug 08 2016
On 08/08/2016 09:54 AM, Sebastiaan Koppe wrote:
> Thanks for taking the time to respond.

You're welcome. This is an important topic for us.

> I agree with you. Library owners should test their code themselves. But
> they don't. 24% of the packages don't build.

We want better ranking of dub packages (mostly by downloads, but for sure also showing CI results [¹]). It's rather trivial to filter out low-quality packages b/c they're hardly used.

[¹]: https://trello.com/c/CaYJwtBV/60-integrate-ci-results-with-dub-registry

> I would argue mine is simpler to deploy and to have nodes join.

Is it already usable? How to deploy then? I need to test https://github.com/dlang/druntime/pull/1602 and otherwise have to re-setup my project tester for that. Adding more servers to Jenkins is trivial as well.

> It is just a matter of resources. I chose nightly since it seemed doable
> using just my own resources.

Yes, but from past experience we know that people don't look at results if you don't make it part of PR acceptance.

> Yes, that is quite nice. But that only gets triggered when the repo is
> updated.

Travis now allows cron scheduling; you still have to ask their support to unlock that.

-Martin
Aug 10 2016
On Wednesday, 10 August 2016 at 10:32:24 UTC, Martin Nowak wrote:
> We want better ranking of dub packages (mostly by downloads, but for sure
> also showing CI results [¹]).

I was also thinking about integrating results from CI builds that packages do themselves. But there is some 'impedance mismatch': those CI builds are done on the master branch, not on the latest release that is on code.dlang.org.

> Is it already usable?

Short answer: no. I am currently test-running it on all packages against the 10 latest dmd releases (I have done 6k packages on and off since 2 days ago). But I am running into vibe.d issues/missing features: things like not being able to use gzip with requestHTTP (let alone with a RestInterfaceClient), invalid internal state in the http client pool when interrupting requests, and some other things. Also, I am writing a PR for vibe.d to send http requests to unix sockets.

> How to deploy then?

For the worker it's just a docker container. But until the unix sockets PR is done you do have to set up the docker daemon to listen on the docker0 interface.

> I need to test https://github.com/dlang/druntime/pull/1602 and otherwise
> have to re-setup my project tester for that.

I am using digger to build dmd, so adding in the pull request is trivial. I do need to adjust the internals to properly handle it though. But alas, family is coming over, so don't expect anything anytime soon.

> Yes, but from past experience we know that people don't look at results if
> you don't make it part of PR acceptance.

So true. Then I will do PRs first.
Aug 10 2016
On Wednesday, 10 August 2016 at 18:35:03 UTC, Sebastiaan Koppe wrote:
> So true. Then I will do PRs first.

Thinking about it, you could also opt for integrating it with the dmd PR flow - in a similar manner to the autotester or coverage bot. Select a subset (depending on the runtime) of packages, run your dub autotester for every commit, and thus save for every commit a list of passing packages. Now for a new PR, search for the master commit hash in your DB of runs and run the dub autotester with those packages.

The workflow of the AutoTester [1] is a bit more complicated, because it throws away results as soon as the master HEAD changes (to avoid any inconsistencies), and there are often rebases and additional pushes happening, but you could just opt for a simple 80% solution.

I imagine shouting at Walter with a GitHub comment "Hey, this PR will break 10% of all packages [of the subset]" could be quite helpful ;-)

[1] https://auto-tester.puremagic.com
Aug 10 2016
On Wednesday, 10 August 2016 at 18:35:03 UTC, Sebastiaan Koppe wrote:
> So true. Then I will do PRs first.

I finally got around to implementing running dmd/druntime/phobos pull requests against all dub packages. Thank you, digger, for making it so easy.

Compared with a batch from 2.071.2-b2, 108 packages had a different build result. I have no nice stats or pictures, but a quick glance over the raw data:

- 50 of them went from green unittests to dmd exit code 1.
- 16 went from unknown build results to dmd exit code 1.
- 10 went from dmd exit code 255 to 1.
- 9 of them are now green.
- 8 of them went from linker errors to dmd exit code 1.
- 6 of them went from a non-zero exit code during the unittest run to dmd exit code 1.
- 3 previously ran out of memory but now result in dmd exit code 1.
- etc.

All in all I think +/- 96 packages are affected. A little over 11%.
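For the curious: tallies like the above fall out of pairing each package's old and new build status and counting the distinct transitions. A minimal sketch (the status labels and sample packages are illustrative, not dubster's actual categories):

```python
# Count status transitions between two batches of build results.
from collections import Counter

before = {"pkg-a": "green", "pkg-b": "linker error", "pkg-c": "exit 255"}
after  = {"pkg-a": "exit 1", "pkg-b": "exit 1", "pkg-c": "exit 1"}

transitions = Counter((before[p], after[p]) for p in before)
for (old, new), n in transitions.most_common():
    print(f"{n} went from {old} to {new}")
```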
Aug 22 2016
On Monday, 22 August 2016 at 20:44:05 UTC, Sebastiaan Koppe wrote:
> On Wednesday, 10 August 2016 at 18:35:03 UTC, Sebastiaan Koppe wrote:
>> So true. Then I will do PRs first.
>
> I finally got around to implementing running dmd/druntime/phobos pull requests against all dub packages. Thank you, digger, for making it so easy. Compared with a batch from 2.071.2-b2, 108 packages had a different build result. I have no nice stats or pictures, but a quick glance over the raw data: 50 of them went from green unittests to dmd exit code 1. 16 went from unknown build results to dmd exit code 1. 10 went from dmd exit code 255 to 1. 9 of them are now green. 8 of them went from linker errors to dmd exit code 1. 6 of them went from a non-zero exit code during the unittest run to dmd exit code 1. 3 previously ran out of memory but now result in dmd exit code 1. etc. All in all I think +/- 96 packages are affected. A little over 11%.

That's awesome to know! How difficult would it be to integrate it with the dlang GitHub PR workflow?

I am just shooting an idea that popped into my head: we already use CircleCi and Travis for the dlang repos, so if we lock the packages to a fixed version (to prevent failures caused by the package authors), we might be able to create a simple file like:

```
vibe.d==0.7.29
mir==0.16.3
...
```

We could select a subset (e.g. 50-100), s.t. the runtime doesn't get exorbitant. We could then enable the checking in CircleCi with sth. similar to:

```
wget https://raw.githubusercontent.com/dlang/community-list/master/d
# (a file on GitHub, s.t. editing it is easy)
dub fetch your-fancy-tool --version="x.y.y"
dub run your-fancy-tool --config dlang-stable.packages
```

Of course CircleCi doesn't have the access rights to post back to the hook API, but you could send a notification to dlang-bot [1], which has the permissions, or let the CI error/fail. Otherwise you could of course look into setting up your own job queue (or hack with the code from the auto-tester [2]), which might be fun too.

[1] https://github.com/MartinNowak/dlang-bot
[2] https://github.com/braddr/d-tester
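The pinned "name==version" file suggested above is trivial for a runner to consume. A small sketch of the parsing step (the file format is only the one proposed in this thread, nothing official):

```python
# Parse a pinned package list of "name==version" lines into
# (package, version) pairs that a test runner could feed to dub.

def parse_pins(text):
    pins = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or line == "...":
            continue  # skip blanks, comments, and the "..." placeholder
        name, _, version = line.partition("==")
        pins.append((name, version))
    return pins

sample = """vibe.d==0.7.29
mir==0.16.3
"""
print(parse_pins(sample))  # → [('vibe.d', '0.7.29'), ('mir', '0.16.3')]
```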
Aug 26 2016
On Friday, 26 August 2016 at 18:52:17 UTC, Seb wrote:
> That's awesome to know! How difficult would it be to integrate it with the dlang GitHub PR workflow?

Not at all; I just need an API key from someone with administration access. But let's not get ahead of ourselves. Right now I am just planning to contact the GitHub API. Still, there is some work to be done first: purging and updating the job queue when PRs are updated, and probably some other cases. There are also some choices left regarding the interpretation of the results. Right now, for pull requests, I do a diff with the latest dmd release and collect all the packages that have a different outcome. It would be better to run the comparison against the commit the pull request was based on, although that requires building twice as much. Currently I am focused on a simple frontend to give people a view into the results. It is coming along quite nicely.

> I am just shooting an idea that popped into my head: we already use CircleCi and Travis for the dlang repos, so if we lock the packages to a fixed version (to prevent failures caused by the package authors), we might be able to create a simple file like:
>
> ```
> vibe.d==0.7.29
> mir==0.16.3
> ...
> ```
>
> We could select a subset (e.g. 50-100), s.t. the runtime doesn't get exorbitant. We could then enable the checking in CircleCi with sth. similar to:
>
> ```
> wget https://raw.githubusercontent.com/dlang/community-list/master/d
> # (a file on GitHub, s.t. editing it is easy)
> dub fetch your-fancy-tool --version="x.y.y"
> dub run your-fancy-tool --config dlang-stable.packages
> ```
>
> Of course CircleCi doesn't have the access rights to post back to the hook API, but you could send a notification to dlang-bot [1], which has the permissions, or let the CI error/fail. Otherwise you could of course look into setting up your own job queue (or hack with the code from the auto-tester [2]), which might be fun.
>
> [1] https://github.com/MartinNowak/dlang-bot
> [2] https://github.com/braddr/d-tester

I already have my own queue. The important part, though, is a place to keep the results and a way to run queries against them. Currently I use various regexes against dub test's output and categorise accordingly. What I am really happy about is the aggressive caching: it allows me to build and unit test a package in 10 seconds on average.
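The categorisation step is roughly: run a list of regexes over the `dub test` output and take the first category that matches. A hedged sketch of that idea (the patterns below are guesses for illustration, not dubster's real ones; only the druntime success message is a string dmd's test runner actually prints):

```python
# Categorise `dub test` output by matching it against an ordered list
# of regexes; the first match wins, unmatched output is "unknown".
import re

CATEGORIES = [
    ("linker error", re.compile(r"undefined reference|ld: ")),
    ("out of memory", re.compile(r"out of memory", re.IGNORECASE)),
    ("unittest failure", re.compile(r"unittest.*failed", re.IGNORECASE)),
    ("success", re.compile(r"All unit tests have been run successfully")),
]

def categorise(output):
    for name, pattern in CATEGORIES:
        if pattern.search(output):
            return name
    return "unknown"

print(categorise("All unit tests have been run successfully."))  # → success
```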
Aug 27 2016