
D.gnu - Set-up Buildbot for GDC

reply Iain Buclaw <ibuclaw gdcproject.org> writes:
Hi,

After some lamenting I've finally gotten round to setting up a 
buildbot for gdc.

https://buildbot.dgnu.org

Fitted with a smorgasbord of cross compiler builds, all of them 
are broken in one way or another, given the definition of "not 
broken" being:

  - Can build a usable compiler: all should be green here.
  - Can build druntime and libphobos: I think only ARM holds this 
title.
  - Passes the testsuite with zero failures: There exist tests 
that ICE on certain targets.

It is my hope this encourages fixing broken targets, and flags up 
build regressions immediately, not months down the line.

Build configuration is available on github.  Using docker so that 
anyone can set up and debug failing builds locally.

https://github.com/D-Programming-GDC/buildbot-gdc


Regards,
Iain.
Jul 02 2017
parent reply Johannes Pfau <nospam example.com> writes:
On Sun, 02 Jul 2017 19:53:05 +0000,
Iain Buclaw <ibuclaw gdcproject.org> wrote:

 Hi,
 
 After some lamenting I've finally gotten round to setting up a 
 buildbot for gdc.
 
 https://buildbot.dgnu.org
 
 Fitted with a smorgasbord of cross compiler builds, all of them 
 are broken in one way or another, given the definition of "not 
 broken" being:
 
   - Can build a usable compiler: all should be green here.
   - Can build druntime and libphobos: I think only ARM holds this 
 title.
   - Passes the testsuite with zero failures: There exist tests 
 that ICE on certain targets.
 
 It is my hope this encourages fixing broken targets, and flags up 
 build regressions immediately, not months down the line.
 
 Build configuration is available on github.  Using docker so that 
 anyone can set up and debug failing builds locally.
 
 https://github.com/D-Programming-GDC/buildbot-gdc
 
 
 Regards,
 Iain.
Nice. I had a similar idea but only managed to read through the first 120
pages of the buildbot manual last weekend ;-)

Buildbot sounds really interesting as it seems to be highly customizable.
The most complicated but also most useful part seems to be somehow using
multiple workers for one build step. This is required for advanced cross
compiler testing, though in a simple setup we can just dedicate one ARM
machine to an X86 machine and keep the ARM machine out of the buildbot
configuration. The X86 machine can then use standard dejagnu+ssh to
connect to the ARM machine.

Another interesting thing is the libvirt support, so we should be able to
test in virtualized QEMU VMs. For some architectures hardware for testing
seems hard to come by (ARM and MIPS is easy, but SH4, PPC, ...). One thing
buildbot doesn't seem to support out of the box is a docker latent worker
hosted in a libvirt latent worker, but I guess this shouldn't be difficult
to add.

Regarding failing builds, it seems like the new ARM arm-d.c file is just
missing an import. AArch64, s390, hppa, ppc, sh4, mips and alpha are OK
(as in they fail in druntime). For gnuspe and sparc64 it seems like the
new config mechanism is broken.

BTW: Do you know if there's any way to cluster builds by branch on the
buildbot main page? I haven't gotten that far in the docs yet ;-)

--
Johannes
Jul 04 2017
parent reply "Iain Buclaw via D.gnu" <d.gnu puremagic.com> writes:
On 4 July 2017 at 12:10, Johannes Pfau via D.gnu <d.gnu puremagic.com> wrote:
 Nice. I had a similar idea but only managed to read through the first
 120 pages of the buildbot manual last weekend ;-) Buildbot sounds
 really interesting as it seems to be highly customizable. The most
 complicated but also useful part seems to somehow use multiple workers
 for one build step. This is required for advanced cross compiler
 testing, though in a simple setup we can just dedicate one ARM machine
 to a X86 machine and simply keep the ARM machine out of the buildbot
 configuration. The X86 can then use standard dejagnu+ssh to connect to
 the ARM machine.
Yeah, I think just using dejagnu + remote testing would be the way to go.
This can be part of the buildci.sh script (which should be merged with
semaphoreci.sh).

In any case, there should probably be a way to disable/skip runnable tests
if host != target.

https://gcc.gnu.org/onlinedocs/gccint/Directives.html#Directives

  { dg-do run [{ target $host_triplet }] }

Probably not the wisest thing to do though.
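As a rough illustration of the dejagnu+ssh idea (not what buildci.sh does
today): the check target and the "arm-remote" board name below are
assumptions, and a matching DejaGnu board file pointing rsh_prog/rcp_prog
at ssh/scp on the ARM machine would need to exist.

  # Hypothetical: run the D testsuite from the build directory against a
  # remote ARM board described by an assumed DejaGnu board file.
  cd build
  make check-d RUNTESTFLAGS="--target_board=arm-remote"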
 Another interesting thing is support of libvirt, so we should be able
 to test in virtualized QEMU VMs. For some architectures hardware for
 testing seems hard to come by (ARM and MIPS is easy, but SH4, PPC, ...).
 One thin buildbot doesn't seem to support out of the box is a docker
 latent worker hosted in a libvirt latent worker, but I guess this
 shouldn't be difficult to add.
How do these look?

https://www.scaleway.com/armv8-cloud-servers/

They have ARMv7s too. I'm currently paying around 18€ a month for the
current linode box - 50G SSD, 4G memory, 2x cores. For the same price, I
could order the following kit for use as build workers:

ARMv8: 50G SSD, 2G memory, 4x cores.   (aarch64 native)
ARMv7: 50G SSD, 2G memory, 4x cores.   (arm native)
x86_64: 100G SSD, 4G memory, 4x cores. (all cross compilers)

Then move all hosted sites + database stuff to a cheap box:

x86_64: 50G SSD, 2G memory, 2x cores.

Should all come to 14€ a month, unless there are hidden costs. :-) Could
even add another cheap x86 box and have that do x86_64 native builds as
well - but semaphoreCI is doing well enough for that purpose so far...
 Regarding failing builds, seems like the new ARM arm-d.c file is just
 missing an import. AArch64, s390, hppa, ppc, sh4, mips and alpha are OK
 (as in fail in druntime). For gnuspe and sparc64 seems like the new
 config mechanism is broken.
Yep, as this is dockerized, anyone can build + debug locally, so the
patches should be fixed asap. Also, if you look at the runtests (skipping
over the unresolved bits), you'll see that a few of them ICE. I've noticed
this for aarch64, maybe others do the same. This is another thing that
should be investigated and fixed.

In other words, I would like to get all of them green, but I only have so
much time. I'd better start by filtering out the noise first. :-)
 BTW: Do you know if there's any way to cluster builds by branch on the
 buildbot main page? I haven't gotten that far in the docs yet ;-)
Doesn't look like it. https://github.com/buildbot/buildbot/blob/0d44f0344ff82b707d02c75871df23c5f6b9cb8f/www/base/src/app/home/home.tpl.jade#L18-L24 Regards, Iain.
Jul 04 2017
parent reply Johannes Pfau <nospam example.com> writes:
On Tue, 4 Jul 2017 20:42:52 +0200,
"Iain Buclaw via D.gnu" <d.gnu puremagic.com> wrote:

 How do these look?
 https://www.scaleway.com/armv8-cloud-servers/
 They also have ARMv7's too, I'm currently paying around 18€ a month
 for the current linode box - 50G SSD, 4G memory, 2x cores.  For the
 same price, could order the following kit for use as build workers.
 ARMv8: 50G SSD, 2G memory, 4x cores.  (aarch64 native)
 ARMv7: 50G SSD, 2G memory, 4x cores.  (arm native)
 x86_64: 100G SSD, 4G memory, 4x cores. (all cross compilers)
The ARMs look good. I'm not sure about the X86_64: the 'VPS' are 'shared',
but I haven't found any information on what exactly they share (shared
cores?). Additionally, some 2016 reports say the VPS use Intel Atom cores
and only the more expensive 'Bare Metal' plans use Xeon cores. I haven't
found any official information on the homepage though, so we might have to
ask, or just try and check the available CPU resources / build speed.
 Yep, as this is dockerized, anyone can build + debug locally.  This
 the patches should be fixed asap.  Also, if you look at the runtests
 (skipping over the unresolved bits), you'll see that a few of them
 ICE.  I've noticed this for aarch64, maybe others do the same.  This
 is another thing that should be investigated and fixed.
 In other words, I would like to get all of them green, but only have
 so much time. I better start by filtering out the noise first. :-)
I'll try to set up a local builder for debugging later this week or next weekend and see if I can reduce some bugs.
 BTW: Do you know if there's any way to cluster builds by branch on
 the buildbot main page? I haven't gotten that far in the docs
 yet ;-)
 Doesn't look like it.
 https://github.com/buildbot/buildbot/blob/0d44f0344ff82b707d02c75871df23c5f6b9cb8f/www/base/src/app/home/home.tpl.jade#L18-L24

OK, then this is something to look into (a lot) later. I guess buildbot
should allow setting up custom sub pages so there's likely some way to
implement a per-branch overview.

--
Johannes
Jul 04 2017
next sibling parent reply "Iain Buclaw via D.gnu" <d.gnu puremagic.com> writes:
On 4 July 2017 at 23:05, Johannes Pfau via D.gnu <d.gnu puremagic.com> wrote:
 Am Tue, 4 Jul 2017 20:42:52 +0200
 schrieb "Iain Buclaw via D.gnu" <d.gnu puremagic.com>:

 How do these look?

 https://www.scaleway.com/armv8-cloud-servers/

 They also have ARMv7's too, I'm currently paying around 18€ a month
 for the current linode box - 50G SSD, 4G memory, 2x cores.  For the
 same price, could order the following kit for use as build workers.

 ARMv8: 50G SSD, 2G memory, 4x cores.  (aarch64 native)
 ARMv7: 50G SSD, 2G memory, 4x cores.  (arm native)
 x86_64: 100G SSD, 4G memory, 4x cores. (all cross compilers)
The ARMs look good. I'm not sure about the X86_64: The 'VPS' are 'shared' but I haven't found any information what exactly they share (shared cores?). Additionally some 2016 reports say the VPS use Intel Atom cores and only the more expensive 'Bare Metal' plans use Xeon Cores. I haven't found any official information on the homepage though so we might have to ask or just try and check available CPU resources / build speed.
Well, the linode is running on a shared server as well (it's been given 2
cores of a 12-core Xeon), so there's not much difference there. It seems
that scaleway specifically use Avotons for their low-end bare metal
servers (I did a quick look-up for comparison:
http://ark.intel.com/compare/81908,77982).

I guess for the cross compilers, speed doesn't really bother me all that
much. I'll probably just leave them to build master only anyway; I don't
see it as necessary to have instant feedback from PRs with them, an FYI
that a recent change broke something is enough - remember, there are 17
configurations to test! (Though I turned off half of them due to lack of
disk space.) For native builds, having a policy of "must never break"
seems more reasonable.
 Yep, as this is dockerized, anyone can build + debug locally.  This
 the patches should be fixed asap.  Also, if you look at the runtests
 (skipping over the unresolved bits), you'll see that a few of them
 ICE.  I've noticed this for aarch64, maybe others do the same.  This
 is another thing that should be investigated and fixed.

 In other words, I would like to get all of them green, but only have
 so much time. I better start by filtering out the noise first. :-)
I'll try to setup a local builder for debugging later this week or next weekend and see if I can reduce some bugs.
I hope that it should be super easy - I used docker 17.06 and docker-compose 1.8. Should just be `docker-compose up worker-cross` to fire up the build environment. :-)
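In full, something like this, assuming nothing beyond docker and
docker-compose is needed on the host:

  # Grab the build configuration and start the cross-compile worker
  # environment locally, as described above.
  git clone https://github.com/D-Programming-GDC/buildbot-gdc.git
  cd buildbot-gdc
  docker-compose up worker-cross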
 BTW: Do you know if there's any way to cluster builds by branch on
 the buildbot main page? I haven't gotten that far in the docs
 yet ;-)
Doesn't look like it. https://github.com/buildbot/buildbot/blob/0d44f0344ff82b707d02c75871df23c5f6b9cb8f/www/base/src/app/home/home.tpl.jade#L18-L24
OK, then this is something to look into (a lot) later. I guess buildbot should allow setting up custom sub pages so there's likely some way to implement a per-branch overview.
I had a quick look to see if gdb or anyone else does this - the answer is
that it seems like they don't, it's just all bundled in together.

https://gdb-build.sergiodj.net/grid
http://buildbot.python.org/all/grid
http://buildbot.nektar.info/grid

What you see are the build results for the last X changes, irrespective of
branch.

Regards,
Iain.
Jul 04 2017
parent Martin Nowak <code dawg.eu> writes:
On Tuesday, 4 July 2017 at 22:45:45 UTC, Iain Buclaw wrote:
 The ARMs look good. I'm not sure about the X86_64: The 'VPS' 
 are
 'shared' but I haven't found any information what exactly they 
 share
 (shared cores?). Additionally some 2016 reports say the VPS use
 Intel Atom cores and only the more expensive 'Bare Metal' 
 plans use Xeon
 Cores.
Yes, it's Avotons; furthermore, many of the virtualized Scaleway x86_64
servers share a network-connected storage controller among many CPU
sockets (called LSSD). While this allows migration, it also adds some
latency.

I'm running the nightly build server on a bare-metal Avoton - about 3x
slower single-core than a Xeon, but you get 8 of 'em and full
virtualization support (https://www.online.net/en/dedicated-server/dedibox-xc).
The Jenkins at ci.dlang.org is running on a
https://www.online.net/en/dedicated-server/dedibox-lt, though it idles so
much that I'm considering using EC2 spot instances as workers
(https://github.com/dlang/ci/issues/37).
Jul 05 2017
prev sibling next sibling parent reply Johannes Pfau <nospam example.com> writes:
On Tue, 4 Jul 2017 23:05:23 +0200,
Johannes Pfau <nospam example.com> wrote:

 I'll try to setup a local builder for debugging later this week or
 next weekend and see if I can reduce some bugs.
The ARM config patch problem can be fixed by including arm-protos.h:

#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "target.h"
#include "arm-protos.h"
#include "d/d-target.h"
#include "d/d-target-def.h"

/* Implement TARGET_D_CPU_VERSIONS for ARM targets.  */

--
Johannes
Jul 05 2017
next sibling parent reply Johannes Pfau <nospam example.com> writes:
On Wed, 5 Jul 2017 12:18:28 +0200,
Johannes Pfau <nospam example.com> wrote:

 Am Tue, 4 Jul 2017 23:05:23 +0200
 schrieb Johannes Pfau <nospam example.com>:
 
 I'll try to setup a local builder for debugging later this week or
 next weekend and see if I can reduce some bugs.  
The ARM config patch problem can be fixed by including arm-protos.h:
SPARC needs #include "memmodel.h" in glibc-d.c (or probably
sparc-protos.h should be fixed to include memmodel instead, as the
sparc-protos.h file actually depends on memmodel).

powerpc-linux-gnuspe needs a #include "target.h" in
config/powerpcspe/powerpcspe-d.c.

This should fix all gdc buildbot builds. Next step after fixing these
build errors is probably adding a gdc -v call to the buildci script after
make all-gcc, so we can easily check whether predefined versions are
defined correctly.

--
Johannes
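Roughly along these lines; whether plain -v is enough to surface the
predefined version identifiers, or a small test module has to be compiled
with verbose output, is something to verify - this is only a sketch and
the paths are illustrative:

  # Sketch: after the stage-1 compiler is built, record its target and
  # configuration in the build log; compiling a trivial module with -v is
  # intended to expose what the front end predefines.
  make all-gcc
  ./gcc/gdc -v
  echo 'module verscheck;' > verscheck.d
  ./gcc/gdc -B./gcc -v -fsyntax-only verscheck.d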
Jul 05 2017
parent "Iain Buclaw via D.gnu" <d.gnu puremagic.com> writes:
On 5 July 2017 at 15:33, Johannes Pfau via D.gnu <d.gnu puremagic.com> wrote:
 Am Wed, 5 Jul 2017 12:18:28 +0200
 schrieb Johannes Pfau <nospam example.com>:

 Am Tue, 4 Jul 2017 23:05:23 +0200
 schrieb Johannes Pfau <nospam example.com>:

 I'll try to setup a local builder for debugging later this week or
 next weekend and see if I can reduce some bugs.
The ARM config patch problem can be fixed by including arm-protos.h:
SPARC needs #include "memmodel.h" in glibc-d.c (Or probably sparc-protos.h should be fixed to include memmodel instead, as the sparc-protos.h file actually depends on memmodel).
Actually, including memmodel.h sounds correct; I've done it explicitly for d-target.cc a couple of times.
 powerpc-linux-gnuspe needs a #include "target.h" in
 config/powerpcspe/powerpcspe-d.c
Added "tm.h".
 This should fix all gdc buildbot builds. Next step after fixing
 these build errors is probably adding a gdc -v call to the buildci
 script after make all-gcc so we can easily check whether predefined
 versions are defined correctly.
OK. Iain.
Jul 05 2017
prev sibling parent "Iain Buclaw via D.gnu" <d.gnu puremagic.com> writes:
On 5 July 2017 at 12:18, Johannes Pfau via D.gnu <d.gnu puremagic.com> wrote:
 Am Tue, 4 Jul 2017 23:05:23 +0200
 schrieb Johannes Pfau <nospam example.com>:

 I'll try to setup a local builder for debugging later this week or
 next weekend and see if I can reduce some bugs.
The ARM config patch problem can be fixed by including arm-protos.h:

 #include "config.h"
 #include "system.h"
 #include "coretypes.h"
 #include "target.h"
 #include "arm-protos.h"
 #include "d/d-target.h"
 #include "d/d-target-def.h"

 /* Implement TARGET_D_CPU_VERSIONS for ARM targets.  */

 --
 Johannes
Hmm, ok. That should probably be tm_p.h then. I've s/target.h/tm.h/ in all patched sources. That's all we need to pull in for CPU related information. Iain.
Jul 05 2017
prev sibling parent reply "Iain Buclaw via D.gnu" <d.gnu puremagic.com> writes:
On 4 July 2017 at 23:05, Johannes Pfau via D.gnu <d.gnu puremagic.com> wrote:
 Am Tue, 4 Jul 2017 20:42:52 +0200
 schrieb "Iain Buclaw via D.gnu" <d.gnu puremagic.com>:
 BTW: Do you know if there's any way to cluster builds by branch on
 the buildbot main page? I haven't gotten that far in the docs
 yet ;-)
Doesn't look like it. https://github.com/buildbot/buildbot/blob/0d44f0344ff82b707d02c75871df23c5f6b9cb8f/www/base/src/app/home/home.tpl.jade#L18-L24
OK, then this is something to look into (a lot) later. I guess buildbot should allow setting up custom sub pages so there's likely some way to implement a per-branch overview.
I've turned on gridview, and it looks close to what you were asking for, I think.

https://buildbot.dgnu.org/#/grid?branch=master

Iain
Jul 06 2017
parent reply Johannes Pfau <nospam example.com> writes:
On Thu, 6 Jul 2017 10:06:45 +0200,
"Iain Buclaw via D.gnu" <d.gnu puremagic.com> wrote:

 
 I've turned on gridview, and it looks like something close to what you
 are asking I think.
 
 https://buildbot.dgnu.org/#/grid?branch=master
 
Looks great!

BTW: Any idea why buildbot built only 7 builds this time? The armhf
failure also looks interesting; I'm just checking whether I can reproduce
it locally.

--
Johannes
Jul 06 2017
parent reply Johannes Pfau <nospam example.com> writes:
On Fri, 7 Jul 2017 00:52:20 +0200,
Johannes Pfau <nospam example.com> wrote:

 Am Thu, 6 Jul 2017 10:06:45 +0200
 schrieb "Iain Buclaw via D.gnu" <d.gnu puremagic.com>:
 
 
 I've turned on gridview, and it looks like something close to what
 you are asking I think.
 
 https://buildbot.dgnu.org/#/grid?branch=master
   
Looks great! BTW: Any idea why buildbot built only 7 builds this time? The armhf failure looks also interesting, I'm just checking whether I can reproduce this locally.
OK, can reproduce. I think it's this in the configure log:

configure:7064: /buildbot/GDC/build/./gcc/xgcc -B/buildbot/GDC/build/./gcc/ -B/usr/arm-linux-gnueabihf/bin/ -B/usr/arm-linux-gnueabihf/lib/ -isystem /usr/arm-linux-gnueabihf/include -isystem /usr/arm-linux-gnueabihf/sys-include -o conftest -g -O2 conftest.c conftstm.o >&5
/usr/arm-linux-gnueabihf/bin/ld: error: conftest uses VFP register arguments, /buildbot/GDC/build/./gcc/crtbegin.o does not
/usr/arm-linux-gnueabihf/bin/ld: failed to merge target specific data of file /buildbot/GDC/build/./gcc/crtbegin.o
/usr/arm-linux-gnueabihf/bin/ld: error: conftest uses VFP register arguments, /tmp/cczQi3ST.o does not
/usr/arm-linux-gnueabihf/bin/ld: failed to merge target specific data of file /tmp/cczQi3ST.o
/usr/arm-linux-gnueabihf/bin/ld: error: conftest uses VFP register arguments, conftstm.o does not
/usr/arm-linux-gnueabihf/bin/ld: failed to merge target specific data of file conftstm.o
/usr/arm-linux-gnueabihf/bin/ld: error: conftest uses VFP register arguments, /buildbot/GDC/build/./gcc/crtend.o does not
/usr/arm-linux-gnueabihf/bin/ld: failed to merge target specific data of file /buildbot/GDC/build/./gcc/crtend.o
collect2: error: ld returned 1 exit status

Looks like the GCC configuration does not match the ubuntu
arm-linux-gnueabihf-gcc specification exactly.

--
Johannes
Jul 06 2017
parent reply "Iain Buclaw via D.gnu" <d.gnu puremagic.com> writes:
On 7 July 2017 at 00:57, Johannes Pfau via D.gnu <d.gnu puremagic.com> wrote:
 Am Fri, 7 Jul 2017 00:52:20 +0200
 schrieb Johannes Pfau <nospam example.com>:

 Am Thu, 6 Jul 2017 10:06:45 +0200
 schrieb "Iain Buclaw via D.gnu" <d.gnu puremagic.com>:

 I've turned on gridview, and it looks like something close to what
 you are asking I think.

 https://buildbot.dgnu.org/#/grid?branch=master
Looks great! BTW: Any idea why buildbot built only 7 builds this time? The armhf failure looks also interesting, I'm just checking whether I can reproduce this locally.
OK, can reproduce. I think it's this in the configure log: configure:7064: /buildbot/GDC/build/./gcc/xgcc -B/buildbot/GDC/build/./gcc/ -B/usr/arm-linux-gnueabihf/bin/ -B/usr/arm-linux-gnueabihf/lib/ -isystem /usr/arm-linux-gnueabihf/include -isystem /usr/arm-linux-gnueabihf/sys-include -o conftest -g -O2 conftest.c conftstm.o >&5 /usr/arm-linux-gnueabihf/bin/ld: error: conftest uses VFP register arguments, /buildbot/GDC/build/./gcc/crtbegin.o does not /usr/arm-linux-gnueabihf/bin/ld: failed to merge target specific data of file /buildbot/GDC/build/./gcc/crtbegin.o /usr/arm-linux-gnueabihf/bin/ld: error: conftest uses VFP register arguments, /tmp/cczQi3ST.o does not /usr/arm-linux-gnueabihf/bin/ld: failed to merge target specific data of file /tmp/cczQi3ST.o /usr/arm-linux-gnueabihf/bin/ld: error: conftest uses VFP register arguments, conftstm.o does not /usr/arm-linux-gnueabihf/bin/ld: failed to merge target specific data of file conftstm.o /usr/arm-linux-gnueabihf/bin/ld: error: conftest uses VFP register arguments, /buildbot/GDC/build/./gcc/crtend.o does not /usr/arm-linux-gnueabihf/bin/ld: failed to merge target specific data of file /buildbot/GDC/build/./gcc/crtend.o collect2: error: ld returned 1 exit status Looks like the GCC configuration does not match the ubuntu arm-linux-gnueabihf-gcc specification exactly.
Could it be missing --with-float= or --with-fpu configure flag perhaps? I'm just finishing up a few changes to the build scripts that turn off building phobos. Can add another check for extra configure flags to be set per-target. Iain.
Jul 06 2017
parent reply Johannes Pfau <nospam example.com> writes:
On Fri, 7 Jul 2017 01:12:03 +0200,
"Iain Buclaw via D.gnu" <d.gnu puremagic.com> wrote:

 On 7 July 2017 at 00:57, Johannes Pfau via D.gnu
 <d.gnu puremagic.com> wrote:
 Am Fri, 7 Jul 2017 00:52:20 +0200
 schrieb Johannes Pfau <nospam example.com>:
  
 Am Thu, 6 Jul 2017 10:06:45 +0200
 schrieb "Iain Buclaw via D.gnu" <d.gnu puremagic.com>:
  
 I've turned on gridview, and it looks like something close to
 what you are asking I think.

 https://buildbot.dgnu.org/#/grid?branch=master
  
Looks great! BTW: Any idea why buildbot built only 7 builds this time? The armhf failure looks also interesting, I'm just checking whether I can reproduce this locally.
OK, can reproduce. I think it's this in the configure log: configure:7064: /buildbot/GDC/build/./gcc/xgcc -B/buildbot/GDC/build/./gcc/ -B/usr/arm-linux-gnueabihf/bin/ -B/usr/arm-linux-gnueabihf/lib/ -isystem /usr/arm-linux-gnueabihf/include -isystem /usr/arm-linux-gnueabihf/sys-include -o conftest -g -O2 conftest.c conftstm.o >&5 /usr/arm-linux-gnueabihf/bin/ld: error: conftest uses VFP register arguments, /buildbot/GDC/build/./gcc/crtbegin.o does not /usr/arm-linux-gnueabihf/bin/ld: failed to merge target specific data of file /buildbot/GDC/build/./gcc/crtbegin.o /usr/arm-linux-gnueabihf/bin/ld: error: conftest uses VFP register arguments, /tmp/cczQi3ST.o does not /usr/arm-linux-gnueabihf/bin/ld: failed to merge target specific data of file /tmp/cczQi3ST.o /usr/arm-linux-gnueabihf/bin/ld: error: conftest uses VFP register arguments, conftstm.o does not /usr/arm-linux-gnueabihf/bin/ld: failed to merge target specific data of file conftstm.o /usr/arm-linux-gnueabihf/bin/ld: error: conftest uses VFP register arguments, /buildbot/GDC/build/./gcc/crtend.o does not /usr/arm-linux-gnueabihf/bin/ld: failed to merge target specific data of file /buildbot/GDC/build/./gcc/crtend.o collect2: error: ld returned 1 exit status Looks like the GCC configuration does not match the ubuntu arm-linux-gnueabihf-gcc specification exactly.
Could it be missing --with-float= or --with-fpu configure flag perhaps? I'm just finishing up a few changes to the build scripts that turn off building phobos. Can add another check for extra configure flags to be set per-target. Iain.
Yes, it's likely one of these. I can't test this right now, but the
simplest way is running the ubuntu arm-linux-gnueabihf-gcc -v and copying
the configuration. We should generally use exactly the same configuration
as ubuntu to get reliable results (otherwise the binutils/libc we use from
the distribution might not be compatible with the compiler/libgcc/libstdc++,...
libraries we build).

I guess explore.dgnu.org uses the same configuration, and it uses:

Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 6.3.0-16ubuntu6' --with-bugurl=file:///usr/share/doc/gcc-6/README.Bugs --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-6 --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-libitm --disable-libquadmath --enable-plugin --with-system-zlib --disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-6-armhf-cross/jre --enable-java-home --with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-6-armhf-cross --with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-6-armhf-cross --with-arch-directory=arm --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libgcj --with-target-system-zlib --enable-objc-gc=auto --enable-multiarch --enable-multilib --disable-sjlj-exceptions --with-arch=armv7-a --with-fpu=vfpv3-d16 --with-float=hard --with-mode=thumb --disable-werror --enable-multilib --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --includedir=/usr/arm-linux-gnueabihf/include

--
Johannes
Jul 06 2017
parent reply "Iain Buclaw via D.gnu" <d.gnu puremagic.com> writes:
On 7 July 2017 at 01:37, Johannes Pfau via D.gnu <d.gnu puremagic.com> wrote:
 Am Fri, 7 Jul 2017 01:12:03 +0200
 schrieb "Iain Buclaw via D.gnu" <d.gnu puremagic.com>:

 On 7 July 2017 at 00:57, Johannes Pfau via D.gnu
 <d.gnu puremagic.com> wrote:
 Am Fri, 7 Jul 2017 00:52:20 +0200
 schrieb Johannes Pfau <nospam example.com>:

 Am Thu, 6 Jul 2017 10:06:45 +0200
 schrieb "Iain Buclaw via D.gnu" <d.gnu puremagic.com>:

 I've turned on gridview, and it looks like something close to
 what you are asking I think.

 https://buildbot.dgnu.org/#/grid?branch=master
Looks great! BTW: Any idea why buildbot built only 7 builds this time? The armhf failure looks also interesting, I'm just checking whether I can reproduce this locally.
OK, can reproduce. I think it's this in the configure log: configure:7064: /buildbot/GDC/build/./gcc/xgcc -B/buildbot/GDC/build/./gcc/ -B/usr/arm-linux-gnueabihf/bin/ -B/usr/arm-linux-gnueabihf/lib/ -isystem /usr/arm-linux-gnueabihf/include -isystem /usr/arm-linux-gnueabihf/sys-include -o conftest -g -O2 conftest.c conftstm.o >&5 /usr/arm-linux-gnueabihf/bin/ld: error: conftest uses VFP register arguments, /buildbot/GDC/build/./gcc/crtbegin.o does not /usr/arm-linux-gnueabihf/bin/ld: failed to merge target specific data of file /buildbot/GDC/build/./gcc/crtbegin.o /usr/arm-linux-gnueabihf/bin/ld: error: conftest uses VFP register arguments, /tmp/cczQi3ST.o does not /usr/arm-linux-gnueabihf/bin/ld: failed to merge target specific data of file /tmp/cczQi3ST.o /usr/arm-linux-gnueabihf/bin/ld: error: conftest uses VFP register arguments, conftstm.o does not /usr/arm-linux-gnueabihf/bin/ld: failed to merge target specific data of file conftstm.o /usr/arm-linux-gnueabihf/bin/ld: error: conftest uses VFP register arguments, /buildbot/GDC/build/./gcc/crtend.o does not /usr/arm-linux-gnueabihf/bin/ld: failed to merge target specific data of file /buildbot/GDC/build/./gcc/crtend.o collect2: error: ld returned 1 exit status Looks like the GCC configuration does not match the ubuntu arm-linux-gnueabihf-gcc specification exactly.
Could it be missing --with-float= or --with-fpu configure flag perhaps? I'm just finishing up a few changes to the build scripts that turn off building phobos. Can add another check for extra configure flags to be set per-target. Iain.
Yes, it's likely one of these. I can't test this right now, but the simplest way is running the ubuntu arm-linux-gnueabihf-gcc -v and copy the configuration. We should generally use exactly the same configuration as ubuntu to get reliable results (Otherwise the binutils/libc we use from the distribution might be not compatible with the compiler/libgcc/libstdc++,... libraries we build). I guess explore.dgnu.org uses the same configuration and it uses: Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 6.3.0-16ubuntu6' --with-bugurl=file:///usr/share/doc/gcc-6/README.Bugs --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-6 --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-libitm --disable-libquadmath --enable-plugin --with-system-zlib --disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-6-armhf-cross/jre --enable-java-home --with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-6-armhf-cross --with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-6-armhf-cross --with-arch-directory=arm --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libgcj --with-target-system-zlib --enable-objc-gc=auto --enable-multiarch --enable-multilib --disable-sjlj-exceptions --with-arch=armv7-a --with-fpu=vfpv3-d16 --with-float=hard --with-mode=thumb --disable-werror --enable-multilib --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --includedir=/usr/arm-linux-gnueabihf/include
Yeah, however 90% of that is unused by us. I've added a
BUILD_CONFIGURE_FLAGS var to the buildci.sh script.

https://github.com/D-Programming-GDC/buildbot-gdc/commit/47f8c7267682f19b3e1ce2afa49034217413451a#diff-711e8a244e68a7a465f29a18e33a22c3R34

Can add a case for armhf and set --with-fpu= later and see if that gets us
further.

Iain.
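Presumably something along these lines in buildci.sh; the case variable
name here is made up, and the flag values are simply lifted from the
Ubuntu cross compiler's configure line quoted above:

  # Hypothetical per-target case setting extra configure flags; only the
  # armhf entry is shown, copying the float ABI settings Ubuntu uses.
  case "${build_target}" in
      arm-linux-gnueabihf)
          BUILD_CONFIGURE_FLAGS="--with-arch=armv7-a --with-fpu=vfpv3-d16 --with-float=hard --with-mode=thumb"
          ;;
      *)
          BUILD_CONFIGURE_FLAGS=""
          ;;
  esac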
Jul 06 2017
parent reply David J Kordsmeier <dkords gmail.com> writes:
On Friday, 7 July 2017 at 00:05:16 UTC, Iain Buclaw wrote:
 On 7 July 2017 at 01:37, Johannes Pfau via D.gnu 
 <d.gnu puremagic.com> wrote:
 Am Fri, 7 Jul 2017 01:12:03 +0200
 schrieb "Iain Buclaw via D.gnu" <d.gnu puremagic.com>:

 On 7 July 2017 at 00:57, Johannes Pfau via D.gnu 
 <d.gnu puremagic.com> wrote:
[...]
Could it be missing --with-float= or --with-fpu configure flag perhaps? I'm just finishing up a few changes to the build scripts that turn off building phobos. Can add another check for extra configure flags to be set per-target. Iain.
Yes, it's likely one of these. I can't test this right now, but the simplest way is running the ubuntu arm-linux-gnueabihf-gcc -v and copy the configuration. We should generally use exactly the same configuration as ubuntu to get reliable results (Otherwise the binutils/libc we use from the distribution might be not compatible with the compiler/libgcc/libstdc++,... libraries we build). I guess explore.dgnu.org uses the same configuration and it uses: Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 6.3.0-16ubuntu6' --with-bugurl=file:///usr/share/doc/gcc-6/README.Bugs --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-6 --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-libitm --disable-libquadmath --enable-plugin --with-system-zlib --disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-6-armhf-cross/jre --enable-java-home --with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-6-armhf-cross --with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-6-armhf-cross --with-arch-directory=arm --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libgcj --with-target-system-zlib --enable-objc-gc=auto --enable-multiarch --enable-multilib --disable-sjlj-exceptions --with-arch=armv7-a --with-fpu=vfpv3-d16 --with-float=hard --with-mode=thumb --disable-werror --enable-multilib --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --includedir=/usr/arm-linux-gnueabihf/include
Yeah, however 90% of that is unused by us. I've added a BUILD_CONFIGURE_FLAGS var to the buildci.sh script. https://github.com/D-Programming-GDC/buildbot-gdc/commit/47f8c7267682f19b3e1ce2afa49034217413451a#diff-711e8a244e68a7a465f29a18e33a22c3R34 Can add a case for armhf and set --with-fpu= later and see if that gets us further. Iain.
Folks, I am eagerly following the work on GDC related to AARCH64/linux. I
am attempting to duplicate the results I see here:

https://buildbot.dgnu.org/#/builders/2/builds/51

Build 51 looks as if it succeeds. Following the configure logs for build
51, I see a buildci.sh script gets run. I would love to know what gcc
configure command gets run. Are the build artifacts available for
download? In particular, the config.log would be helpful.

In my own previous attempts to build on AARCH64, I hit a "not implemented"
build error in math.d on ieeeFlags, and would also probably hit
"unsupported platform". It seems that my platform doesn't resolve that it
is "arm".

So that I understand how the gdc development process works: will platform
support fixes get backported into older branches of GDC? Your build is on
master, but my question is, are AARCH64 support patches backported into
gdc-7 or gdc-6? It seems like yes, but I haven't gone through the branches
in detail yet.

Thank you for the ARM support!
Dec 07 2017
parent reply Johannes Pfau <nospam example.com> writes:
On Thu, 07 Dec 2017 21:30:04 +0000,
David J Kordsmeier <dkords gmail.com> wrote:

 On Friday, 7 July 2017 at 00:05:16 UTC, Iain Buclaw wrote:
 On 7 July 2017 at 01:37, Johannes Pfau via D.gnu 
 <d.gnu puremagic.com> wrote:  
 Am Fri, 7 Jul 2017 01:12:03 +0200
 schrieb "Iain Buclaw via D.gnu" <d.gnu puremagic.com>:
  
 On 7 July 2017 at 00:57, Johannes Pfau via D.gnu 
 <d.gnu puremagic.com> wrote:  
[...]  
Could it be missing --with-float= or --with-fpu configure flag perhaps? I'm just finishing up a few changes to the build scripts that turn off building phobos. Can add another check for extra configure flags to be set per-target. Iain.
Yes, it's likely one of these. I can't test this right now, but the simplest way is running the ubuntu arm-linux-gnueabihf-gcc -v and copy the configuration. We should generally use exactly the same configuration as ubuntu to get reliable results (Otherwise the binutils/libc we use from the distribution might be not compatible with the compiler/libgcc/libstdc++,... libraries we build). I guess explore.dgnu.org uses the same configuration and it uses: Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 6.3.0-16ubuntu6' --with-bugurl=file:///usr/share/doc/gcc-6/README.Bugs --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-6 --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-libitm --disable-libquadmath --enable-plugin --with-system-zlib --disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-6-armhf-cross/jre --enable-java-home --with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-6-armhf-cross --with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-6-armhf-cross --with-arch-directory=arm --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libgcj --with-target-system-zlib --enable-objc-gc=auto --enable-multiarch --enable-multilib --disable-sjlj-exceptions --with-arch=armv7-a --with-fpu=vfpv3-d16 --with-float=hard --with-mode=thumb --disable-werror --enable-multilib --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --includedir=/usr/arm-linux-gnueabihf/include
Yeah, however 90% of that is unused by us. I've added a BUILD_CONFIGURE_FLAGS var to the buildci.sh script. https://github.com/D-Programming-GDC/buildbot-gdc/commit/47f8c7267682f19b3e1ce2afa49034217413451a#diff-711e8a244e68a7a465f29a18e33a22c3R34 Can add a case for armhf and set --with-fpu= later and see if that gets us further. Iain.
Folks, I am eagerly following the work on GDC related to AARCH64/linux. I am attempting to duplicate results I see here: https://buildbot.dgnu.org/#/builders/2/builds/51 build 51 looks as if it succeeds. In following the configure logs for build 51, I see a buildci.sh script gets run. I would love to know what gcc configure command gets run. Are the build artifacts available for download? In particular, the config.log would be helpful. In my own previous attempts to build on AARCH64, I hit "not implemented" build error in math.d on the ieeeFlags, and would also probably hit "unsupported platform". It seems that my platform doesn't resolve that it is "arm". And so I understand how the gdc development process works, will platform support fixes get back ported into older branches of GDC? Your build is on master, but my question is, are AARCH64 support patches backported into the gdc-7 or gdc-6? It seems like yes, but I haven't gone through the branches in detail yet. Thank you for the ARM support!
Sorry to disappoint you, but I think buildbot does not build phobos on
AARCH64 yet. Also, if you look at the testsuite reports, although buildbot
says the testsuite passes, there are still 2417 unresolved testcases and
60 unsupported tests. Most of these are likely missing AARCH64 assembler
code.

Maybe I can work on AArch64 support for one or two days in the Christmas
holidays. There's probably not much missing.

And yes, currently we backport all fixes from master, including phobos
changes.

--
Johannes
Dec 07 2017
parent David J Kordsmeier <dkords gmail.com> writes:
On Friday, 8 December 2017 at 07:56:04 UTC, Johannes Pfau wrote:
 Am Thu, 07 Dec 2017 21:30:04 +0000
 schrieb David J Kordsmeier <dkords gmail.com>:

 On Friday, 7 July 2017 at 00:05:16 UTC, Iain Buclaw wrote:
 On 7 July 2017 at 01:37, Johannes Pfau via D.gnu 
 <d.gnu puremagic.com> wrote:
 [...]
Yeah, however 90% of that is unused by us. I've added a BUILD_CONFIGURE_FLAGS var to the buildci.sh script. https://github.com/D-Programming-GDC/buildbot-gdc/commit/47f8c7267682f19b3e1ce2afa49034217413451a#diff-711e8a244e68a7a465f29a18e33a22c3R34 Can add a case for armhf and set --with-fpu= later and see if that gets us further. Iain.
Folks, I am eagerly following the work on GDC related to AARCH64/linux. I am attempting to duplicate results I see here: https://buildbot.dgnu.org/#/builders/2/builds/51 build 51 looks as if it succeeds. In following the configure logs for build 51, I see a buildci.sh script gets run. I would love to know what gcc configure command gets run. Are the build artifacts available for download? In particular, the config.log would be helpful. In my own previous attempts to build on AARCH64, I hit "not implemented" build error in math.d on the ieeeFlags, and would also probably hit "unsupported platform". It seems that my platform doesn't resolve that it is "arm". And so I understand how the gdc development process works, will platform support fixes get back ported into older branches of GDC? Your build is on master, but my question is, are AARCH64 support patches backported into the gdc-7 or gdc-6? It seems like yes, but I haven't gone through the branches in detail yet. Thank you for the ARM support!
Sorry to disappoint you, but I think buildbot does not build phobos on AARCH64 yet. Also if you look at the testsuite reports, although buildbots says the testsuite passes, there are still 2417 unresolved testcases and 60 unsupported tests. Most of these are likely missing AARCH64 assembler code. Maybe I can work on AARCH support for one or two days in the christmas holidays. There's probably not much missing. And yes, currently we backport all fixes from master, including phobos changes. -- Johannes
Well, that explains everything. Yes, it was clear this is a cross
compiler. I am trying to understand what is left here.

Disappointment is relative; after all, this is 2017. Nothing can be more
disappointing than 2017. AARCH64 is not a major platform and probably
isn't worth exploring further until it is proven this platform is here to
stay. Maybe when iPhones start using AARCH64... Most people only use PC
desktop computers.

Seriously man, where do I need to send funds to make sure this gets done?
It's been a solid year that this hasn't been resolved for GDC, and we've
had AARCH64 platforms in volume since 2016. Rust and Go have resolved
their AARCH64 issues with on-target compilers as well as runtime support
in this past year. I recognize the switch to AARCH64 has been difficult
for literally every open source project. If financial incentives can help
GDC, let me know. I would like to make this a very Merry Christmas for all
of us in AARCH64-landia.
Dec 11 2017