
digitalmars.D - Make DMD emit C++ .h files same as .di files

reply Manu <turkeyman gmail.com> writes:
I've been talking about this for years, and logged this a while back:
https://issues.dlang.org/show_bug.cgi?id=19579

Is there anyone interested in or knows how to do this?
It would really be super valuable; this idea would motivate
inter-language projects that typically go C++-first-with-a-D-binding to
work the other way.

Creating a pressure to write D-code first because the binding part is
maintained automatically when you compile is potentially significant;
gets programmers thinking and writing D code as first-class.

Idea would be same as emitting .di files, but in this case only
extern(C)/extern(C++) declarations would be emitted to a .h file.
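To make that concrete: given a hypothetical module like the one below (all names invented for this example), only the extern(C)/extern(C++) parts would land in the header. The comment at the end sketches roughly what that header might look like (shape only, not any actual compiler output).

module demo;

// extern(C) and extern(C++) declarations are what a C++ consumer can see,
// so they are what would be emitted to demo.h.
extern (C) int add(int a, int b) { return a + b; }

extern (C++, ns) struct Vec2
{
    float x, y;

    float length() const
    {
        import std.math : sqrt;
        return sqrt(x * x + y * y);
    }
}

// Plain D-linkage code like this would be skipped entirely.
int dOnlyHelper(int x) { return x * 2; }

/*
Roughly the expected demo.h:

    extern "C" int add(int a, int b);

    namespace ns
    {
        struct Vec2
        {
            float x, y;
            float length() const;
        };
    }
*/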

I could use this to create some nice demos in my office which I think
would get a few people talking and excited.
Feb 24 2019
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/24/19 6:24 PM, Manu wrote:
 I've been talking about this for years, and logged this a while back:
 https://issues.dlang.org/show_bug.cgi?id=19579
 
 Is there anyone interested in or knows how to do this?
 It would really be super valuable; this idea would motivate
 inter-language projects that typically go C++-first-with-a-D-binding to
 work the other way.
 
 Creating a pressure to write D-code first because the binding part is
 maintained automatically when you compile is potentially significant;
 gets programmers thinking and writing D code as first-class.
 
 Idea would be same as emitting .di files, but in this case only
 extern(C)/extern(C++) declarations would be emitted to a .h file.
 
 I could use this to create some nice demos in my office which I think
 would get a few people talking and excited.
Yes, that would be fantastic, and something I'd advocated for a long time as well. It would in fact be a prime application of the relatively recent ability to use dmd as a library. No need to modify the compiler, just walk the AST and output the header. Should be a couple hundred lines for good effect.

Andrei
Feb 24 2019
parent reply Manu <turkeyman gmail.com> writes:
On Sun, Feb 24, 2019 at 4:05 PM Andrei Alexandrescu via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 2/24/19 6:24 PM, Manu wrote:
 I've been talking about this for years, and logged this a while back:
 https://issues.dlang.org/show_bug.cgi?id=19579

 Is there anyone interested in or knows how to do this?
 It would really be super valuable; this idea would motivate
 inter-language projects that typically go C++-first-with-a-D-binding to
 work the other way.

 Creating a pressure to write D-code first because the binding part is
 maintained automatically when you compile is potentially significant;
 gets programmers thinking and writing D code as first-class.

 Idea would be same as emitting .di files, but in this case only
 extern(C)/extern(C++) declarations would be emitted to a .h file.

 I could use this to create some nice demos in my office which I think
 would get a few people talking and excited.
Yes that would be fantastic, and something I'd advocated for a long time as well. It would be in fact a prime application of the relatively recent ability to use dmd as a library. No need to modify the compiler, just walk the AST and output the header. Should be a couple hundred lines for good effect.
Why wouldn't you do it in the same pass as the .di output?
Feb 24 2019
parent reply Jacob Carlborg <doob me.com> writes:
On 2019-02-25 02:04, Manu wrote:

 Why wouldn't you do it in the same pass as the .di output?
* Separation of concerns
* Simplifying the compiler ("simplifying" is not the correct description, rather avoid making the compiler more complex)

I think the .di generation should be a separate tool as well.

--
/Jacob Carlborg
Feb 25 2019
next sibling parent Stefan Koch <uplink.coder googlemail.com> writes:
On Monday, 25 February 2019 at 10:20:34 UTC, Jacob Carlborg wrote:
 On 2019-02-25 02:04, Manu wrote:

 Why wouldn't you do it in the same pass as the .di output?
* Separation of concerns * Simplifying the compiler ("simplifying" is not the correct description, rather avoid making the compiler more complex) I think the .di generation should be a separate tool as well.
I don't think so. .h and .di generation have to be in sync with the AST. The easiest way to guarantee that is to have it in the compiler test suite. Therefore it should be part of the main compiler. It also removes the need to add another application to your build.
Feb 25 2019
prev sibling next sibling parent Guillaume Piolat <contact spam.org> writes:
On Monday, 25 February 2019 at 10:20:34 UTC, Jacob Carlborg wrote:
 On 2019-02-25 02:04, Manu wrote:

 Why wouldn't you do it in the same pass as the .di output?
* Separation of concerns * Simplifying the compiler ("simplifying" is not the correct description, rather avoid making the compiler more complex) I think the .di generation should be a separate tool as well.
+1

Also: is there anyone else who needs this? Because it could well be a fringe need (like .di generation).
Feb 25 2019
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.com> writes:
On 2/25/19 5:20 AM, Jacob Carlborg wrote:
 On 2019-02-25 02:04, Manu wrote:
 
 Why wouldn't you do it in the same pass as the .di output?
* Separation of concerns * Simplifying the compiler ("simplifying" is not the correct description, rather avoid making the compiler more complex)
Indeed so. There's also the network effect of tooling. Integrating it within the compiler would be like the proverbial "giving someone a fish", whereas framing it as a tool that can be the first to inspire many others is akin to "teaching fishing".
Feb 25 2019
next sibling parent reply Manu <turkeyman gmail.com> writes:
On Mon, Feb 25, 2019 at 10:10 AM Andrei Alexandrescu via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 2/25/19 5:20 AM, Jacob Carlborg wrote:
 On 2019-02-25 02:04, Manu wrote:

 Why wouldn't you do it in the same pass as the .di output?
* Separation of concerns
Are we planning to remove .di output?
 * Simplifying the compiler ("simplifying" is not the correct
 description, rather avoid making the compiler more complex)
It seems theoretically very simple to me; whatever the .di code looks like, I can imagine a filter for isExternCorCPP() on candidate nodes when walking the AST. Seems like a pretty simple tweak of the existing code... but I haven't looked at it. I suspect 1 line in the AST walk code, and 99% of the job is a big ugly block that emits a C++ declaration instead of the D declaration?
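To illustrate the shape of that, a minimal self-contained sketch of the filter idea (Declaration, Linkage, cppSignature and emitHeader are invented stand-ins for this example, not dmd's actual AST or visitor API):

import std.algorithm : filter;
import std.stdio;

enum Linkage { d, c, cpp }

struct Declaration
{
    string name;
    Linkage linkage;
    string cppSignature; // pre-rendered C++ form, purely for illustration
}

bool isExternCorCPP(const Declaration decl)
{
    return decl.linkage == Linkage.c || decl.linkage == Linkage.cpp;
}

void emitHeader(const(Declaration)[] decls, File header)
{
    header.writeln("#pragma once");
    // Same walk the .di generator would do, except only extern(C)/extern(C++)
    // nodes survive the filter, and they get printed in C++ form.
    foreach (decl; decls.filter!isExternCorCPP)
        header.writeln(decl.cppSignature, ";");
}

void main()
{
    auto decls = [
        Declaration("dOnly",   Linkage.d,   ""),
        Declaration("cFunc",   Linkage.c,   `extern "C" int cFunc(int)`),
        Declaration("cppFunc", Linkage.cpp, "int cppFunc(const char*)"),
    ];
    emitHeader(decls, stdout); // a real tool would write module.h alongside module.di
}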
 Indeed so. There's also the network effect of tooling. Integrating
 within the compiler would be like the proverbial "giving someone a
 fish", whereas framing it as a tool that can be the first inspiring many
 others is akin to "teaching fishing".
That sounds nice, but it's bollocks though; give me dtoh as a separate tool and I'm about 95% less likely to use it. It's easy to add a flag to the command line of our hyper-complex build, but reworking custom tooling into it, not so much.

I'm not a build engineer, and I have no idea how I'd wire a second pass to each source compile if I wanted to. Tell me how to wire that into VS? How do I write that into XCode? How do I express that in the scripts that emit those project formats, and also makefiles and ninja? How do I express that the outputs (which are .h files) are correctly expressed as inputs of dependent .cpp compile steps?

At best, it would take me hours or days of implementing that comprehensive solution, it might not be possible in all build environments (XCode is a disaster), and I will never spare that time. Give me dtoh, and you give me problems, not solutions.

Certainly it *could* be a separate tool, but your argument that it's more enabling as a separate tool is the opposite of the truth. At best, it'll just waste more precious CI time and complicate our build.
Feb 25 2019
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.com> writes:
On 2/25/19 2:04 PM, Manu wrote:
 On Mon, Feb 25, 2019 at 10:10 AM Andrei Alexandrescu via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On 2/25/19 5:20 AM, Jacob Carlborg wrote:
 On 2019-02-25 02:04, Manu wrote:

 Why wouldn't you do it in the same pass as the .di output?
* Separation of concerns
Are we planning to remove .di output?
 * Simplifying the compiler ("simplifying" is not the correct
 description, rather avoid making the compiler more complex)
It seems theoretically very simple to me; whatever the .di code looks like, I can imagine a filter for isExternCorCPP() on candidate nodes when walking the AST. Seems like a pretty simple tweak of the existing code... but I haven't looked at it. I suspect 1 line in the AST walk code, and 99% of the job, a big ugly block that emits a C++ declaration instead of the D declaration?
 Indeed so. There's also the network effect of tooling. Integrating
 within the compiler would be like the proverbial "giving someone a
 fish", whereas framing it as a tool that can be the first inspiring many
 others is akin to "teaching fishing".
That sounds nice, but it's bollocks though; give me dtoh, i'm about 95% less likely to use it. It's easy to add a flag to the command line of our hyper-complex build, but reworking custom tooling into it, not so much.
More like dog's ones, right? :o)

There are indeed arguments going either way. The point is that a universe of tools can be built based on the compiler as a library, of which only a minority should realistically be integrated within the compiler itself.

That said, I'd take such work in either form!

Andrei
Feb 25 2019
parent Manu <turkeyman gmail.com> writes:
On Mon, Feb 25, 2019 at 12:25 PM Andrei Alexandrescu via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 2/25/19 2:04 PM, Manu wrote:
 On Mon, Feb 25, 2019 at 10:10 AM Andrei Alexandrescu via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On 2/25/19 5:20 AM, Jacob Carlborg wrote:
 On 2019-02-25 02:04, Manu wrote:

 Why wouldn't you do it in the same pass as the .di output?
* Separation of concerns
Are we planning to remove .di output?
 * Simplifying the compiler ("simplifying" is not the correct
 description, rather avoid making the compiler more complex)
It seems theoretically very simple to me; whatever the .di code looks like, I can imagine a filter for isExternCorCPP() on candidate nodes when walking the AST. Seems like a pretty simple tweak of the existing code... but I haven't looked at it. I suspect 1 line in the AST walk code, and 99% of the job, a big ugly block that emits a C++ declaration instead of the D declaration?
 Indeed so. There's also the network effect of tooling. Integrating
 within the compiler would be like the proverbial "giving someone a
 fish", whereas framing it as a tool that can be the first inspiring many
 others is akin to "teaching fishing".
That sounds nice, but it's bollocks though; give me dtoh, i'm about 95% less likely to use it. It's easy to add a flag to the command line of our hyper-complex build, but reworking custom tooling into it, not so much.
More like dog's ones, right? :o) There are indeed arguments going either way. The point is a universe of tools can be built based on the compiler as a library, of which only a minority should be realistically integrated within the compiler itself.
Right, but in this case, the technology is *already* in the compiler. I'm not suggesting a large new development, just a filter on the output of the existing pass with a bit of a re-format. That form would be so much more readily useful.
 That said, I'd take such work in either form!
Perhaps. But I'd like to strongly encourage a form that's useful to me as best as I can... otherwise it's just a nice talking point and still no practical solution.
Feb 25 2019
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Feb 25, 2019 at 11:04:56AM -0800, Manu via Digitalmars-d wrote:
 On Mon, Feb 25, 2019 at 10:10 AM Andrei Alexandrescu via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
[...]
 Indeed so. There's also the network effect of tooling. Integrating
 within the compiler would be like the proverbial "giving someone a
 fish", whereas framing it as a tool that can be the first inspiring
 many others is akin to "teaching fishing".
That sounds nice, but it's bollocks though; give me dtoh, i'm about 95% less likely to use it. It's easy to add a flag to the command line of our hyper-complex build, but reworking custom tooling into it, not so much. I'm not a build engineer, and I have no idea how I'd wire a second pass to each source compile if I wanted to. Tell me how to wire that into VS? How do I wite that into XCode? How do I express that in the scripts that emit those project formats, and also makefiles and ninja? How do I express that the outputs (which are .h files) are correctly expressed as inputs of dependent .cpp compile steps?
[...]

<off-topic rant>
This is a perfect example of what has gone completely wrong in the world of build systems. Too many assumptions and poor designs over an extremely simple and straightforward dependency-graph walk algorithm turn something that ought to be trivial to implement into a gargantuan task that requires a dedicated job title like "build engineer". It's completely insane, yet people accept it as a fact of life. It boggles the mind.
</off-topic rant>

T

--
Those who don't understand Unix are condemned to reinvent it, poorly.
Feb 25 2019
next sibling parent reply Rubn <where is.this> writes:
On Monday, 25 February 2019 at 19:28:54 UTC, H. S. Teoh wrote:
 On Mon, Feb 25, 2019 at 11:04:56AM -0800, Manu via 
 Digitalmars-d wrote:
 On Mon, Feb 25, 2019 at 10:10 AM Andrei Alexandrescu via 
 Digitalmars-d <digitalmars-d puremagic.com> wrote:
[...]
 Indeed so. There's also the network effect of tooling. 
 Integrating within the compiler would be like the proverbial 
 "giving someone a fish", whereas framing it as a tool that 
 can be the first inspiring many others is akin to "teaching 
 fishing".
That sounds nice, but it's bollocks though; give me dtoh, i'm about 95% less likely to use it. It's easy to add a flag to the command line of our hyper-complex build, but reworking custom tooling into it, not so much. I'm not a build engineer, and I have no idea how I'd wire a second pass to each source compile if I wanted to. Tell me how to wire that into VS? How do I wite that into XCode? How do I express that in the scripts that emit those project formats, and also makefiles and ninja? How do I express that the outputs (which are .h files) are correctly expressed as inputs of dependent .cpp compile steps?
[...] <off-topic rant> This is a perfect example of what has gone completely wrong in the world of build systems. Too many assumptions and poor designs over an extremely simple and straightforward dependency graph walk algorithm, that turn something that ought to be trivial to implement into a gargantuan task that requires a dedicated job title like "build engineer". It's completely insane, yet people accept it as a fact of life. It boggles the mind. </off-topic rant> T
I don't think it is as simple as you make it seem. Especially when you need to start adding components that need to be built that aren't source code. Add different operating systems to that. Each has different requirements, and how do you not make assumptions? You have to implement something in some way, you can't just not implement it. In doing so you weigh the benefits and drawbacks of certain implementations. That means a build system may not be suitable for all circumstances, and it is impossible for it to be.

If you are building something simple, it is very easy to say that build systems are over-complicated. If you don't have to worry about not rebuilding files that don't need it, then it becomes extremely simple; my <50 line build script is all I need, but that becomes more complicated once I don't want to rebuild files that don't need it. That is especially bad for something like D, where you can import anywhere, and mixins can import as well using from!"std.stdio". It's easy to say build systems are overly complicated until you actually work on a big project.
Feb 25 2019
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Feb 25, 2019 at 10:14:18PM +0000, Rubn via Digitalmars-d wrote:
 On Monday, 25 February 2019 at 19:28:54 UTC, H. S. Teoh wrote:
[...]
 <off-topic rant>
 This is a perfect example of what has gone completely wrong in the world
 of build systems. Too many assumptions and poor designs over an
 extremely simple and straightforward dependency graph walk algorithm,
 that turn something that ought to be trivial to implement into a
 gargantuan task that requires a dedicated job title like "build
 engineer".  It's completely insane, yet people accept it as a fact of
 life. It boggles the mind.
 </off-topic rant>
[...]
 I don't think it is as simple as you make it seem. Especially when you
 need to start adding components that need to be build that isn't
 source code.
It's very simple. The build description is essentially a DAG whose nodes represent files (well, any product, really, but let's say files for a concrete example), and whose edges represent commands that transform input files into output files. All the build system has to do is to do a topological walk of this DAG, and execute the commands associated with each edge to derive the output from the input. This is all that's needed. The rest are all fluff.

The basic problem with today's build systems is that they impose arbitrary assumptions on top of this simple DAG. For example, all input nodes are arbitrarily restricted to source code files, or in some bad cases, source code of some specific language or set of languages. Then they arbitrarily limit edges to be only compiler invocations and/or linker invocations. So the result is that if you have an input file that isn't source code, or if the output file requires invoking something other than a compiler/linker, then the build system doesn't support it and you're left out in the cold.

Worse yet, many "modern" build systems assume a fixed depth of paths in the graph, i.e., you can only compile source files into binaries, you cannot compile a subset of source files into an auxiliary utility that in turn generates new source files that are then compiled into an executable. So automatic code generation is ruled out, preprocessing is ruled out, etc., unless you shoehorn all of that into the compiler invocation, which is a ridiculous idea. None of these restrictions are necessary, and they only needlessly limit what you can do with your build system.

I understand that these assumptions are primarily to simplify the build description, e.g., by inferring dependencies so that you don't have to specify edges and nodes yourself (which is obviously impractical for large projects). But these additional niceties ought to be implemented as a SEPARATE layer on top of the topological walk, and the user should not be arbitrarily prevented from directly accessing the DAG description. The way so many build systems are designed is that either you have to do everything manually, like makefiles, which everybody hates, or the hood is welded shut and you can only do what the authors decide that you should be able to do and nothing else.
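As a deliberately tiny illustration of that model (the Rule struct, the node names and the shell commands are all made up for this sketch; a real tool adds up-to-date checks, cycle detection, parallelism and so on):

import std.process : executeShell;
import std.stdio : writeln;

struct Rule
{
    string[] inputs;  // nodes this edge consumes
    string   command; // the transformation: any program, not just a compiler
}

// Depth-first topological walk: build a node's dependencies first, then run
// the command on the edge that produces it.
void build(string target, Rule[string] rules, ref bool[string] done)
{
    if (target in done)
        return;                          // already visited
    if (auto rule = target in rules)     // plain source files have no rule
    {
        foreach (input; rule.inputs)
            build(input, rules, done);
        writeln("running: ", rule.command);
        auto result = executeShell(rule.command);
        if (result.status != 0)
            throw new Exception("step failed: " ~ rule.command);
    }
    done[target] = true;
}

void main()
{
    // A multi-stage pipeline: build a generator, run it, compile its output.
    Rule[string] rules = [
        "gen":   Rule(["gen.d"],           "dmd gen.d -of=gen"),
        "out.d": Rule(["gen", "data.txt"], "./gen data.txt > out.d"),
        "app":   Rule(["out.d", "app.d"],  "dmd app.d out.d -of=app"),
    ];
    bool[string] done;
    build("app", rules, done);
}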
 It's easy to say build-systems are overly complicated until you
 actually work on a big project.
You seem to think that I'm talking out of an ivory tower. I assure you I know what I'm talking about. I have written actual build systems that do things like this:

- Compile a subset of source files into a utility;
- Run said utility to transform certain input data files into source code;
- Compile the generated source code into executables;
- Run said executables on other data files to transform the data into PovRay scene files;
- Run PovRay to produce images;
- Run post-processing utilities on said images to crop / reborder them;
- Run another utility to convert these images into animations;
- Install these animations into a target directory.
- Compile another set of source files into a different utility;
- Run said utility on input files to transform them to PHP input files;
- Run php-cli to generate HTML from said input files;
- Install said HTML files into a target directory.
- Run a network utility to retrieve the history of a specific log file and pipe it through a filter to extract a list of dates.
- Run a utility to transform said dates into a gnuplot input file for generating a graph;
- Run gnuplot to create the graph;
- Run postprocessing image utilities to touch up the image;
- Install the result into the target directory.

None of the above are baked-in rules. The user is fully capable of specifying whatever transformation he wants on whatever inputs he wants to produce whatever output he wants. No straitjackets, no stupid hacks to work around stupid build system limitations. Tell it how you want your inputs to be transformed into outputs, and it handles the rest for you.

Furthermore, the build system is incremental: if I modify any of the above input files, it automatically runs the necessary commands to derive the updated output files AND NOTHING ELSE (i.e., it does not needlessly re-derive stuff that hasn't changed). Better yet, if any of the intermediate output files are identical to the previous outputs, the build stops right there and does not needlessly recreate other outputs down the line.

The build system is also reliable: running the build in a dirty workspace produces identical products as running the build in a fresh checkout. I never have to worry about doing the equivalent of 'make clean; make', which is a stupid thing to have to do in 2019. I have a workspace that hasn't been "cleaned" for months, and running the build on it produces exactly the same outputs as a fresh checkout.

There's more I can say, but basically, this is the power that having direct access to the DAG can give you. In this day and age, it's inexcusable not to be able to do this. Any build system that cannot do all of the above is a crippled build system that I will not use, because life is far too short to waste fighting with your build system rather than getting things done.

T

--
English has the lovely word "defenestrate", meaning "to execute by throwing someone out a window", or more recently "to remove Windows from a computer and replace it with something useful". :-) -- John Cowan
Feb 25 2019
next sibling parent reply ted <nospam example.org> writes:
On 26/2/19 9:25 am, H. S. Teoh wrote:
 On Mon, Feb 25, 2019 at 10:14:18PM +0000, Rubn via Digitalmars-d wrote:
 On Monday, 25 February 2019 at 19:28:54 UTC, H. S. Teoh wrote:
[...]
 <off-topic rant>
 This is a perfect example of what has gone completely wrong in the world
 of build systems. Too many assumptions and poor designs over an
 extremely simple and straightforward dependency graph walk algorithm,
 that turn something that ought to be trivial to implement into a
 gargantuan task that requires a dedicated job title like "build
 engineer".  It's completely insane, yet people accept it as a fact of
 life. It boggles the mind.
 </off-topic rant>
[...]
 I don't think it is as simple as you make it seem. Especially when you
 need to start adding components that need to be build that isn't
 source code.
It's very simple. The build description is essentially a DAG whose nodes represent files (well, any product, really, but let's say files for a concrete example), and whose edges represent commands that transform input files into output files. All the build system has to do is to do a topological walk of this DAG, and execute the commands associated with each edge to derive the output from the input. This is all that's needed. The rest are all fluff. The basic problem with today's build systems is that they impose arbitrary assumptions on top of this simple DAG. For example, all input nodes are arbitrarily restricted to source code files, or in some bad cases, source code of some specific language or set of languages. Then they arbitrarily limit edges to be only compiler invocations and/or linker invocations. So the result is that if you have an input file that isn't source code, or if the output file requires invoking something other than a compiler/linker, then the build system doesn't support it and you're left out in the cold. Worse yet, many "modern" build systems assume a fixed depth of paths in the graph, i.e., you can only compile source files into binaries, you cannot compile a subset of source files into an auxiliary utility that in turn generates new source files that are then compiled into an executable. So automatic code generation is ruled out, preprocessing is ruled out, etc., unless you shoehorn all of that into the compiler invocation, which is a ridiculous idea. None of these restrictions are necessary, and they only needlessly limit what you can do with your build system. I understand that these assumptions are primarily to simplify the build description, e.g., by inferring dependencies so that you don't have to specify edges and nodes yourself (which is obviously impractical for large projects). But these additional niceties ought to be implemented as a SEPARATE layer on top of the topological walk, and the user should not be arbitrarily prevented from directly accessing the DAG description. The way so many build systems are designed is that either you have to do everything manually, like makefiles, which everybody hates, or the hood is welded shut and you can only do what the authors decide that you should be able to do and nothing else. [...]
 It's easy to say build-systems are overly complicated until you
 actually work on a big project.
You seem to think that I'm talking out of an ivory tower. I assure you I know what I'm talking about. I have written actual build systems that do things like this: - Compile a subset of source files into a utility; - Run said utility to transform certain input data files into source code; - Compile the generated source code into executables; - Run said executables on other data files to transform the data into PovRay scene files; - Run PovRay to produce images; - Run post-processing utilities on said images to crop / reborder them; - Run another utility to convert these images into animations; - Install these animations into a target directory. - Compile another set of source files into a different utility; - Run said utility on input files to transform them to PHP input files; - Run php-cli to generate HTML from said input files; - Install said HTML files into a target directory. - Run a network utility to retrieve the history of a specific log file and pipe it through a filter to extract a list of dates. - Run a utility to transform said dates into a gnuplot input file for generating a graph; - Run gnuplot to create the graph; - Run postprocessing image utilities to touch up the image; - Install the result into the target directory. None of the above are baked-in rules. The user is fully capable of specifying whatever transformation he wants on whatever inputs he wants to produce whatever output he wants. No straitjackets, no stupid hacks to work around stupid build system limitations. Tell it how you want your inputs to be transformed into outputs, and it handles the rest for you. Furthermore, the build system is incremental: if I modify any of the above input files, it automatically runs the necessary commands to derive the updated output files AND NOTHING ELSE (i.e., it does not needlessly re-derive stuff that hasn't changed). Better yet, if any of the intermediate output files are identical to the previous outputs, the build stops right there and does not needlessly recreate other outputs down the line. The build system is also reliable: running the build in a dirty workspace produces identical products as running the build in a fresh checkout. I never have to worry about doing the equivalent of 'make clean; make', which is a stupid thing to have to do in 2019. I have a workspace that hasn't been "cleaned" for months, and running the build on it produces exactly the same outputs as a fresh checkout. There's more I can say, but basically, this is the power that having direct access to the DAG can give you. In this day and age, it's inexcusable not to be able to do this. Any build system that cannot do all of the above is a crippled build system that I will not use, because life is far too short to waste fighting with your build system rather than getting things done. T
I'd be interested in your thoughts on https://github.com/GrahamStJack/bottom-up-build

We use it here in a commercial environment, with deliverables to defence & commercial customers (it was created as a response to poor existing build tools). It is rigid about preventing circularities, and deals with code generation as part of its build cycle. It currently handles a mixed C++/D codebase of well over 1/2 million lines. It is agnostic to the tool chain - that's part of the configuration - we just use C++ (clang & gcc) and D (dmd & ldc). It also allows the codebase to be split amongst multiple repositories.

--ted
Feb 25 2019
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Feb 26, 2019 at 11:44:28AM +1030, ted via Digitalmars-d wrote:
[...]
 I'd be interested in your thoughts on
 https://github.com/GrahamStJack/bottom-up-build
 
 We use it here (commercial environment with deliverables into defence
 & commercial customers) (it was created as a response to poor existing
 build tools). It is rigid on preventing circularities, and deals with
 code-generation as part of its build cycle. It currently deals with a
 mixed C++/D codebase of well over 1/2 million lines. It is agnostic to
 the tool chain - that's part of the configuration - we just use c++
 (clang & gcc), and D (dmd & ldc). Also allows codebase to be split
 amongst multiple repositories.
Sorry didn't get around to this until now. At first glance, it looks like a kind of make variant -- sorry if this, or any of the following comments, is wrong, as I only briefly skimmed over the example and some of the bub source code.

It seems to follow make's model of global variables for things like compile flags and so forth. Which is convenient, but could lead to messiness in large enough projects -- i.e., it's not clear to me at first glance how to handle compiling source code with different compile options, or how to generate multiple binaries (e.g., native compile + cross compile) from the same set of sources. Perhaps encapsulating these variables in configuration objects (e.g. SCons' "environments") might help with this.

Not scanning for dependencies until a target needs to be built is quite a clever idea. I'll have to keep that in mind if I ever decide to invent my own build system. :-D

Splitting a codebase across multiple repositories is also a nice idea. I wonder if it could be taken further: in the form of a build graph export into some kind of standard format, that potentially other build tools can import and build with equivalent results. That's the kind of "holy grail" of build tools that I envision these days, a way to unify all the divergent build tools out there and make it possible to integrate projects across different build systems.

There seems to be baked-in rules for generating executables, static / dynamic libraries, etc., and it appears that some special handling is done in these cases. It's not clear to me how to achieve equivalent functionality just from the various bub configuration files if the user had to, for example, support a new programming language that needed specialized scanning for dependencies, or how to handle multi-input, multi-output transformations like the Java compiler (does bub support building Java?). Would it be necessary to modify bub source code in order to extend it to handle such cases, or is it already possible with bub.cfg?

I took a quick look at bub.planner.doPlanning, and it appears to me that it has to scan the entire source tree in order to determine what file(s) have changed. Which would make the cost of a build proportional to the size of the workspace, rather than the size of the changeset. Is this correct? If so, I'd suggest taking a look at Tup (http://gittup.org/tup/), which uses modern OS features (inotify / the Windows equivalent which I can't remember) to detect which sources have changed, and correspondingly which subgraph of the entire project's DAG is pertinent at the next build command -- the rest of the DAG is skipped, which in a large project can greatly improve turnaround build times.

Also, are partial builds supported?

T

--
Programming is not just an act of telling a computer what to do: it is also an act of telling other programmers what you wished the computer to do. Both are important, and the latter deserves care. -- Andrew Morton
Feb 27 2019
parent ted <nospam example.org> writes:
I'm not the writer of bub, so I'll try and answer as best I can.






On 28/2/19 2:51 am, H. S. Teoh wrote:
 On Tue, Feb 26, 2019 at 11:44:28AM +1030, ted via Digitalmars-d wrote:
 [...]
 I'd be interested in your thoughts on
 https://github.com/GrahamStJack/bottom-up-build

 We use it here (commercial environment with deliverables into defence
 & commercial customers) (it was created as a response to poor existing
 build tools). It is rigid on preventing circularities, and deals with
 code-generation as part of its build cycle. It currently deals with a
 mixed C++/D codebase of well over 1/2 million lines. It is agnostic to
 the tool chain - that's part of the configuration - we just use c++
 (clang & gcc), and D (dmd & ldc). Also allows codebase to be split
 amongst multiple repositories.
[...] Sorry didn't get around to this until now. At first glance, it looks like a kind of make variant -- sorry if this, or any of the following comments, is wrong, as I only briefly skimmed over the example and some of the bub source code. It seems to follow make's model of global variables for things like compile flags and so forth. Which is convenient, but could lead to messiness in large enough projects -- i.e., it's not clear to me at first glance how to handle compiling source code with different compile options, or how to generate multiple binaries (e.g., native compile + cross compile) from the same set of sources. Perhaps encapsulating these variables in configuration objects (e.g. SCons' "environments") might help with this.
This is handled by having multiple bub.cfg files. As the build directory is located elsewhere and the build is performed in that directory, the setup for each build directory is done once - so this has worked well in practice.
 
 Not scanning for dependencies until a target needs to be built is quite
 a clever idea. I'll have to keep that in mind if I ever decide to invent
 my own build system. :-D
 
 Splitting a codebase across multiple repositories is also a nice idea. I
 wonder if it could be taken further: in the form of a build graph export
 into some kind of standard format, that potentially other build tools
 can import and build with equivalent results. That's the kind of "holy
 grail" of build tools that I envision these days, a way to unify all the
 divergent build tools out there and make it possible to integrate
 projects across different build systems.
I know this build graph is available - I believe it can be output from a commandline switch - so getting it into a standard format would be quite easy.
 
 There seems to be baked-in rules for generating executables, static /
 dynamic libraries, etc., and it appears that some special handling is
 done in these cases.  It's not clear to me how to achieve equivalent
 functionality just from the various bub configuration files if the user
 had to, for example, support a new programming language that needed
 specialized scanning for dependencies, or how to handle multi-input,
 multi-output transformations like the Java compiler (does bub support
 building Java?).  Would it be necessary to modify bub source code in
 order to extend it to handle such cases, or is it already possible with
 bub.cfg?
It is a product of the environment in which it is used. i.e. there has been no push for anything else at this stage. I'm fairly sure that Graham would take suggestions like this on board.
 
 I took a quick look at bub.planner.doPlanning, and it appears to me that
 it has to scan the entire source tree in order to determine what file(s)
 have changed. Which would make the cost of a build proportional to the
 size of the workspace, rather than the size of the changeset.  Is this
 correct?  If so, I'd suggest taking a look at Tup
 (http://gittup.org/tup/), which uses modern OS features (inotify / the
 Windows equivalent which I can't remember) to detect which sources have
 changed, and correspondingly which subgraph of the entire project's DAG
 is pertinent at the next build command -- the rest of the DAG is
 skipped, which in a large project can greatly improve turnaround build
 times.
On our codebase, this step takes less than a second, so it hasn't been an issue.
 
 Also, are partial builds supported?
By partial builds, if you mean that it only builds the files that are affected by the code change just made, then yes. If you mean the equivalent of a makefile target where a subset of files is defined, then no. (Again, there is no call for this within our environment.)
 
 
 T
 
--ted
Feb 28 2019
prev sibling parent reply Rubn <where is.this> writes:
On Monday, 25 February 2019 at 22:55:18 UTC, H. S. Teoh wrote:
 On Mon, Feb 25, 2019 at 10:14:18PM +0000, Rubn via 
 Digitalmars-d wrote:
 On Monday, 25 February 2019 at 19:28:54 UTC, H. S. Teoh wrote:
[...]
 <off-topic rant>
 This is a perfect example of what has gone completely wrong 
 in the world
 of build systems. Too many assumptions and poor designs over 
 an
 extremely simple and straightforward dependency graph walk 
 algorithm,
 that turn something that ought to be trivial to implement 
 into a
 gargantuan task that requires a dedicated job title like 
 "build
 engineer".  It's completely insane, yet people accept it as 
 a fact of
 life. It boggles the mind.
 </off-topic rant>
[...]
 I don't think it is as simple as you make it seem. Especially 
 when you need to start adding components that need to be build 
 that isn't source code.
It's very simple. The build description is essentially a DAG whose nodes represent files (well, any product, really, but let's say files for a concrete example), and whose edges represent commands that transform input files into output files. All the build system has to do is to do a topological walk of this DAG, and execute the commands associated with each edge to derive the output from the input. This is all that's needed. The rest are all fluff. The basic problem with today's build systems is that they impose arbitrary assumptions on top of this simple DAG. For example, all input nodes are arbitrarily restricted to source code files, or in some bad cases, source code of some specific language or set of languages. Then they arbitrarily limit edges to be only compiler invocations and/or linker invocations. So the result is that if you have an input file that isn't source code, or if the output file requires invoking something other than a compiler/linker, then the build system doesn't support it and you're left out in the cold. Worse yet, many "modern" build systems assume a fixed depth of paths in the graph, i.e., you can only compile source files into binaries, you cannot compile a subset of source files into an auxiliary utility that in turn generates new source files that are then compiled into an executable. So automatic code generation is ruled out, preprocessing is ruled out, etc., unless you shoehorn all of that into the compiler invocation, which is a ridiculous idea.
What build systems are you talking about here? I mean, I can search for programs that do certain things and I'll most definitely find more subpar ones than spectacular ones, especially if they are free. Just so we are on the same page about which build systems you are referring to.
 None of these restrictions are necessary, and they only 
 needlessly limit what you can do with your build system.

 I understand that these assumptions are primarily to simplify 
 the build description, e.g., by inferring dependencies so that 
 you don't have to specify edges and nodes yourself (which is 
 obviously impractical for large projects).  But these 
 additional niceties ought to be implemented as a SEPARATE layer 
 on top of the topological walk, and the user should not be 
 arbitrarily prevented from directly accessing the DAG 
 description.  The way so many build systems are designed is 
 that either you have to do everything manually, like makefiles, 
 which everybody hates, or the hood is welded shut and you can 
 only do what the authors decide that you should be able to do 
 and nothing else.


 [...]
 It's easy to say build-systems are overly complicated until 
 you actually work on a big project.
You seem to think that I'm talking out of an ivory tower. I assure you I know what I'm talking about. I have written actual build systems that do things like this: - Compile a subset of source files into a utility; - Run said utility to transform certain input data files into source code; - Compile the generated source code into executables; - Run said executables on other data files to transform the data into PovRay scene files; - Run PovRay to produce images; - Run post-processing utilities on said images to crop / reborder them; - Run another utility to convert these images into animations; - Install these animations into a target directory. - Compile another set of source files into a different utility; - Run said utility on input files to transform them to PHP input files; - Run php-cli to generate HTML from said input files; - Install said HTML files into a target directory. - Run a network utility to retrieve the history of a specific log file and pipe it through a filter to extract a list of dates. - Run a utility to transform said dates into a gnuplot input file for generating a graph; - Run gnuplot to create the graph; - Run postprocessing image utilities to touch up the image; - Install the result into the target directory.
Yes, doing all those things isn't all that difficult; it really is just a matter of calling a different program to generate the file. The difficulty with build systems comes in when you have an extremely large project that takes a long time to build.
 None of the above are baked-in rules. The user is fully capable 
 of specifying whatever transformation he wants on whatever 
 inputs he wants to produce whatever output he wants.  No 
 straitjackets, no stupid hacks to work around stupid build 
 system limitations. Tell it how you want your inputs to be 
 transformed into outputs, and it handles the rest for you.

 Furthermore, the build system is incremental: if I modify any 
 of the above input files, it automatically runs the necessary 
 commands to derive the updated output files AND NOTHING ELSE 
 (i.e., it does not needlessly re-derive stuff that hasn't 
 changed).  Better yet, if any of the intermediate output files 
 are identical to the previous outputs, the build stops right 
 there and does not needlessly recreate other outputs down the 
 line.

 The build system is also reliable: running the build in a dirty 
 workspace produces identical products as running the build in a 
 fresh checkout.  I never have to worry about doing the 
 equivalent of 'make clean; make', which is a stupid thing to 
 have to do in 2019. I have a workspace that hasn't been 
 "cleaned" for months, and running the build on it produces 
 exactly the same outputs as a fresh checkout.
It really depends on what you are building. Working on DMD I don't have to do a clean; doing a bisect, though, I effectively have to do a clean at every new commit.
 There's more I can say, but basically, this is the power that 
 having direct access to the DAG can give you.  In this day and 
 age, it's inexcusable not to be able to do this.

 Any build system that cannot do all of the above is a crippled 
 build system that I will not use, because life is far too short 
 to waste fighting with your build system rather than getting 
 things done.


 T
The build systems I've used can do all that; the problem isn't functionality so much as the ease of achieving that functionality. I just use a script and don't need a build system, but a full build of my project only takes 10 seconds, so I have that luxury.
Feb 25 2019
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Feb 26, 2019 at 01:33:50AM +0000, Rubn via Digitalmars-d wrote:
 On Monday, 25 February 2019 at 22:55:18 UTC, H. S. Teoh wrote:
[...]
 What build systems are you talking about here? I mean I can search for
 programs that do certain things and I'll most definitely find more
 subpar ones than any spectacular ones. Especially if they are free. So
 we are on the same page on which build systems you are referring to.
SCons is free, and does all of what I described and more. It's not perfect, of course. But it's miles better than, say, make -- with its unreliability and the tendency for makefiles to become unreadably complex and unmaintainable. Or dub, which forces you to work a certain way and is unable to express things like multi-stage builds or non-compilation tasks.
 - Compile a subset of source files into a utility;
 
 - Run said utility to transform certain input data files into source
   code;
 
 - Compile the generated source code into executables;
 
 - Run said executables on other data files to transform the data into
   PovRay scene files;
 
 - Run PovRay to produce images;
 
 - Run post-processing utilities on said images to crop / reborder them;
 
 - Run another utility to convert these images into animations;
 
 - Install these animations into a target directory.
[...]
 Yes doing all those things isn't all that difficult, it really is just
 a matter of calling a different program to generate the file.
And yet the above build is not expressible in dub.
 The difficulty of build systems comes in when you have an extremely
 large project that takes a long time to build.
The above steps are part of a project I have whose full build takes about 5-6 hours. But while working on it, the build turnaround time is about 10-15 seconds (and that's only because SCons didn't get one thing right: that build time should be proportional to changeset size, rather than the size of the entire workspace -- otherwise it would be more like 3-4 seconds). *That's* what I call a sane build system. [...]
 It really depends on what you are building. Working on DMD I don't
 have to do a clean, doing a bisect though I effective have to do a
 clean every new commit.
Well exactly, that's the stupidity of it. You always have to 'make clean', "just to be sure", even if "most of the time" it works. It's 2019, and algorithms for reliable builds have been known for at least a decade or more, yet we're still stuck in the dark ages of "occasionally I have to run make clean, and maybe I should do it right now 'cos I'm not sure if this bug is caused by out-of-sync object files or if it's a real bug".

Can you imagine how ridiculous it would be with the above 5-6 hour build script, if I had built that project out of makefiles? I would get absolutely nothing done at all if every once in a while I have to `make clean` "just to be sure".

Thankfully, SCons is sane enough that I don't have to rerun the entire build for months on end -- actually, I never had to do it. Even when there were big changes that cause almost the whole thing to rebuild, it was SCons that figured out that it had to rebuild everything; I never had to tell it to. Every time I build, no matter what state the workspace was in, it would always update everything correctly. I can even `git checkout <branch>` all over the place, and it doesn't lose track of how to update all relevant targets. I never have to hold its hand to get it to do the right thing, it Just Works(tm). *That's* what I call a sane system. (In spite of said SCons warts.)
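The mechanism behind that reliability is content tracking rather than timestamp comparison: a step reruns only when its command or the bytes of its inputs actually changed. A toy sketch of the idea (not SCons' actual implementation; the .stamps directory and the function names are invented here):

import std.digest.sha : sha256Of;
import std.file : exists, mkdirRecurse, read, readText, write;
import std.format : format;
import std.process : executeShell;

// Hash the command line plus the content of every input.
string stampOf(string command, const string[] inputs)
{
    ubyte[] blob = cast(ubyte[]) command.dup;
    foreach (input; inputs)
        blob ~= cast(ubyte[]) read(input);
    string hex;
    foreach (b; sha256Of(blob))
        hex ~= format("%02x", b);
    return hex;
}

// Run `command` only if the recorded stamp for `output` no longer matches.
bool maybeRun(string output, string command, const string[] inputs)
{
    mkdirRecurse(".stamps");
    auto stampFile = ".stamps/" ~ output;
    auto current = stampOf(command, inputs);
    if (output.exists && stampFile.exists && readText(stampFile) == current)
        return false;                    // provably up to date: skip it
    auto result = executeShell(command);
    if (result.status != 0)
        throw new Exception("failed: " ~ command);
    write(stampFile, current);           // remember what this output was built from
    return true;
}

With that kind of bookkeeping there is nothing for a 'clean' to fix: a missing or stale stamp simply causes the step to run again.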
 There's more I can say, but basically, this is the power that having
 direct access to the DAG can give you.  In this day and age, it's
 inexcusable not to be able to do this.
 
 Any build system that cannot do all of the above is a crippled build
 system that I will not use, because life is far too short to waste
 fighting with your build system rather than getting things done.
[...]
 The build systems I've used can do all that, the problem is about
 functionality so much as the ease of achieving that functionality.
Well, yes. That's why I repeatedly say, a proper design should empower the user. Easy things should be easy, and hard things should be possible. It shouldn't be the case that easy things are hard (e.g. Manu's "if I have to run an extra step before compilation, I have to bend backwards and recite gibberish in encrypted Reverse Klingon to get make to do the right thing"), and hard things are either outright impossible, or practically impossible because it's so onerous you might as well not bother trying.
 I just use a script, don't need a build system but doing a fully build
 of my project only takes 10 seconds so I have that luxury.
As I said, my website project takes about 5-6 hours for a full, clean build. Anything less than a sane build system -- or a mostly-sane one (SCons does have its warts like I said) -- is simply not even worth my consideration. Life is too short to have to take 6-hour coffee breaks every other day just because make is too dumb to produce reliable builds.

T

--
We are in class, we are supposed to be learning, we have a teacher... Is it too much that I expect him to teach me??? -- RL
Feb 25 2019
prev sibling next sibling parent Manu <turkeyman gmail.com> writes:
On Mon, Feb 25, 2019 at 2:55 PM H. S. Teoh via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Mon, Feb 25, 2019 at 10:14:18PM +0000, Rubn via Digitalmars-d wrote:
 On Monday, 25 February 2019 at 19:28:54 UTC, H. S. Teoh wrote:
[...]
 <off-topic rant>
 This is a perfect example of what has gone completely wrong in the world
 of build systems. Too many assumptions and poor designs over an
 extremely simple and straightforward dependency graph walk algorithm,
 that turn something that ought to be trivial to implement into a
 gargantuan task that requires a dedicated job title like "build
 engineer".  It's completely insane, yet people accept it as a fact of
 life. It boggles the mind.
 </off-topic rant>
[...]
 I don't think it is as simple as you make it seem. Especially when you
 need to start adding components that need to be build that isn't
 source code.
It's very simple. The build description is essentially a DAG whose nodes represent files (well, any product, really, but let's say files for a concrete example), and whose edges represent commands that transform input files into output files. All the build system has to do is to do a topological walk of this DAG, and execute the commands associated with each edge to derive the output from the input.
You don't know the edges of the DAG until AFTER you run the compiler (ie, discovering imports/#includes, etc from the source code).

You also want to run the build with all 64 cores in your machine. File B's build depends on file A's build output, but it can't know that until after it attempts (and fails) to build B... How do you resolve this tension?

There's no 'simple' solution to this problem that I'm aware of. You start to address this with higher-level structure, and that is not a 'simple DAG' anymore.

Now... whatever solution you concluded; express that in make, ninja, MSBuild, .xcodeproj...
Feb 25 2019
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Feb 25, 2019 at 05:24:00PM -0800, Manu via Digitalmars-d wrote:
 On Mon, Feb 25, 2019 at 2:55 PM H. S. Teoh via Digitalmars-d
[...]
 It's very simple. The build description is essentially a DAG whose
 nodes represent files (well, any product, really, but let's say
 files for a concrete example), and whose edges represent commands
 that transform input files into output files. All the build system
 has to do is to do a topological walk of this DAG, and execute the
 commands associated with each edge to derive the output from the
 input.
You don't know the edges of the DAG until AFTER you run the compiler (ie, discovering imports/#includes, etc from the source code)
Yes, that's what scanners are for. There can be standard scanners for the common cases, so that you don't have to write DAG nodes and edges by hand. But my point is that prebaked automatic scanning rules of this sort should not *exclude* you from directly adding your own DAG nodes and edges. Build systems like SCons offer an interface for building your own scanners, for example.
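For instance, a toy scanner that discovers D import edges might look like this (the regex and the module-name-to-file mapping are naive simplifications made up for the sketch; real scanners, or dmd's -deps output, handle far more):

import std.array : replace;
import std.file : readText;
import std.path : dirSeparator;
import std.regex : ctRegex, matchAll;
import std.stdio : writeln;

// Scan a source file for `import foo.bar;` statements and turn each one
// into a prospective DAG edge (module name mapped to a file path).
string[] scanImports(string sourceFile)
{
    auto re = ctRegex!(`^\s*import\s+([\w.]+)\s*;`, "m");
    string[] deps;
    foreach (m; matchAll(readText(sourceFile), re))
        deps ~= m[1].replace(".", dirSeparator) ~ ".d";
    return deps;
}

void main(string[] args)
{
    foreach (dep; scanImports(args.length > 1 ? args[1] : "app.d"))
        writeln("edge: ", dep);
}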
 You also want to run the build with all 64 cores in your machine.
Build systems like SCons offer parallel building out-of-the-box, and require no additional user intervention. That's proper design. Makefiles require special care when writing rules in order not to break, and you (last time I checked) have to explicitly specify which rules are parallelizable. That's bad design.
 File B's build depends on file A's build output, but it can't know
 that until after it attempts (and fails) to build B...
 
 How do you resolve this tension?
There is no tension. You just do a topological walk on the DAG and run the steps in order. If a step fails, all subsequent steps related to that target are aborted. (Any other products that didn't fail may still continue in that case.) Parallelization works by identifying DAG nodes that aren't dependent on each other and running them in parallel. A proper build system handles this automatically without user intervention.

Unless you're talking about altering the DAG as you go -- SCons *does* in fact handle this case. You just have to sequence your build steps such that any new products/targets that are introduced don't invalidate prior steps. A topological walk usually already solves this problem, as long as you don't ask for impossible things like having the build of target A also add a new dependency to unrelated target B. In the normal case, building A adds a dependency to downstream target C (which depends on A), but that's no problem because the topological walk guarantees A is built before C, and by then, we already know of the new dependency and can handle it correctly.

I'm starting to sound like I'm promoting SCons as the best thing since sliced bread, but actually SCons has its own share of problems. But I'm just using it as an example of a design that got *some* things right. A good number of things, in fact, in spite of the warts that still exist. It's a lot saner than, say, make, and that's my point. Such a design is possible, and has been done (the multi-stage website build I described in my previous post, btw, is an SCons-based system -- it's not perfect, but already miles ahead of ancient junk like makefiles).
 There's no 'simple' solution to this problem that I'm aware of. You
 start to address this with higher-level structure, and that is not a
 'simple DAG' anymore.
It's still a DAG. You just have some fancy automatic scanning / generation at the higher level structure, but it all turns into a DAG in the end. And here is my point: the build system should ALLOW the user to enter custom DAG nodes/edges as needed, rather than force the user to only use the available prebaked rules -- because there will always be a situation where you need to do something the build tool authors haven't thought of. You should always have the option of going under the hood when you need to. You should never be limited only to what the authors had in mind.

I have nothing against prebaked automatic scanners -- but that should not preclude writing your *own* custom scanners if you wanted to. And it should not prevent you from adding rules to the DAG directly. The correct design is always the one that empowers the user, not the one that spoonfeeds the user yet comes in a straitjacket.
 Now... whatever solution you concluded; express that in make, ninja,
 MSBuild, .xcodeproj...
The fact that doing all of this in make (or whatever else) is such a challenge is exactly proof of what I'm saying: these build systems are fundamentally b0rken, and for no good reason. All the technology necessary to make sane builds possible already exists. It's just that too many build systems are still living in the 80's and refusing to move on.

And in the meantime, even better build systems are already being implemented, like Tup, where the build time is proportional to the size of change rather than the size of the workspace (an SCons wart). Yet people still use make like it's still 1985, and people still invent build systems with antiquated designs like it's still 1985.

T

--
Give a man a fish, and he eats once. Teach a man to fish, and he will sit forever.
Feb 25 2019
prev sibling parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 2/25/19 2:28 PM, H. S. Teoh wrote:
 On Mon, Feb 25, 2019 at 11:04:56AM -0800, Manu via Digitalmars-d wrote:
 On Mon, Feb 25, 2019 at 10:10 AM Andrei Alexandrescu via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
[...]
 Indeed so. There's also the network effect of tooling. Integrating
 within the compiler would be like the proverbial "giving someone a
 fish", whereas framing it as a tool that can be the first inspiring
 many others is akin to "teaching fishing".
That sounds nice, but it's bollocks though; give me dtoh, I'm about 95% less likely to use it. It's easy to add a flag to the command line of our hyper-complex build, but reworking custom tooling into it, not so much. I'm not a build engineer, and I have no idea how I'd wire a second pass to each source compile if I wanted to. Tell me how to wire that into VS? How do I write that into XCode? How do I express that in the scripts that emit those project formats, and also makefiles and ninja? How do I express that the outputs (which are .h files) are correctly expressed as inputs of dependent .cpp compile steps?
[...]

<off-topic rant>
This is a perfect example of what has gone completely wrong in the world of build systems. Too many assumptions and poor designs over an extremely simple and straightforward dependency graph walk algorithm, that turn something that ought to be trivial to implement into a gargantuan task that requires a dedicated job title like "build engineer". It's completely insane, yet people accept it as a fact of life. It boggles the mind.
</off-topic rant>
Hear, hear. When adding another step to a build process ISN'T a simple "add a line to the script", then something has gone very, VERY wrong.

(Incidentally, this is part of why I've long since lost all patience for trying to use IDEs like VS, Eclipse, XCode, and whatnot. Life's too short to tolerate all that mess of complexity they turn a basic build into. I still *HATE*, with a passion, the fact that I have to put up with all that black-box-build bullshit when I use Unity3D - and don't even get me started on the complete and utter garbage that is MSBuild (used by Unity, naturally).)

HOWEVER: All that said, when a single build needs to make multiple passes of *the same sources* through the compiler, that's clearly an architectural failure on the part of the tooling. We can argue all we want about how separate tools are technically superior, but if it means adding duplicate passes *and* extra complications to the user's buildsystem, then it clearly ISN'T "technically superior"; it's just a different set of tradeoffs, and yet another example of D letting perfect be the enemy of good.
Feb 27 2019
parent Daniel N <no public.email> writes:
On Wednesday, 27 February 2019 at 18:48:06 UTC, Nick Sabalausky 
(Abscissa) wrote:
 On 2/25/19 2:28 PM, H. S. Teoh wrote:
 On Mon, Feb 25, 2019 at 11:04:56AM -0800, Manu via 
 Digitalmars-d wrote:
 <off-topic rant>
 This is a perfect example of what has gone completely wrong in 
 the world
 of build systems. Too many assumptions and poor designs over an
 extremely simple and straightforward dependency graph walk 
 algorithm,
 that turn something that ought to be trivial to implement into 
 a
 gargantuan task that requires a dedicated job title like "build
 engineer".  It's completely insane, yet people accept it as a 
 fact of
 life. It boggles the mind.
 </off-topic rant>
 
Hear, hear. When adding another step to a build process ISN'T a simple "add a line to the script", then something has gone very, VERY wrong.
I strongly agree; please consider adding it to the compiler. I've had enough of insane build systems for one lifetime; the only buildsystem I need is "dmd -i".

If you feel there is a need for a more modular approach, then I'd rather see the possibility of adding "end user developed" dynamic library plugins to the compiler instead. That approach can also spur creativity from outside developers just as well as separate tools can.
Feb 27 2019
prev sibling parent Manu <turkeyman gmail.com> writes:
On Mon, Feb 25, 2019 at 11:29 AM H. S. Teoh via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Mon, Feb 25, 2019 at 11:04:56AM -0800, Manu via Digitalmars-d wrote:
 On Mon, Feb 25, 2019 at 10:10 AM Andrei Alexandrescu via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
[...]
 Indeed so. There's also the network effect of tooling. Integrating
 within the compiler would be like the proverbial "giving someone a
 fish", whereas framing it as a tool that can be the first inspiring
 many others is akin to "teaching fishing".
That sounds nice, but it's bollocks though; give me dtoh, I'm about 95% less likely to use it. It's easy to add a flag to the command line of our hyper-complex build, but reworking custom tooling into it, not so much. I'm not a build engineer, and I have no idea how I'd wire a second pass to each source compile if I wanted to. Tell me how to wire that into VS? How do I write that into XCode? How do I express that in the scripts that emit those project formats, and also makefiles and ninja? How do I express that the outputs (which are .h files) are correctly expressed as inputs of dependent .cpp compile steps?
[...]

<off-topic rant>
This is a perfect example of what has gone completely wrong in the world of build systems. Too many assumptions and poor designs over an extremely simple and straightforward dependency graph walk algorithm, that turn something that ought to be trivial to implement into a gargantuan task that requires a dedicated job title like "build engineer". It's completely insane, yet people accept it as a fact of life. It boggles the mind.
</off-topic rant>
I couldn't agree more (that existing solutions make it harder than it needs to be... not that it's actually easy in the first place), but this is how it is. I can't change that, and I have work to do.

Is D an ecosystem that I use to get my work done, or is it one that I use to do some intellectual masturbation on the weekend? I've been tirelessly trying to make the former my reality for a long time now... I've failed so far... I don't know what to do to correct the trajectory.

I figure, if you just try and clear the hurdles, one by one, they will eventually be cleared. But I've also become tired in the meantime.
Feb 25 2019
prev sibling parent Manu <turkeyman gmail.com> writes:
On Mon, Feb 25, 2019 at 2:25 AM Jacob Carlborg via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 2019-02-25 02:04, Manu wrote:

 Why wouldn't you do it in the same pass as the .di output?
* Separation of concerns
* Simplifying the compiler ("simplifying" is not the correct description, rather avoid making the compiler more complex)

I think the .di generation should be a separate tool as well.
Compile times already suck pretty hard; I feel like it's a very valuable feature that DMD can emit .di from the same compile pass. It's already done all the work... why repeat that build cost a second time for every source file?
Feb 25 2019
prev sibling next sibling parent reply Nicolas D <nicolas serveur.io> writes:
On Sunday, 24 February 2019 at 23:24:53 UTC, Manu wrote:
 I've been talking about this for years, and logged this a while 
 back: https://issues.dlang.org/show_bug.cgi?id=19579

 Is there anyone interested in or knows how to do this?
 It would really be super valuable; this idea would motivate
 inter-language projects that typically go 
 C++-fist-with-a-D-binding to
 work the other way.

 Creating a pressure to write D-code first because the binding 
 part is maintained automatically when you compile is 
 potentially significant; gets programmers thinking and writing 
 D code as first-class.

 Idea would be same as emitting .di files, but in this case only
 extern(C)/extern(C++) declarations would be emit to a .h file.

 I could use this to create some nice demos in my office which I 
 think would get a few people talking and excited.
That sounds like a cool project, and it's something I'm a bit used to working with; I just need to get familiar with DMD's source and then I can start working on it.
Feb 24 2019
parent reply Seb <seb wilzba.ch> writes:
On Monday, 25 February 2019 at 00:09:48 UTC, Nicolas D wrote:
 That sounds like a cool project and it's something I'm a bit 
 used to working with, I just need to get familiar with DMD's 
 source and I can start working on that.
Have a look at this PR for a head-start on C++ header generation with DMD:

https://github.com/dlang/dmd/pull/8591

There's also Mihails's dtoh (for C bindings):

https://gitlab.com/mihails.strasuns/dtoh
Feb 24 2019
parent Jacob Carlborg <doob me.com> writes:
On 2019-02-25 02:00, Seb wrote:

 Have a look at this PR for a head-start on C++ header generation with DMD:
 
 https://github.com/dlang/dmd/pull/8591
 
 There's also Mihails's dtoh (for C bindings):
 
 https://gitlab.com/mihails.strasuns/dtoh
And:

https://github.com/thewilsonator/dtoh

--
/Jacob Carlborg
Feb 25 2019
prev sibling parent Andrea Fontana <nospam example.org> writes:
On Sunday, 24 February 2019 at 23:24:53 UTC, Manu wrote:
 I've been talking about this for years, and logged this a while 
 back: https://issues.dlang.org/show_bug.cgi?id=19579

 Is there anyone interested in or knows how to do this?
 It would really be super valuable; this idea would motivate
 inter-language projects that typically go 
 C++-fist-with-a-D-binding to
 work the other way.
+1
Feb 25 2019