
digitalmars.D - Having a bit of fun on stackoverflow

reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
http://stackoverflow.com/questions/17263604/i-have-a-c-repository-but-github-says-its-d

Andrei
Jun 24 2013
next sibling parent "Jonathan Dunlap" <jadit2 gmail.com> writes:
It's like D is getting free advertising ;)

On Monday, 24 June 2013 at 15:45:27 UTC, Andrei Alexandrescu 
wrote:
 http://stackoverflow.com/questions/17263604/i-have-a-c-repository-but-github-says-its-d

 Andrei
Jun 24 2013
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 6/24/2013 8:45 AM, Andrei Alexandrescu wrote:
 http://stackoverflow.com/questions/17263604/i-have-a-c-repository-but-github-says-its-d
Obviously the best way for him to correct his repository is to get it to compile with a D compiler.
Jun 24 2013
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 6/24/13, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:
 http://stackoverflow.com/questions/17263604/i-have-a-c-repository-but-github-says-its-d
This does show that Github's language popularity index is unreliable. It makes you wonder what the numbers would look like with proper language discovery.
Jun 24 2013
prev sibling parent reply Marco Leise <Marco.Leise gmx.de> writes:
Am Mon, 24 Jun 2013 08:45:26 -0700
schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:

 http://stackoverflow.com/questions/17263604/i-have-a-c-repository-but-github-says-its-d
 
 Andrei
This is why you don't put automatically generated files in version control ... Especially when they have the file ending used by an indexed PL on GitHub ;) -- Marco
Jun 24 2013
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Tuesday, June 25, 2013 08:38:01 Marco Leise wrote:
 Am Mon, 24 Jun 2013 08:45:26 -0700
 
 schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:
 http://stackoverflow.com/questions/17263604/i-have-a-c-repository-but-gith
 ub-says-its-d
 
 Andrei
This is why you don't put automatically generated files in version control ... Especially when they have the file ending used by an indexed PL on GitHub ;)
Yeah. That was the great faux pas of that question. I'm not aware of any good reason to put generated files in version control unless they were only generated once and will never be generated again. github probably _should_ give you the chance to tell them what your code is written in though (bitbucket asks you when you create the repo). - Jonathan M Davis
Jun 24 2013
parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Tuesday, 25 June 2013 at 06:46:28 UTC, Jonathan M Davis wrote:
 On Tuesday, June 25, 2013 08:38:01 Marco Leise wrote:
 Am Mon, 24 Jun 2013 08:45:26 -0700
 
 schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:
 http://stackoverflow.com/questions/17263604/i-have-a-c-repository-but-gith
 ub-says-its-d
 
 Andrei
This is why you don't put automatically generated files in version control ... Especially when they have the file ending used by an indexed PL on GitHub ;)
Yeah. That was the great faux pas of that question. I'm not aware of any good reason to put generated files in version control unless they were only generated once and will never be generated again. - Jonathan M Davis
Well, depends how you use the version control I guess. You *can* use it for more than just going back in time or concurrent edits: You can use it as a redistributable network folder. The company I work for does it that way.

It means when you checkout a project, you don't have to run 10+ different tools to generate whatever it needs to generate: You are ready to roll. You save on time and headaches. Whenever someone changes the xml, you don't have to regenerate everything every time you resync. The overall time and overhead wasted by a few guys checking in their generated files is more than made up for by everyone else not having to worry (or even know) about it.

But to each their own of course, this works for _us_. Also, it means you can look at the generated files inside the repository.
Jun 24 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Jun 25, 2013 at 08:55:04AM +0200, monarch_dodra wrote:
 On Tuesday, 25 June 2013 at 06:46:28 UTC, Jonathan M Davis wrote:
On Tuesday, June 25, 2013 08:38:01 Marco Leise wrote:
Am Mon, 24 Jun 2013 08:45:26 -0700

schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:
 http://stackoverflow.com/questions/17263604/i-have-a-c-repository-but-gith
 ub-says-its-d
 Andrei
This is why you don't put automatically generated files in version control ... Especially when they have the file ending used by an indexed PL on GitHub ;)
Yeah. That was the great faux pas of that question. I'm not aware of any good reason to put generated files in version control unless they were only generated once and will never be generated again. - Jonathan M Davis
Well, depends how you use the version control I guess. You *can* use it for more than just going back in time or concurrent edits: You can use it as a redistributable network folder. The company I work for does it that way. It means when you checkout a project, you don't have to run 10+ different tools to generate whatever it needs to generate: You are ready to roll. You save on time and headaches. Whenever someone changes the xml, you don't have to regenerate everything every time you resync. The overall time and overhead wasted by a few guys checking in their generated files is more than made up for everyone else not having to worry (or even know) about it. But to each their own of course, this works for _us_.
[...] This can backfire in ugly ways if not used carefully. At my work, there are some auto-generated files (tool-generated source code) that get checked into version control, which generally works fine... then we got into a state where the makefile builds stuff that requires the generated files before they're actually generated. When somebody then modifies whatever is used to generate said files but forgets to check in the new version of the generated files, you get into nasty nigh-untraceable inconsistencies where part of the build picks up an old version of said file but the rest of the build picks up the new version.

To make things worse, official release builds are always made from a fresh checkout, so release builds sometimes have bugs that mysteriously vanish when you build the same version of the code locally. Very frustrating when trying to track down customer-reported bugs.

Not to mention, sometimes generated files have formats that include timestamps that get updated every time they're rebuilt, which produces spurious "revisions" in version control that stores the exact same versions of the files, just with different timestamps.

In general, this practice is the source of a lot of needless grief, so I've come to be of the opinion that it's a bad idea.

T

--
It is not the employer who pays the wages. Employers only handle the money. It is the customer who pays the wages. -- Henry Ford
Jun 25 2013
parent reply "Idan Arye" <GenericNPC gmail.com> writes:
On Tuesday, 25 June 2013 at 12:17:24 UTC, H. S. Teoh wrote:
 On Tue, Jun 25, 2013 at 08:55:04AM +0200, monarch_dodra wrote:
 On Tuesday, 25 June 2013 at 06:46:28 UTC, Jonathan M Davis 
 wrote:
On Tuesday, June 25, 2013 08:38:01 Marco Leise wrote:
Am Mon, 24 Jun 2013 08:45:26 -0700

schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:
 http://stackoverflow.com/questions/17263604/i-have-a-c-repository-but-gith
 ub-says-its-d
 Andrei
This is why you don't put automatically generated files in version control ... Especially when they have the file ending used by an indexed PL on GitHub ;)
Yeah. That was the great faux pas of that question. I'm not aware of any good reason to put generated files in version control unless they were only generated once and will never be generated again. - Jonathan M Davis
Well, depends how you use the version control I guess. You *can* use it for more than just going back in time or concurrent edits: You can use it as a redistributable network folder. The company I work for does it that way. It means when you checkout a project, you don't have to run 10+ different tools to generate whatever it needs to generate: You are ready to roll. You save on time and headaches. Whenever someone changes the xml, you don't have to regenerate everything every time you resync. The overall time and overhead wasted by a few guys checking in their generated files is more than made up for everyone else not having to worry (or even know) about it. But to each their own of course, this works for _us_.
[...] This can backfire in ugly ways if not used carefully. At my work, there are some auto-generated files (tool-generated source code) that get checked into version control, which generally works fine... then we got into a state where the makefile builds stuff that requires the generated files before they're actually generated. When somebody then modifies whatever is used to generate said files but forgets to check in the new version of the generated files, you get into nasty nigh-untraceable inconsistencies where part of the build picks up an old version of said file but the rest of the build picks up the new version. To make things worse, official release builds are always made from a fresh checkout, so release builds sometimes have bugs that mysteriously vanish when you build the same version of the code locally. Very frustrating when trying to track down customer-reported bugs. Not to mention, sometimes generated files have formats that include timestamps that get updated every time they're rebuilt, which produces spurious "revisions" in version control that stores the exact same versions of the files, just with different timestamps. In general, this practice is the source of a lot of needless grief, so I've come to be of the opinion that it's a bad idea. T
I guess that depends on whether or not F5 is your build process (http://www.codinghorror.com/blog/2007/10/the-f5-key-is-not-a-build-process.html).

If you rely on your IDE to compile and run your project, then you usually want to check in those auto-generated files - because when you generated them for your local copy, you had to use different tools, download some libraries, configure your IDE etc - and you want to save other people (or yourself on another computer) the trouble of doing it all again - not to mention to save yourself the trouble of documenting exactly what you did so others can follow.

On the other hand, if you use a proper build system, you can - and should - configure your build file to auto-generate those files using external tools, and maybe even use a dependency manager to download those libraries. Not only does the build system's ability to easily generate those auto-generated files make checking them in redundant - it also makes it more troublesome. If you had to manually configure and invoke a tool to generate a file, chances are you'll only do that again when you really have to, but if the build system does that for you - usually as part of a bigger task - that file will be updated automatically by many people time and again. Having the SCM handle such files will add a redundant burden to it and even worse - can cause pointless merge conflicts.
Jun 26 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Jun 26, 2013 at 10:23:21PM +0200, Idan Arye wrote:
 On Tuesday, 25 June 2013 at 12:17:24 UTC, H. S. Teoh wrote:
On Tue, Jun 25, 2013 at 08:55:04AM +0200, monarch_dodra wrote:
On Tuesday, 25 June 2013 at 06:46:28 UTC, Jonathan M Davis
wrote:
On Tuesday, June 25, 2013 08:38:01 Marco Leise wrote:
Am Mon, 24 Jun 2013 08:45:26 -0700

schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:
 http://stackoverflow.com/questions/17263604/i-have-a-c-repository-but-gith
 ub-says-its-d
 Andrei
This is why you don't put automatically generated files in version control ... Especially when they have the file ending used by an indexed PL on GitHub ;)
Yeah. That was the great faux pas of that question. I'm not aware of any good reason to put generated files in version control unless they were only generated once and will never be generated again. - Jonathan M Davis
Well, depends how you use the version control I guess. You *can* use it for more than just going back in time or concurrent edits: You can use it as a redistributable network folder. The company I work for does it that way. It means when you checkout a project, you don't have to run 10+ different tools to generate whatever it needs to generate: You are ready to roll. You save on time and headaches. Whenever someone changes the xml, you don't have to regenerate everything every time you resync. The overall time and overhead wasted by a few guys checking in their generated files is more than made up for everyone else not having to worry (or even know) about it. But to each their own of course, this works for _us_.
[...] This can backfire in ugly ways if not used carefully. At my work, there are some auto-generated files (tool-generated source code) that get checked into version control, which generally works fine... then we got into a state where the makefile builds stuff that requires the generated files before they're actually generated. When somebody then modifies whatever is used to generate said files but forgets to check in the new version of the generated files, you get into nasty nigh-untraceable inconsistencies where part of the build picks up an old version of said file but the rest of the build picks up the new version.
[...]
In general, this practice is the source of a lot of needless grief,
so I've come to be of the opinion that it's a bad idea.


T
I guess that depends whether or not F5 is your build process (http://www.codinghorror.com/blog/2007/10/the-f5-key-is-not-a-build-process.html).
What's F5?
 If you rely on your IDE to compile and run your project, then you
 usually want to check in those auto-generated files - because when
 you generated them for your local copy, you had to use different
 tools, download some libraries, configure your IDE etc - and you
 want to save other people(or yourself on another computer) the
 trouble of doing it all again - not to mention to save yourself the
 trouble of documenting exactly what you did so others can follow.
We don't use IDEs where I work. Or at least, we frown on them. :-P We like to force developers to actually think about build processes instead of just hitting a key and assuming everything is OK. One big reason is that we want builds to be reproducible, not dependent on strange IDE settings some key developer happens to have that nobody else can replicate.

So we use makefiles... which are a royal PITA, but at least they give you a semblance of reproducibility (fresh version control checkout, run ./configure && make, and it produces a usable product at the end). I have a lot of gripes about makefiles and would never use such broken decrepit technology in my own projects, but they are nevertheless better on the reproducibility front than some IDE "build process" that nobody knows how to replicate after the key developer leaves the company.
 On the other hand, if you use a proper build system, you can - and
 should - configure your build file to auto-generate those files using
 external tools, and maybe even use a dependency manager to download
 those libraries.
We do all that. But when you have (more than) 50 people working on the same source tree, the dynamics are rather different. In theory, it's a single Makefile hierarchy, but in reality it's a hodgepodge of ugly hacks and shoehorning of poorly-written Makefiles that only barely manage to build successfully when you do a fresh checkout. But that's not really relevant. Here's an illustration of the problem at hand:

(1) The external tools are built from source in the source tree;

(2) They need to be built first, then run as part of the build process to produce the auto-generated files;

(3) Somebody unwisely decides to check in the generated file(s).

(4) Later on, some unknowing developer comes along and says, hey look! file xyz.h already exists, so let's use it in my new code! -- not realizing that xyz.h is auto-generated *later* on in the build process than the new code;

(5) Some changes are necessary to whatever data the external tools use to produce xyz.h, so now we have a new version of xyz.h. However, since our developers don't directly check in stuff to version control (they submit patches to the reviewers), sometimes people forget to include the new xyz.h in the patch. So now the version of xyz.h in version control doesn't match the input data to the external tools.

(6) The release team updates their workspace, which pulls in the new data used to generate xyz.h, but xyz.h itself isn't updated. They run a build, and half the source tree is compiled with the wrong version of xyz.h, and the other half with the new version (when later on in the makefile xyz.h is regenerated from the new data).

(7) The build is released to the customer, who reports strange runtime errors with inscrutable stack traces (due to ABI mismatch).

(8) The devs can't reproduce the problem, 'cos by the time it's reported, they've built their workspace several hundred times, and the old xyz.h is long gone. They stare at the code until their gaze bores two holes through their monitor, and they still can't locate the problem.
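A minimal sketch of the usual fix, with hypothetical file names standing in for whatever the real tree uses: don't check xyz.h in at all, and instead declare it as a prerequisite of both the generator tool and its input data, so make rebuilds it from the checked-in inputs, in the right order, before anything that includes it gets compiled:

    # Minimal sketch -- names and the generator's -o flag are illustrative,
    # not taken from the tree described above.
    # The generator tool is itself built from checked-in source.
    tools/gen: tools/gen.c
            $(CC) $(CFLAGS) -o $@ tools/gen.c

    # xyz.h is regenerated whenever the generator or its input data changes,
    # so a stale copy can never be silently reused.
    xyz.h: data/settings.xml tools/gen
            ./tools/gen -o $@ data/settings.xml

    # Anything that includes xyz.h lists it as a prerequisite, which also
    # fixes the ordering problem: the header exists before this object is
    # built, no matter where it sits in the source tree.
    main.o: main.c xyz.h
            $(CC) $(CFLAGS) -c -o $@ main.c

With something like that in place, there is no generated xyz.h for a patch author to forget to check in, and no way for half the tree to compile against a stale copy.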
 Not only does the build system's ability to easily generate those
 auto-generated files make checking them in redundant - it also makes
 it more troublesome. If you had to manually configure and invoke a
 tool to generate a file, chances are you'll only do that again when
 you really have to, but if the build system does that for you -
 usually as a part of a bigger task - that file will be updated
 automatically by many people times and again.
Nobody (I hope!) is foolish enough to depend on hand-configured tools to generate software that's to be released to customers. That's a formula for utter abject failure. You *need* to make sure there's a *reproducible*, *reliable* way to build your software *automatically*, so that when a customer reports a problem in build 1234, you can checkout version 1234 from version control and reproduce exactly the binaries that's distributed to the customer, thereby be able to reliably interpret stack traces, reproduce old bugs, etc.. It would really *really* suck if version 1234 was released before somebody reconfigured some obscure IDE setting or external tool, and now we don't remember how to build the same version 1234 that the customer is running.
 Having the SCM handle such files will add redundant burden to it and
 even worse - can cause pointless merge conflicts.
That's why I said, auto-generated files should NOT be included in version control. Unfortunately it's still being done here at my work, and every now and then we have to deal with silly spurious merge conflicts in addition to subtle ABI inconsistency bugs like I described above. T -- It's amazing how careful choice of punctuation can leave you hanging:
Jun 26 2013
parent reply "Idan Arye" <GenericNPC gmail.com> writes:
On Wednesday, 26 June 2013 at 21:28:08 UTC, H. S. Teoh wrote:
 On Wed, Jun 26, 2013 at 10:23:21PM +0200, Idan Arye wrote:
 I guess that depends whether or not F5 is your build process
 (http://www.codinghorror.com/blog/2007/10/the-f5-key-is-not-a-build-process.html).
What's F5?
In Eclipse, F5 is the key to compile your project and run it in debug mode. The link I've given is to a blog entry by Jeff Atwood, where he explains why it's so bad to use F5 as your "build process" - that is, to rely on the IDE to build your project. Of course, nothing is wrong with the F5 key itself. In my Vim settings I've mapped F5 to launch a proper build system.
 So we use makefiles... which are a royal PITA, but at least 
 they give
 you a semblance of reproducibility (fresh version control 
 checkout, run
 ./configure && make, and it produces a usable product at the 
 end). I
 have a lot of gripes about makefiles and would never use such 
 broken
 decrepit technology in my own projects, but they are 
 nevertheless better
 on the reproducibility front than some IDE "build process" that 
 nobody
 knows how to replicate after the key developer leaves the 
 company.
Add that to the long, long list of crappy tools that became the standard...
 That's why I said, auto-generated files should NOT be included 
 in
 version control. Unfortunately it's still being done here at my 
 work,
 and every now and then we have to deal with silly spurious merge
 conflicts in addition to subtle ABI inconsistency bugs like I 
 described
 above.
My point is that bad practices lead to more bad practices.

Letting your IDE automatically handle the details of the build process is bad. As your project becomes more complicated and you need to use third-party libraries and auto-generated files, the bad practice of using F5 as your build process forces you into other bad practices - downloading those third-party libraries and using those generation tools manually. Even if you document what you did - which is far better than *not* documenting it - it's still a bad practice, since configuring a build system to do those things is a bit easier than explaining in English what needs to be done, and invoking the build system is much easier than following the instructions manually.

If you need to use SCM, things get even worse. Other people will need to build the project, so they will need those libraries and auto-generated files. If you used a build system and a dependency manager, that would be easy - but you didn't, so now the other guys need to follow your documented instructions manually (assuming there are documented instructions) - and it becomes very cumbersome. And it gets worse - if someone changes something in that textual "build system" - that is, does something and writes it in the documentation - everyone else needs to do it too. But people don't reread the how-to-configure document every time they do a checkout - so now you need to email everybody about the change.

Now, what if someone called in sick for a couple of weeks? Now he has to scan through the mailing list to collect all the mails about changes to the configuration process. Alternatively, he could scan the configuration instruction document and compare it to what he has already done - assuming he remembers that he ran tool A, but not tool B, and that he ran tool C but with different flags than what's specified in the up-to-date instructions. Another option is to use the SCM's diff - but that's still a pain, and frankly - I don't think people that can't use a build system are smart enough to use diff...

So, the best thing to do is to check in those auto-generated files and those external libraries, and let the SCM keep everyone synced. That's a crappy solution - but if you don't use a build system, it's your best solution. And when bad practice becomes your best solution - you know you have a problem.
Jun 26 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Jun 27, 2013 at 01:07:19AM +0200, Idan Arye wrote:
 On Wednesday, 26 June 2013 at 21:28:08 UTC, H. S. Teoh wrote:
[...]
So we use makefiles... which are a royal PITA, but at least they give
you a semblance of reproducibility (fresh version control checkout,
run ./configure && make, and it produces a usable product at the
end).  I have a lot of gripes about makefiles and would never use
such broken decrepit technology in my own projects, but they are
nevertheless better on the reproducibility front than some IDE "build
process" that nobody knows how to replicate after the key developer
leaves the company.
Add that to the long, long list of crappy tools that became the standard...
Tell me about it. Makefiles have so many dark nasty corners that any non-trivial application will have an unreadable, unmaintainable makefile. Not to mention reliance on timestamps (very unreliable), and inability to parallel-build without specifically crafting the makefile to support that, etc.. [...]
 My point is that bad practices lead to more bad practices:
 
 Letting your IDE automatically handle the details of the building
 process is bad.
+1.
 As your project become more complicated, and you need to use third
 party libraries and auto-generated files, then the bad practice of
 using F5 as your build process forces you to other bad practices -
 downloading those third libraries and using those generation tools
 manually.
Stop right there. As soon as "manual" enters the picture, you no longer have a build process. You may have a *caricature* of a build process, but it's no build process at all. I don't care if it's hitting F5 or running make, if I cannot (check out the code from version control / download and unpack the source tarball) and *automatically* recreate the entire distribution binary by running a (script / makefile / whatever), then it's not a build process.

To me, a build process means it's possible to write a script that, given just the pure source tree, can recreate, without any human intervention, the entire binary blob that you give your customers. Anything short of that does not qualify as a build process. A hand-written document that explains the 50+1 gcc/dmd/whatever commands you must type at the command prompt to build the software does not qualify as a build system.
 Now, what if someone called sick for a couple of weeks? Now he has to
 scan through the mailing list to collect all the mails about changes
 to the configuration process. Alternatively, he could scan the
 configuration instruction document and compare it to what he has
 already done - assuming he remembers he ran tool A, but not tool B,
 and he ran tool C but with different flags than what's specified in
 the up-to-date instructions. Another option is to use the SCM's diff -
 but that's still a pain, and frankly - I don't think people that can't
 use a build system are smart enough to use diff...
If you have to manually type anything more than "gcc -o prog prog.c" to build a project (and that includes adding compile flags), that project has already failed.
 So, the best thing to do is to check in those auto-generated files and
 those external libraries, and let the SCM keep everyone synced.
 That's a crappy solution - but if you don't use a build system, it's
 your best solution. And when bad practice becomes your best solution -
 you know you have a problem.
If you don't have a build system, your project is already doomed. Nevermind auto-generated files, external libraries, or SCMs, those are just nails in the coffin. Any project that spans more than a single source file (and I don't mean just code -- that includes data, autogenerated files, whatever inputs are required to create the final product) *needs* a build system. T -- The day Microsoft makes something that doesn't suck is probably the day they start making vacuum cleaners... -- Slashdotter
Jun 26 2013
parent reply "Idan Arye" <GenericNPC gmail.com> writes:
On Wednesday, 26 June 2013 at 23:40:52 UTC, H. S. Teoh wrote:

 Stop right there. As soon as "manual" enters the picture, you 
 no longer
 have a build process. You may have a *caricature* of a build 
 process,
 but it's no build process at all. I don't care if it's hitting 
 F5 or
 running make, if I cannot (check out the code from version 
 control /
 download and unpack the source tarball) and *automatically* 
 recreate the
 entire distribution binary by running a (script / makefile / 
 whatever),
 then it's not a build process.
Whether it needs to be automated to be called a build process is a matter of definitions. The important thing is to agree that it's bad.
 A hand-written document that explains the 50+1 gcc/dmd/whatever 
 commands
 you must type at the command prompt to build the software does 
 not
 qualify as a build system.

 ...

 If you have to manually type anything more than "gcc -o prog 
 prog.c" to
 build a project (and that includes adding compile flags), that 
 project
 has already failed.
I don't consider having to write 50 commands each time you want to build the software that harmful - not because it's good, but because it's so bad that no developer will agree to live with it. And luckily for most developers, that's a problem they don't have to live with, because IDEs can handle it pretty well.

The real problem is with commands that you only have to type now and then. For example, let's assume you have a .lex file somewhere in your project. Visual Studio does not know how to handle it (I think - it has been years since I last touched VS, and I didn't do any advanced stuff with it). But VS knows pretty well how to handle everything else, and you don't want to start learning a build system just for that single .lex file - after all, it's just one command, and you don't really need to run it every time - after all, you rarely touch it, and the auto-generated .yy.c file stays in the file system for the next build. So, you use the shell to call `flex`, and then compile your project with VS, and continue coding happily without thinking about that .lex file.

A few weeks pass, and you have to change something in the .lex file. So you change it, and compile the code, and run the project, and nothing changes - because you forgot to call `flex`. So you check your code - but everything seems OK. So you do a rebuild - because that usually helps in such situations - and VS deletes the .exe and all the .obj files, but it doesn't delete the .yy.c file - because it's a C source file, so VS assumes it's part of the source code - and then VS compiles everything from scratch - and again nothing changes!

So, you do what any sane programmer would do - you throw the computer out of the window. When your new computer arrives, you check out the code from the repository and try to compile, and this time you get a compile error - because you don't have the .yy.c file. Now you finally understand that you forgot to call `flex`! Well, you learned from your mistake so you won't repeat it again, so you say to yourself that there is still no point in introducing a build system just to handle a single .lex file...

That's why I'm not worried about problems that you can't live with. If people can't live with a problem - they will find and implement a solution. It's the problems you *can* live with that make me worry - because there will always be people who prefer to live with the problem than to be bothered with the solution...
 If you don't have a build system, your project is already 
 doomed.
 Nevermind auto-generated files, external libraries, or SCMs, 
 those are
 just nails in the coffin.  Any project that spans more than a 
 single
 source file (and I don't mean just code -- that includes data,
 autogenerated files, whatever inputs are required to create the 
 final
 product) *needs* a build system.
With that I don't agree - simple projects that only have source files can get away with IDE building, even if they have multiple source files. I'm talking about zero-configuration projects - no auto-generated files, no third-party libraries - all you have to do is create a default project in the IDE, import all the source files, and hit F5 (or the equivalent shortcut). The moment you have to change a single compiler switch - you need a build system. I myself always use a build system, because I use Vim so I don't have IDE building. The only exception is single-source files of interpreted languages, where I can use shebangs.
Jun 26 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Jun 27, 2013 at 03:00:05AM +0200, Idan Arye wrote:
 On Wednesday, 26 June 2013 at 23:40:52 UTC, H. S. Teoh wrote:
 
Stop right there. As soon as "manual" enters the picture, you no
longer have a build process. You may have a *caricature* of a build
process, but it's no build process at all. I don't care if it's
hitting F5 or running make, if I cannot (check out the code from
version control / download and unpack the source tarball) and
*automatically* recreate the entire distribution binary by running a
(script / makefile / whatever), then it's not a build process.
Whether it needs to be automated to be called a build process is a matter of definitions. The important thing is to agree that it's bad.
No, what I meant was that going from clean source (i.e., only fresh source files, no auto-generated files, no intermediate files, no cached object files, clean, pristine source) to the fully-built binary should be possible *without* manually typing any commands other than invoking the build script / IDE build function / whatever.

IOW, builds must be reproducible. They should not rely on arbitrary undocumented commands that the original author typed at arbitrary points in time, that produced intermediate files that are required later. Every single command necessary to go from pristine, unprocessed source code to fully-functional binary must be encapsulated in the build script / build command / whatever you call it, so that, in principle, pushing a single button will produce the final, releasable binary. You should be able to ship the pristine, unprocessed source code to somebody and they should be able to get a binary out of it by "pushing the same button", so to speak.

[...]
 I don't consider having to write 50 commands each time you want to
 build the software that harmful - not because it's good, but because
 it's so bad that no developer will agree to live with it. And luckily
 for most developers - that's a problem they don't have to live with,
 because IDEs can handle it pretty well.
 
 The real problem is with commands that you only have to type now and
 then.
That's what I mean. When you have commands that you "only have to type now and then", they MUST be part of the automated build process, be it your build script, IDE project file, or whatever it is you use to build your program. Otherwise, it's not possible for you to just ship the pristine source code to somebody else and have them able to build it just by hitting the "build" button.
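In practice, "part of the automated build process" can be as small as one catch-all entry point. A minimal sketch with plain make (target and file names are made up; 'all' and 'clean' are assumed to build and remove everything):

    # Minimal sketch -- one entry point from pristine checkout to shippable
    # artifact, with no manual steps in between.
    .PHONY: release
    release:
            $(MAKE) clean                         # back to pristine sources
            $(MAKE) all                           # regenerate, compile, link
            tar czf myprog-release.tar.gz myprog  # package the result

Anyone with a fresh checkout types one command and gets the same artifact -- nothing to remember, nothing to document beyond "run make release".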
 For example, let's assume you have a .lex file somewhere in your
 project. Visual Studio does not no how to handle it(I think - it has
 been years since I last touched VS, and I didn't do any advanced stuff
 with it). But VS knows pretty well how to handle everything else, and
 you don't want to start learning a build system just for that single
 .lex file - after all, it's just one command, and you don't really
 need to do it every time - after all, you rarely touch it, and the
 auto-generated .yy.c file stays in the file system for the next build.
That's the formula for disaster. Consider:

1) You write some code;

2) You decide you need flex, but VS doesn't support calling flex, so you run it by hand;

3) Programmer B wants to try out your code, so you ship him the source files. It fails miserably 'cos he doesn't have flex installed.

4) Solution? Just include the .yy.c the next time you send him the code. Now it compiles. Everything's OK now, right? Wrong.

5) You make some changes to the code, but forget to rerun flex. Now the .yy.c is out of sync with the .lex, but it just happens to still compile, so you ship the new code to programmer B.

6) Programmer B compiles everything and ships the product to the customer.

7) In the meantime, you suddenly remember you didn't re-run flex, so you do that and recompile everything.

8) The customer comes back and complains there are bugs in the code. You can't reproduce it, 'cos your .yy.c is up-to-date now.

9) Another customer complains that the previous release of the code has a critical bug. You check out the old code from version control, but .yy.c wasn't in version control, so the old code doesn't even compile.

10) After hours of hair-pulling, the old code finally compiles. Of course, you've done all sorts of things to try to make it compile, but the dynamic libraries are not the same, the new version of VS has a different default setting, etc., so of course, you can't reproduce the customer's problem.

11) You give up, and check out the new code to continue working on something else. But the .yy.c is again out-of-sync with the .lex 'cos you touched it while trying to make the old version compile. The code compiles, but has subtle bugs caused by the out-of-sync file.

12) After you finally remember to run flex again, programmer B checks out the code, and now his build fails, 'cos the .yy.c is out of sync and causes a compile error.

13) You decide that since the .yy.c keeps causing problems, you should check it into the VCS. Now everything works fine. Or does it?

14) Programmer B checks out the code, and modifies the .lex, but doesn't re-run flex. He checks in the changes. You check out the changes, and now your code doesn't work anymore, 'cos the .yy.c is out of date.

See how this is a vicious cycle of endless frustration and wasted time? The correct way of doing things is to include EVERYTHING you need to go from raw source files to final binary in a single build script / project file / whatever. You have to guarantee that, given the pristine source code (i.e. without any externally-generated products), a single button (or script, or makefile, etc.) will be able to regenerate the binaries you shipped. This has to work for EVERY RELEASED VERSION of your program. You should be able to check out any prior version of your code, and be assured that after you hit the "compile" button, the executable you get at the end is IDENTICAL to the executable you shipped to the customer 12 months ago.

Anything else is just the formula for endless frustration, untraceable bugs, and project failure. If your IDE's build function doesn't support full end-to-end reproducible builds, it's worthless and should be thrown out.
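For the record, the make-side fix for that particular scenario is a single rule (a minimal sketch; file names are made up): the generated .yy.c is never checked in, and whatever compiles it lists it as a prerequisite, so "forgetting to run flex" stops being possible:

    # Minimal sketch with hypothetical file names.
    # lexer.yy.c is a build product: regenerated whenever lexer.lex changes,
    # and never checked into version control.
    lexer.yy.c: lexer.lex
            flex -o lexer.yy.c lexer.lex

    # The object depends on the generated source, so the lexer is always
    # regenerated and recompiled after the .lex file is edited.
    lexer.o: lexer.yy.c
            $(CC) $(CFLAGS) -c -o lexer.o lexer.yy.c

    myprog: main.o lexer.o
            $(CC) -o myprog main.o lexer.o

Under a rule like that, steps 5-14 above simply can't happen, because the .yy.c always tracks the .lex it came from.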
 So, you use the shell to call `flex`, and then compile your project
 with VS, and continue coding happily without thinking about that .lex
 file.
 
 A few weeks pass, and you have to change something in the .lex file.
 So you change it, and compile the code, and run the project, and
 nothing changes - because you forgot to call `flex`. So you check
 your code - but everything seems OK. So you do a rebuild - because
 that usually helps in such situations - and VS deletes the .exe and
 all the .obj files, but it doesn't delete the .yy.c file - because
 it's a C source file, so VS assumes it's part of the source code -
 and then VS compiles everything from scratch - and again nothing
 changes!
 
 So, you do what any sane programmer would do - you throw the
 computer out of the window.
Or rather, you throw the IDE out the window, 'cos its build function is defective. :-P
 When your new computer arrives, you check out the code from the
 repository and try to compile, and this time you get a compile error -
 because you don't have the .yy.c file. Now you finally understand that
 you forgot to call `flex`!
This is a sign of a defective IDE build function.
 Well, you learned from your mistake so you won't repeat it again, so
 you say to yourself that there is still no point in introducing a
 build system just to handle a single .lex file...
 
 That's why I'm not worried about problems that you can't live with.
 If people can't live with a problem - they will find and implement a
 solution. It's the problems you *can* live with that make me worry -
 because there will always be people who prefer to live with the
 problem than to be bothered with the solution...
Then they only have themselves to blame when they face an endless stream of build problems, heisenbugs that appear/disappear depending on what extra commands you type at the command prompt, inability to track down customer reported bugs in old versions, and all sorts of neat and handy things like that.
If you don't have a build system, your project is already doomed.
Nevermind auto-generated files, external libraries, or SCMs, those
are just nails in the coffin.  Any project that spans more than a
single source file (and I don't mean just code -- that includes data,
autogenerated files, whatever inputs are required to create the final
product) *needs* a build system.
With that I don't agree - simple projects that only have source files can get away with IDE building, even if they have multiple source files. I'm talking about zero configuration projects - no auto-generated files, no third party libraries - all you have to do is create a default project in the IDE, import all the source files, and hit F5(or the equivalent shortcut). The moment you have to change a single compiler switch - you need a build system.
I'd argue that you need a build system from the get-go. Ideally, the IDE's project file SHOULD support such things as building external products. If it doesn't, it's essentially worthless and you should use a real build system instead.

But even if this is supported, there's still the problem of compile switches inserted by the IDE that you may not know about. Consider if the IDE has a configuration window where you can select compile switches. You twiddle with some of those settings and later forget about them completely. Then you ship your files to developer B, and he hits the build button and gets a different executable, 'cos his IDE settings don't match yours. This is just the same sad story rehashed.

For any serious software project, reproducible builds are a must. There's simply no way around it. Shipping executables that depend on arbitrary IDE settings that vary depending on which developer did it is a very bad business model. Shipping executables that you cannot reproduce by checking out a previous version of the code from the VCS is a very bad business model. Even *developing* software for which you can't make reproducible executables is a bad business model -- it hurts programmer productivity. Countless hours are wasted trying to track down bugs and other strange problems that ultimately come from non-reproducible builds.

It also hurts morale: nobody dares check out the latest code from the VCS 'cos it has a reputation of introducing random build failures, which wastes time (have to make clean; make, every single time, and if you're dealing with C/C++ where the build times are measured in hours, that just kills productivity instantly). As a result, you get endless merge conflicts when everybody tries to check in their code which has been out-of-sync for weeks, and everybody blames each other for the conflicts ("argh why did you touch this file in *my* subdirectory?!"). Not having 100% reproducible builds is simply not workable.
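A minimal sketch of that point (flags and names are illustrative only): the compile switches live in a checked-in build file rather than in anyone's IDE settings, so every developer -- and every old tag you check out -- builds with exactly the same options:

    # Minimal sketch -- because this makefile is under version control,
    # developer A and developer B compile with identical switches; there is
    # no per-machine, per-IDE configuration to drift out of sync.
    CC     := gcc
    CFLAGS := -O2 -Wall -Wextra -DNDEBUG

    myprog: main.o util.o
            $(CC) $(CFLAGS) -o myprog main.o util.o

    %.o: %.c
            $(CC) $(CFLAGS) -c -o $@ $<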
 I myself always use a build system, because I use Vim so I don't have
 IDE building. The only exception is single-source files of interpreted
 languages, where I can use shebangs.
Single-source files are OK without a build system, though sometimes I still do it, just so I get the compile flags right. For shebangs, it's a different story 'cos you can just put the compile flags into the shebang line. But anything beyond that, requires a *reproducible* build system (even if it's the IDE's build command). Otherwise you're just setting yourself up for needless frustration and failure. T -- Designer clothes: how to cover less by paying more.
Jun 26 2013
parent reply "Idan Arye" <GenericNPC gmail.com> writes:
On Thursday, 27 June 2013 at 04:15:27 UTC, H. S. Teoh wrote:
 Anything else is just the formula for endless frustration, 
 untraceable
 bugs, and project failure. If your IDE's build function doesn't 
 support
 full end-to-end reproducible builds, it's worthless and should 
 be
 thrown out.
The IDE's build function is not defective - it's just incomplete. It does what it does well - the problem is that what it does is not enough. Most IDEs I know rely on plugins to do advanced stuff. So if you insist on using the IDE's build function, you'll want to get a flex plugin for your IDE (hopefully there is one...) and that plugin will enhance the IDE's build function to auto-generate the .yy.c file, and if it's a good plugin it'll also enhance the IDE's clean function to delete that file and/or its SCM interface to ignore that file.

It's a shame, really - IDEs could do so much more. A few years ago I got to know the build system Visual Studio runs behind the scenes - its buildfiles are the .csproj files. Those .csproj files are basically XML. VS makes them very messy, but after you clean them up and understand the format they look pretty much like Ant's build.xml files, and you can use them like a proper build system. I used those .csproj files to automate the build process, the testing, and the deployment. But I could only do it because I broke away from Visual Studio! I doubt it would accept those .csproj files after I cleaned away all the metadata it put there...

So people who use VS's build function are actually using a decent build system - but they can't utilize it to its fullest! VS has menus that allow you to change some paths and switches, but you can't do things like one target that does multiple tasks sequentially. So, Visual Studio uses a build system that could automate our .lex file - it just forgets to give you access to that functionality...
 Or rather, you throw the IDE out the window, 'cos its build 
 function is
 defective. :-P
The IDE is software - you can't *physically* throw it out the window.
 Then they only have themselves to blame when they face an 
 endless stream
 of build problems, heisenbugs that appear/disappear depending 
 on what
 extra commands you type at the command prompt, inability to 
 track down
 customer reported bugs in old versions, and all sorts of neat 
 and handy
 things like that.
If they work alone on the project, it's their problem. If you need to join that project - now it's your problem as well. Good luck with introducing a build system to an existing project and making everyone use it...
Jun 27 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Jun 27, 2013 at 10:20:59PM +0200, Idan Arye wrote:
 On Thursday, 27 June 2013 at 04:15:27 UTC, H. S. Teoh wrote:
Anything else is just the formula for endless frustration,
untraceable bugs, and project failure. If your IDE's build function
doesn't support full end-to-end reproducible builds, it's worthless
and should be thrown out.
The IDE's build function is not defective - it's just incomplete.
Incomplete == defective. :)
 It does what it does well - the problem is that what it does is not
 enough. Most IDEs I know rely on plugins to do advanced stuff. So if
 you insist on using the IDE's build function, you'll want to get a
 flex plugin for your IDE(hopefully there is one...) and that plugin
 will enhance the IDE's build function to auto-generate the .yy.c file,
 and if it's a good plugin it'll also enhance the IDE's clean function
 to delete that file and/or it's SCM interface to ignore that file.
That's something I never really understood about the Windows / GUI world. The backend functionality is already all there, yet for some strange reason the application refuses to have the means to access that functionality, requiring instead for you to install "plugins". To me, a "plugin" should *enhance* functionality by adding what wasn't there before, but in this case, it seems to be more about removing artificial barriers to reveal what has already been there all along. Same thing goes with the iPhone emoji apps, and many other such examples. As a CLI-only person, I find this really hard to grok.
 It's a shame, really - IDEs could do so much more. A few years ago I
 got to know the build system Visual Studio runs behind the scenes -
 its buildfiles are the .csproj files. Those .csproj files are
 basically XML. VS makes them very messy, but after you clean them up
 and understand the format they look pretty much like Ant's build.xml
 files, and you can use them like a proper build system.
 
 I used those .csproj files to automate the build process, the testing,
 and the deployment. But I could only do it because I broke away from
 Visual Studio! I doubt it would accept those .csproj files after I
 cleaned away all the metadata it put there...
 
 So people who use VS's build function are actually using a decent
 build system - but they can't utilize it to it's fullest! VS has menus
 that allow you to change some paths and switches, but you can't do
 things like one target that does multiple tasks sequentially. So,
 Visual Studio uses a build system that could automate our .lex file -
 it just forgets to give you access to that functionality...
Yeah, this is something I just don't understand with GUI-centric apps. It annoys me a lot, actually, that the necessary functionality is already there, yet there's no way for you to access it without opening the hood. And too often, the hood is welded shut, esp. when you're talking about the Windows world. It reinforces my opinion that GUIs are crippled point-n-grunt caricatures of a *real* UI, which is to use *language* that can convey what you want in much more expressive ways.
Or rather, you throw the IDE out the window, 'cos its build function
is defective. :-P
The IDE is a software - you can't *physically* throw it out the window.
It was a proverbial window, not a physical one. :) Well, either that, or kick it off the GUI window... :-P
Then they only have themselves to blame when they face an endless
stream of build problems, heisenbugs that appear/disappear depending
on what extra commands you type at the command prompt, inability to
track down customer reported bugs in old versions, and all sorts of
neat and handy things like that.
If they work alone on the project, it's their problem. If you need to join that project - now it's your problem as well.
Which is why I would not touch such projects with a 10-foot pole. The world is big enough to have more pleasant projects that I work with.
 Good luck with introducing a build system to an existing project and
 making everyone use it...
Heh, yeah. I've been complaining about the nasty mess that is the Makefile-based build system at my work for a long time, and so far nobody has listened except for my ex-supervisor (who has since left the company, sigh).

In fact, it's been going downhill. We *used* to support parallel building. Or at least, some semblance of parallel building, as long as you make sure certain components are separately built in single-threaded mode. It saved many many hours of idle waiting. Ever since the PTBs decided to merge in another major project, though, (which involved recursively copying all files from said other project on top of the existing source tree, and then cleaning up the resulting mess), parallel building has been completely out of the question. Worse yet, that other project's makefiles were (and still are) so nasty, that you couldn't simply re-run make after making some changes; you have to make clean, and then wait 2.5 hours for the miserable thing to build from scratch. If you don't make clean, the build will die halfway with obscure linker errors or errors about missing files, etc.. Gah.

What I would give, to convince people to move to a saner build system... But old habits die hard, and people dislike change. What can you do. *shrug*

T

--
Let's call it an accidental feature. -- Larry Wall
Jun 27 2013
next sibling parent "Craig Dillabaugh" <cdillaba cg.scs.carleton.ca> writes:
On Thursday, 27 June 2013 at 20:43:47 UTC, H. S. Teoh wrote:

clip
 That's something I never really understood about the Windows / 
 GUI
 world. The backend functionality is already all there, yet for 
 some
 strange reason the application refuses to have the means to 
 access that
 functionality, requiring instead for you to install "plugins". 
 To me, a
 "plugin" should *enhance* functionality by adding what wasn't 
 there
 before, but in this case, it seems to be more about removing 
 artificial
 barriers to reveal what has already been there all along. Same 
 thing
 goes with the iPhone emoji apps, and many other such examples.

 As a CLI-only person, I find this really hard to grok.
This isn't directly related to CLI, but one thing I really like about text-file-based build/configuration systems, and dislike about IDEs, is that you can easily add comments to your build scripts/config files to explain why you did something a certain way. This is helpful to you and to anyone else who might have to tweak it later. I also like the sort of inline help that some configuration files provide through the use of comments.

I generally find a good, text-based system easier to understand and work with than a GUI-based system. For example, with Visual Studio I remember writing down, in a separate document, the list of steps to perform in order to link to particular libraries (installed in non-standard locations) on my system; it involved lots of clicking. Comparatively, using QMake (Qt projects) I just find a .pro file that links to the correct libraries and copy over the relevant lines. Qt has the Qt Creator tool that edits .pro files for you, but most of the time I just edit the .pro files by hand if I want to make changes. Qt Creator seems to be able to deal with this. I am sure Visual Studio has a way of dealing with the problem I've described, but you can't beat copying a few lines from a text file for 'ease of use'.
Jun 27 2013
prev sibling parent reply "Idan Arye" <GenericNPC gmail.com> writes:
On Thursday, 27 June 2013 at 20:43:47 UTC, H. S. Teoh wrote:
 That's something I never really understood about the Windows / 
 GUI
 world. The backend functionality is already all there, yet for 
 some
 strange reason the application refuses to have the means to 
 access that
 functionality, requiring instead for you to install "plugins". 
 To me, a
 "plugin" should *enhance* functionality by adding what wasn't 
 there
 before, but in this case, it seems to be more about removing 
 artificial
 barriers to reveal what has already been there all along. Same 
 thing
 goes with the iPhone emoji apps, and many other such examples.

 As a CLI-only person, I find this really hard to grok.
With the popularity of XML build systems, it shouldn't be that hard for an IDE to provide you with a GUI to edit the targets and create complex build processes. I would have expected big IDEs like Eclipse to have that feature...
Jun 27 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Jun 27, 2013 at 11:48:15PM +0200, Idan Arye wrote:
 On Thursday, 27 June 2013 at 20:43:47 UTC, H. S. Teoh wrote:
That's something I never really understood about the Windows / GUI
world. The backend functionality is already all there, yet for some
strange reason the application refuses to have the means to access
that functionality, requiring instead for you to install "plugins".
To me, a "plugin" should *enhance* functionality by adding what
wasn't there before, but in this case, it seems to be more about
removing artificial barriers to reveal what has already been there
all along. Same thing goes with the iPhone emoji apps, and many other
such examples.

As a CLI-only person, I find this really hard to grok.
With the popularity of XML build systems, that shouldn't be that hard for an IDE to provide you with a GUI to edit the targets and make complex build processes. I would have expected big IDEs like Eclipse to have that feature...
Yeah, what with all the fancy code-editing features, you'd think having a built-in XML editor would be easy... XML is a pain to edit by hand, though, if your editor doesn't understand XML. It's sorta like Java and IDEs; in theory, you *can* write Java with nothing but pico, but in practice, it's so verbose that it's only tolerable if you use an IDE with autocompletion. T -- It said to install Windows 2000 or better, so I installed Linux instead.
Jun 27 2013
parent "Idan Arye" <GenericNPC gmail.com> writes:
On Thursday, 27 June 2013 at 21:56:00 UTC, H. S. Teoh wrote:
 On Thu, Jun 27, 2013 at 11:48:15PM +0200, Idan Arye wrote:
 On Thursday, 27 June 2013 at 20:43:47 UTC, H. S. Teoh wrote:
That's something I never really understood about the Windows 
/ GUI
world. The backend functionality is already all there, yet 
for some
strange reason the application refuses to have the means to 
access
that functionality, requiring instead for you to install 
"plugins".
To me, a "plugin" should *enhance* functionality by adding 
what
wasn't there before, but in this case, it seems to be more 
about
removing artificial barriers to reveal what has already been 
there
all along. Same thing goes with the iPhone emoji apps, and 
many other
such examples.

As a CLI-only person, I find this really hard to grok.
With the popularity of XML build systems, that shouldn't be that hard for an IDE to provide you with a GUI to edit the targets and make complex build processes. I would have expected big IDEs like Eclipse to have that feature...
Yeah, what with all the fancy code-editing features, you'd think having a built-in XML editor would be easy... XML is a pain to edit by hand, though, if your editor doesn't understand XML. It's sorta like Java and IDEs; in theory, you *can* write Java with nothing but pico, but in practice, it's so verbose that it's only tolerable if you use an IDE with autocompletion. T
I'm not talking about a GUI XML editor here - I'm talking about a buildfile editor: one that is familiar with the common tasks and lets you edit them with a GUI menu. Of course, it should also be expandable with plugins to deal with third-party tasks, and have a way to handle generic tasks, but having a GUI for the common tasks is the key.

Eclipse already has a textual XML editor, and it has autocompletion for Ant's build.xml, but it's much easier to configure an Eclipse run configuration with a configuration menu than to edit the Ant target with a text editor, so most users will prefer to use the IDE's build function.

It's also important that the IDE uses this build system by default. When you open a new project, it should automatically create a buildfile, and only store in it data that needs to be shared between developers. If the IDE needs to store other data that is only relevant locally, it should be in a separate, unversioned file - otherwise this data will create redundant, hard-to-solve merge conflicts.

The IDE needs to have both features: If the IDE does not use the proper build system by default, most developers will use the default build function, and when they reach its limit they'll have a hard time switching to the proper build system - and many might choose not to switch, and will use bad solutions that introduce technical debt. If the IDE does not have a GUI configuration tool, people will simply not use the IDE. IDE users don't like having to edit the build configuration with a text editor - not when most IDEs have a nice GUI for it. People who prefer the powerful edit-by-text build system over the crippled edit-by-GUI one usually prefer text editors over IDEs anyway.
Jun 27 2013