
digitalmars.D - What are the worst parts of D?

reply "Tofu Ninja" <emmons0 purdue.edu> writes:
There was a recent video[1] by Jonathan Blow about what he would 
want in a programming language designed specifically for game 
development. Go, Rust, and D were mentioned, and his reason for 
not wanting to use D is that it is "too much like C++". Although 
he does not really go into it much, and it was a very small part 
of the video, it still brings up some questions.

What I am curious about is: what are the worst parts of D? What 
sort of things would be done differently if we could start over, 
or if we were designing a D3? I am not asking this to try and 
bash D, but because it is helpful to know what's bad as well as 
what's good.

I will start off...
GC by default is a big sore point that everyone brings up
"is" expressions are pretty wonky
Libraries could definitely be split up better

What do you think are the worst parts of D?

[1] https://www.youtube.com/watch?v=TH9VCN6UkyQ
Sep 20 2014
next sibling parent reply "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote:
 What do you think are the worst parts of D?
1. The whining in the forums.
2. Lacks focus on a dedicated application area.
3. No strategy for getting more people on board.
4. No visible roadmap.
5. Too much focus on retaining C semantics (Go does a bit better).
6. Inconsistencies and hacks (too many low hanging fruits).
7. More hacks are being added rather than removing existing ones.
8. Not enough performance-oriented process.
9. It's mysteriously addictive and annoying at the same time.
10. It's contagious and now I'm in bed with a cold.
Sep 20 2014
next sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Saturday, 20 September 2014 at 13:30:24 UTC, Ola Fosheim 
Grostad wrote:
 On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja 
 wrote:
 What do you think are the worst parts of D?
 1. The whining in the forums.
 2. Lacks focus on a dedicated application area.
 3. No strategy for getting more people on board.
 4. No visible roadmap.
Not really a problem with the language. Just problems.
 5. Too much focus on retaining C semantics (go does a bit 
 better)

 6. Inconsistencies and hacks (too many low hanging fruits)

 7. More hacks are being added rather than removing existing 
 ones.
Definitely can agree, I think it has to do with the sentiment that it is "too much like C++"
 8. Not enough performance oriented process.
Not sure what you are saying, are you saying there is not a big enough focus on performance?
 9. It's mysteriously addictive and annoying at the same time.
Is that a problem?
 10. It's contagious and now I'm in bed with a cold.
:<
Sep 20 2014
next sibling parent reply "Ola Fosheim Grøstad" writes:
On Saturday, 20 September 2014 at 14:22:32 UTC, Tofu Ninja wrote:
 Not really a problem with the language. Just problems.
It is kind of interlinked, in a world that keeps moving forward.

I found myself agreeing (or at least empathising) with a lot of what Jonathan Blow said. Of course, since his presentation was laid-back, the people on reddit kind of attacked him, and who knows, maybe he lost inspiration. He did at least respond on twitter. And his language project probably depends on his next game, Witness (which sounds cool), to succeed.

Anyway, I think he got the right take on it: reach out to other devs in his own sector and ask them about their practice, then tailor a language with little syntactical overhead for that use scenario. Of course, it won't fly if he doesn't manage to attract people who are more into the semantics of computer languages, but I root for him anyway. I like his attitude.

On a related note, I also read somewhere that Carmack is looking at GC for the gameplay data: basically only a heap-scanning, but compacting, GC that can run per frame. It seems the game logic usually fits in 5MB, so it might work.
 Definitely can agree, I think it has to do with the sentiment 
 that it is "too much like C++"
Yes, I think Jonathan got that part right. I guess also that any kind of "unique traits" that feel like "inventions" will be eagerly picked up and held up as good ideas by enthusiasts, even if they are just special cases of more general constructs, or variations of existing concepts posing under a new name. Perhaps an important aspect of the sociology of computer languages. (Lispers tend to be terribly proud of their language of choice :)
 8. Not enough performance oriented process.
Not sure what you are saying, are you saying there is not a big enough focus on performance?
I think there is too much focus on features, both in the language and in the library. I'd personally prefer something smaller and more benchmark-focused. It is better to be very good at something limited, IMO.

I also think that the big win in the coming years will go to the language that can most successfully provide elegant, low-overhead access to SIMD instructions without having to resort to intrinsics. I have no idea what the syntax would be, but that seems to be the most promising area of language design in terms of performance, IMO.
Sep 20 2014
parent "Ola Fosheim Grøstad" writes:
Jonathan Blow has expressed himself about D before…

http://www.kotaku.com.au/2012/05/most-popular-video-games-are-dumb-can-we-stop-apologising-for-them-now/

It could be incidental, of course.
Sep 20 2014
prev sibling parent reply "AsmMan" <jckj33 gmail.com> writes:
On Saturday, 20 September 2014 at 14:22:32 UTC, Tofu Ninja wrote:
 On Saturday, 20 September 2014 at 13:30:24 UTC, Ola Fosheim 
 Grostad wrote:
 On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja 
 wrote:
 What do you think are the worst parts of D?
 1. The whining in the forums.
 2. Lacks focus on a dedicated application area.
 3. No strategy for getting more people on board.
 4. No visible roadmap.
Not really a problem with the language. Just problems.
 5. Too much focus on retaining C semantics (go does a bit 
 better)

 6. Inconsistencies and hacks (too many low hanging fruits)

 7. More hacks are being added rather than removing existing 
 ones.
Definitely can agree, I think it has to do with the sentiment that it is "too much like C++"
It's really needed to keep D as C++-compatible as possible, otherwise too few people are going to use it. If C++ wasn't C-compatible, do you think it would be the successful language it is today? I don't think so.
Sep 22 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 22 Sep 2014 14:28:47 +0000
AsmMan via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 It's really needed to keep C++-compatible as possible otherwise
 too few people are going to use it. If C++ wasn't C-compatible do
 you think it would be a successfully language it is today? I
 don't think so.
D is not c++-compatible anyway. and talking about compatibility: it's what made c++ such a monster. if someone wants c++ he knows where to download c++ compiler. the last thing D should look at is c++.
Sep 22 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/22/14, 1:44 PM, ketmar via Digitalmars-d wrote:
 On Mon, 22 Sep 2014 14:28:47 +0000
 AsmMan via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 It's really needed to keep C++-compatible as possible otherwise
 too few people are going to use it. If C++ wasn't C-compatible do
 you think it would be a successfully language it is today? I
 don't think so.
D is not c++-compatible anyway.
D is ABI- and mangling-compatible with C++.
 and talking about compatibility: it's
 what made c++ such a monster. if someone wants c++ he knows where to
 download c++ compiler.

 the last thing D should look at is c++.
Well what can I say? I'm glad you're not making the decisions. Andrei
Sep 22 2014
next sibling parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 22 Sep 2014 16:14:28 -0700
Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com>
wrote:

 D is not c++-compatible anyway.
D is ABI- and mangling-compatible with C++.
but we were talking about syntactic compatibility.
 Well what can I say? I'm glad you're not making the decisions.
i HATE c++. i want it to DIE, to disappear completely, with all the code written in it. so yes, it's good to D that i can't freely mess with mainline codebase. 'cause the first thing i'll do with it is destroying any traces of c++ interop. the world will be a better place without c++.
Sep 22 2014
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 23 September 2014 at 01:39:00 UTC, ketmar via
Digitalmars-d wrote:
 On Mon, 22 Sep 2014 16:14:28 -0700
 Andrei Alexandrescu via Digitalmars-d 
 <digitalmars-d puremagic.com>
 wrote:

 D is not c++-compatible anyway.
D is ABI- and mangling-compatible with C++.
but we were talking about syntactic compatibility.
 Well what can I say? I'm glad you're not making the decisions.
i HATE c++. i want it to DIE, to disappear completely, with all the code written in it. so yes, it's good to D that i can't freely mess with mainline codebase. 'cause the first thing i'll do with it is destroying any traces of c++ interop. the world will be a better place without c++.
If you hate C++, you shouldn't have too much trouble understanding that offering a way out for people using C++ is key.
Sep 22 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Tue, 23 Sep 2014 01:45:31 +0000
deadalnix via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 If you hate C++, you shouldn't have too much trouble to
 understand that offering a way out for people using C++ is key.
but there is! D is perfectly able to replace c++. ah, i know, there is alot of legacy c++ code and people can't just rewrite it in D. so... so bad for that people then.
Sep 22 2014
prev sibling next sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Sep 23, 2014 at 04:38:51AM +0300, ketmar via Digitalmars-d wrote:
 On Mon, 22 Sep 2014 16:14:28 -0700
 Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com>
 wrote:
 
 D is not c++-compatible anyway.
D is ABI- and mangling-compatible with C++.
but we were talking about syntactic compatibility.
 Well what can I say? I'm glad you're not making the decisions.
i HATE c++. i want it to DIE, to disappear completely, with all the code written in it. so yes, it's good to D that i can't freely mess with mainline codebase. 'cause the first thing i'll do with it is destroying any traces of c++ interop. the world will be a better place without c++.
For a moment, I read that as you'll destroy any traces of C++, so the first thing that would go is the DMD source code. :-P

T

-- 
Shin: (n.) A device for finding furniture in the dark.
Sep 22 2014
prev sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 22 Sep 2014 19:16:27 -0700
"H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> wrote:

 For a moment, I read that as you'll destroy any traces of C++, so the
 first thing that would go is the DMD source code. :-P
but we have magicport! well, almost... i'll postpone c++ destruction until magicport will be complete and working. ;-)
Sep 22 2014
prev sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
The lack of clear direction, or of communication thereof. A 
continual adding of new stuff to try and appease the theoretical 
masses who will certainly come flocking to D if it is implemented, 
and a lack of attention paid to tightening up what we've already 
got and deprecating old stuff that no one wants any more. And 
inconsistency in how things work in the language. Oh, and 
function attributes. I'm sure someone likes them, but I'm 
drowning in pure @system const immutable @nogc @illegitimate @wtf 
hell.
Sep 23 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/23/14, 7:29 AM, Sean Kelly wrote:
 The lack of clear direction or communication thereof.
* C++ compatibility
* Everything GC-related

Probably a distant third is improving build tooling. But those two are more important than everything else by an order of magnitude.

Andrei
Sep 23 2014
next sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Tuesday, 23 September 2014 at 15:47:21 UTC, Andrei 
Alexandrescu wrote:
 On 9/23/14, 7:29 AM, Sean Kelly wrote:
 The lack of clear direction or communication thereof.
 * C++ compatibility
 * Everything GC-related

 Probably a distant third is improving build tooling. But those two are more important that everything else by an order of magnitude.
Well yeah, but that's just the current clear direction. Who knows what it will be next week.
Sep 23 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/23/14, 9:06 AM, Sean Kelly wrote:
 On Tuesday, 23 September 2014 at 15:47:21 UTC, Andrei Alexandrescu wrote:
 On 9/23/14, 7:29 AM, Sean Kelly wrote:
 The lack of clear direction or communication thereof.
 * C++ compatibility
 * Everything GC-related

 Probably a distant third is improving build tooling. But those two are more important that everything else by an order of magnitude.
Well yeah, but that's just the current clear direction. Who knows what it will be next week.
It's been this for a good while, and it will probably be until done. -- Andrei
Sep 23 2014
next sibling parent reply "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Tuesday, 23 September 2014 at 16:19:31 UTC, Andrei 
Alexandrescu wrote:
 On 9/23/14, 9:06 AM, Sean Kelly wrote:
 On Tuesday, 23 September 2014 at 15:47:21 UTC, Andrei 
 Alexandrescu wrote:
 On 9/23/14, 7:29 AM, Sean Kelly wrote:
 The lack of clear direction or communication thereof.
 * C++ compatibility

 Probably a distant third is improving build tooling. But those two are more important that everything else by an order of magnitude.
Well yeah, but that's just the current clear direction. Who knows what it will be next week.
It's been this for a good while, and it will probably be until done. -- Andrei
Here at work I'm toying with C++ compatibility right now: if it's viable, I would like to use D instead of C++ for a cloud tool that must link with C++ computer vision libraries... Right now it sounds promising, so this feature could really be very interesting, not only to facilitate integration with the existing in-house codebase, but also for brand new projects.

I'm starting to think that there will be a lot of buzz and fuss about D as soon as good bindings to popular C++ libs appear in the wild...

---
/Paolo
Sep 23 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/23/14, 9:40 AM, Paolo Invernizzi wrote:
 I'm starting to think that there will be a lot of buzz and fuss about D
 as soon as good bindings to popular C++ libs will appear in the wild...
Yah, and core.stdcpp will be quite the surprise. -- Andrei
Sep 23 2014
parent "Atila Neves" <atila.neves gmail.com> writes:
On Tuesday, 23 September 2014 at 16:50:26 UTC, Andrei 
Alexandrescu wrote:
 On 9/23/14, 9:40 AM, Paolo Invernizzi wrote:
 I'm starting to think that there will be a lot of buzz and 
 fuss about D
 as soon as good bindings to popular C++ libs will appear in 
 the wild...
Yah, and core.stdcpp will be quite the surprise. -- Andrei
Really?? Wow. Awesome! Atila
Sep 23 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 23/09/14 18:19, Andrei Alexandrescu wrote:

 It's been this for a good while, and it will probably be until done. --
 Andrei
So why isn't there a publicly available road map? Note, this one [1] mentions neither C++ nor the GC.

[1] http://wiki.dlang.org/Agenda

-- 
/Jacob Carlborg
Sep 23 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/23/14, 11:13 PM, Jacob Carlborg wrote:
 On 23/09/14 18:19, Andrei Alexandrescu wrote:

 It's been this for a good while, and it will probably be until done. --
 Andrei
So why isn't there a publicly available road map? Note, this one [1] doesn't mention C++ nor the GC. [1] http://wiki.dlang.org/Agenda
Could you please update it? C++ and GC. C++ and GC. Thanks. -- Andrei
Sep 23 2014
prev sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 * C++ compatibility
 * Everything GC-related

 Probably a distant third is improving build tooling. But those 
 two are more important that everything else by an order of 
 magnitude.
In parallel there are other things, like ddmd, checked ints in the core library, perhaps finishing shared libs, testing the patch from Kenji that fixes the module system, and more.

Bye,
bearophile
Sep 23 2014
prev sibling next sibling parent reply "David Nadlinger" <code klickverbot.at> writes:
On Tuesday, 23 September 2014 at 14:29:06 UTC, Sean Kelly wrote:
 […] and a lack of attention paid to tightening up what we've 
 already got and deprecating old stuff that no one wants any 
 more.
This. The hypocritical fear of making breaking changes (the fact that not all of them are bad has been brought up over and over again by some of the corporate users) is crippling us, making D a much more cluttered language than necessary.

Seriously, once somebody comes up with an automatic fixup tool, there is hardly any generic argument left against language changes. Sure, there will always be some cases where manual intervention is still required, such as with string mixins. But unless we have lost hope that the D community is still to grow significantly, I don't see why the burden of proof should automatically lie on the side of those in favor of cleaning up cruft and semantic quirks. Most D code is still to be written.

David
Sep 23 2014
next sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Tue, 23 Sep 2014 18:32:39 +0000
David Nadlinger via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Seriously, once somebody comes up with an automatic fixup tool,=20
i bet nobody will. for many various reasons.
Sep 23 2014
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/23/14, 11:32 AM, David Nadlinger wrote:
 On Tuesday, 23 September 2014 at 14:29:06 UTC, Sean Kelly wrote:
 […] and a lack of attention paid to tightening up what we've already
 got and deprecating old stuff that no one wants any more.
This. The hypocritical fear of making breaking changes (the fact that not all of them are bad has been brought up over and over again by some of the corporate users) is crippling us, making D a much more cluttered language than necessary. Seriously, once somebody comes up with an automatic fixup tool, there is hardly any generic argument left against language changes. Sure, there will always be some cases where manual intervention is still required, such as with string mixins. But unless we have lost hope that the D community is still to grow significantly, I don't see why the burden of proof should automatically lie on the side of those in favor of cleaning up cruft and semantical quirks. Most D code is still to be written.
Well put. Again, the two things we need to work on are C++ compatibility and the GC. -- Andrei
Sep 23 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Tuesday, 23 September 2014 at 18:38:08 UTC, Andrei 
Alexandrescu wrote:
 Well put. Again, the two things we need to work on are C++ 
 compatibility and the GC. -- Andrei
Has much thought gone into how we'll address C++ const?
Sep 23 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/23/14, 12:01 PM, Sean Kelly wrote:
 On Tuesday, 23 September 2014 at 18:38:08 UTC, Andrei Alexandrescu wrote:
 Well put. Again, the two things we need to work on are C++
 compatibility and the GC. -- Andrei
Has much thought gone into how we'll address C++ const?
Some. A lot more needs to. -- Andrei
Sep 23 2014
parent Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 23/09/2014 20:05, Andrei Alexandrescu wrote:
 On 9/23/14, 12:01 PM, Sean Kelly wrote:
 On Tuesday, 23 September 2014 at 18:38:08 UTC, Andrei Alexandrescu wrote:
 Well put. Again, the two things we need to work on are C++
 compatibility and the GC. -- Andrei
Has much thought gone into how we'll address C++ const?
Some. A lot more needs to. -- Andrei
The resurrection of head-const?

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros
Sep 30 2014
prev sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Sep 23, 2014 at 07:01:05PM +0000, Sean Kelly via Digitalmars-d wrote:
 On Tuesday, 23 September 2014 at 18:38:08 UTC, Andrei Alexandrescu wrote:
Well put. Again, the two things we need to work on are C++
compatibility and the GC. -- Andrei
Has much thought gone into how we'll address C++ const?
Is that even addressable?? D const is fundamentally different from C++ const. Short of introducing logical const into D, I don't see how we could bridge the gap.

T

-- 
It is the quality rather than the quantity that matters. -- Lucius Annaeus Seneca
Sep 23 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Tuesday, 23 September 2014 at 19:10:07 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 On Tue, Sep 23, 2014 at 07:01:05PM +0000, Sean Kelly via 
 Digitalmars-d wrote:
 On Tuesday, 23 September 2014 at 18:38:08 UTC, Andrei 
 Alexandrescu wrote:
Well put. Again, the two things we need to work on are C++
compatibility and the GC. -- Andrei
Has much thought gone into how we'll address C++ const?
Is that even addressable?? D const is fundamentally different from C++ const. Short of introducing logical const into D, I don't see how we could bridge the gap.
I haven't really thought about it, but something could probably be made to work with type wrappers that do implicit casting, plus just pretending that const is the same, like we do with our C interfaces.

I'm also wondering how we'd handle something like:

    struct S { virtual int foo() {...} };
    std::map<int,S> m;

We'd have to make S a value type in D, so a struct, but a D struct doesn't allow virtual functions. Maybe something weird with in-place construction of classes?

I suspect the more we look into C++ compatibility the more problems we'll find, and actually interfacing with most C++ code worth using will result in terrifying D code. But I hope I'm wrong, since C++ support is apparently now where all of our effort is being devoted (mark me down as being completely uninterested in this feature despite using C/C++ at work).
Sep 23 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Sep 23, 2014 at 07:50:38PM +0000, Sean Kelly via Digitalmars-d wrote:
 On Tuesday, 23 September 2014 at 19:10:07 UTC, H. S. Teoh via Digitalmars-d
 wrote:
On Tue, Sep 23, 2014 at 07:01:05PM +0000, Sean Kelly via Digitalmars-d
wrote:
[...]
Has much thought gone into how we'll address C++ const?
Is that even addressable?? D const is fundamentally different from C++ const. Short of introducing logical const into D, I don't see how we could bridge the gap.
I haven't really thought about it, but something could probably be made to work with type wrappers that do implicit casting plus just pretending that const is the same like we do with our C interfaces. I'm also wondering how we'd handle something like: struct S { virtual int foo() {...} }; std::map<int,S> m; We'd have to make S a value type in D, so struct, but D struct does't allow virtual functions. Maybe something weird with in-place construction of classes?
Or turn them into function pointers / member delegates? But that doesn't work well with ABI compatibility.
 I suspect the more we look into C++ compatibility the more problems
 we'll find,
SFINAE is another dark corner of disaster waiting to happen once we decide to implement C++ template compatibility. As is Koenig lookup, which will become indispensable if D code is to actually use non-trivial C++ libraries.
 and actually interfacing with most C++ code worth using will result in
 terrifying D code.  But I hope I'm wrong since C++ support is
 apparently now where all of our effort is being devoted (mark me down
 as being completely uninterested in this feature despite using C/C++
 at work).
Yeah, I can't say I'm exactly thrilled about being able to call C++ code from D. I suppose it's a nice-to-have, but I'm not sure how well that's gonna work in practice, given the fundamental differences between D and C++.

But be that as it may, if we're serious about cross-linguistic ABI compatibility, then we'd better start with a solid design of how exactly said interfacing is going to happen in a way that fits in well with how D works. Cowboying our way through piecemeal (i.e., ad hoc addition of compatibilities like adding C++ class support, then C++ templates, then SFINAE in extern(c++), then ...) isn't going to cut it. We might end up reinventing C++, even more poorly than C++ already is.

T

-- 
Everybody talks about it, but nobody does anything about it! -- Mark Twain
Sep 23 2014
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 23 September 2014 at 20:22:32 UTC, H. S. Teoh via
Digitalmars-d wrote:
 SFINAE is another dark corner of disaster waiting to happen, 
 once we
 decide to implement C++ template compatibility. As well as 
 Koenig
 lookup, which will become indispensible if D code is to 
 actually use
 non-trivial C++ libraries.
We don't need these to be compatible with C++. We don't want to be able to cut/paste C++ into a .d file and expect it to compile; rather, we want to be able to map a reasonable amount of C++ constructs and have them interact with the C++ code, and back.
Sep 23 2014
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/23/14, 4:25 PM, deadalnix wrote:
 On Tuesday, 23 September 2014 at 20:22:32 UTC, H. S. Teoh via
 Digitalmars-d wrote:
 SFINAE is another dark corner of disaster waiting to happen, once we
 decide to implement C++ template compatibility. As well as Koenig
 lookup, which will become indispensible if D code is to actually use
 non-trivial C++ libraries.
We don't need these to be compatible with C++. We don't want to be able to cut/paste C++ into a . file and expect it to compile, but that you can map a reasonable amount of C++ constructs and expect them to interact with the C++ code and back.
Yah, that's exactly it. Syntax and semantics stay D; the functions called may be C++. -- Andrei
Sep 23 2014
prev sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Sep 23, 2014 at 11:25:52PM +0000, deadalnix via Digitalmars-d wrote:
 On Tuesday, 23 September 2014 at 20:22:32 UTC, H. S. Teoh via
 Digitalmars-d wrote:
SFINAE is another dark corner of disaster waiting to happen, once we
decide to implement C++ template compatibility. As well as Koenig
lookup, which will become indispensible if D code is to actually use
non-trivial C++ libraries.
We don't need these to be compatible with C++. We don't want to be able to cut/paste C++ into a . file and expect it to compile, but that you can map a reasonable amount of C++ constructs and expect them to interact with the C++ code and back.
You *will* need SFINAE if you expect to interface C++ template libraries with D. Imagine that an existing codebase is using some C++ template library that depends on SFINAE. You'd like to start migrating to D, so you start writing new code in D. Eventually you need to make use of the C++ template library in order to interface with the C++ parts of the code, so you write a .di that declares template functions in an extern(c++) block. It works... some of the time. Other times you start getting weird errors, or the wrong functions get called, because the C++ template library was written with SFINAE in mind, but D doesn't have that. So at the end of the day, it's a gigantic mess, and you go crawling back to C++.

Unless, of course, we draw the line at templates and say that we won't support template compatibility with C++ (and I'd fully support that decision!). But that means we throw all C++ template libraries out the window, and any C++ codebase that makes heavy use of a template library will have to be rewritten from scratch in D.

As for Koenig lookup, you might run into problems if you declare C++ wrappers for D functions in the C++ part of the codebase, and suddenly the wrong D functions are getting called due to Koenig lookup in C++, which wasn't considered when the D part of the code was written.

T

-- 
Without effort, you won't even pull a fish out of the pond.
Sep 23 2014
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/23/14, 5:06 PM, H. S. Teoh via Digitalmars-d wrote:
 You *will* need SFINAE if you expect to interface C++ template libraries
 with D.
Nope. -- Andrei
Sep 23 2014
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 24 September 2014 at 00:08:19 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 You *will* need SFINAE if you expect to interface C++ template 
 libraries
 with D. Imagine that an existing codebase is using some C++ 
 template
 library that depends on SFINAE. You'd like to start migrating 
 to D, so
 you start writing new code in D. Eventually you need to make 
 use of the
 C++ template library in order to interface with the C++ parts 
 of the
 code, so you write a .di that declares template functions in an
 extern(c++) block. It works...  some of the time. Other times 
 you start
 getting weird errors or the wrong functions get called, because 
 the C++
 template library was written with SFINAE in mind, but D doesn't 
 have
 that. So at the end of the day, it's a gigantic mess, and you go
 crawling back to C++.
I think you can support a large part of C++ templates without SFINAE. It is not that common, and it only matters for the binding if it changes the interface or the layout of something. If one wants to map these, it can be done with some static if magic. But I'm fairly confident that it won't even be necessary in most situations.
Sep 23 2014
prev sibling parent reply "Kagamin" <spam here.lot> writes:
On Wednesday, 24 September 2014 at 00:08:19 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 You *will* need SFINAE if you expect to interface C++ template 
 libraries
 with D. Imagine that an existing codebase is using some C++ 
 template
 library that depends on SFINAE. You'd like to start migrating 
 to D, so
 you start writing new code in D. Eventually you need to make 
 use of the
 C++ template library in order to interface with the C++ parts 
 of the
 code, so you write a .di that declares template functions in an
 extern(c++) block. It works...  some of the time. Other times 
 you start
 getting weird errors or the wrong functions get called, because 
 the C++
 template library was written with SFINAE in mind, but D doesn't 
 have
 that. So at the end of the day, it's a gigantic mess, and you go
 crawling back to C++.

 Unless, of course, we draw the line at templates and say that 
 we won't
 support template compatibility with C++ (and I'd fully support 
 that
 decision!). But that means we throw all C++ template libraries 
 out the
 window, and any C++ codebase that makes heavy use of a template 
 library
 will have to be rewritten from scratch in D.
I'm not a C++ guru, but it looks like SFINAE exists for simplicity, so that templates can be matched without template constraints and reflection. This looks equivalent to D template constraints: if a template doesn't work for some parameters, just filter them out.
Sep 24 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/24/14, 6:06 AM, Kagamin wrote:
 I'm not a C++ guru, but it looks like SFINAE exists for simplicity, so
 that templates can be matched without template constraints and
 reflection. This looks equivalent to D template constraints. If template
 doesn't work for some parameters, just filter them out.
That's right. The enable_if etc. stuff is still part of the type (and therefore the mangling), but the selection itself can be trivially done on the D side with template constraints. No worries, we know how to do it. -- Andrei
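A minimal sketch of that mapping (the `twice` function and the C++ overloads in the comments are hypothetical, not from any real binding): a D template constraint removes a candidate from overload resolution much as an enable_if-driven substitution failure does on the C++ side.

```d
import std.traits : isFloatingPoint, isIntegral;

// Hypothetical C++ side, selected via SFINAE:
//   template<class T, std::enable_if_t<std::is_integral_v<T>, int> = 0>
//   T twice(T x) { return 2 * x; }
//   template<class T, std::enable_if_t<std::is_floating_point_v<T>, int> = 0>
//   T twice(T x) { return x * 2; }
//
// D side: the same selection expressed with template constraints.
// A failed constraint simply drops the candidate, just as a
// substitution failure drops it in C++.
T twice(T)(T x) if (isIntegral!T)      { return cast(T)(2 * x); }
T twice(T)(T x) if (isFloatingPoint!T) { return x * 2; }

void main()
{
    assert(twice(3) == 6);        // picks the integral overload
    assert(twice(1.5) == 3.0);    // picks the floating-point overload
    static assert(!__traits(compiles, twice("hi"))); // filtered out entirely
}
```

The selection logic lives entirely in the constraint, so no SFINAE machinery is needed on the D side.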
Sep 24 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 23/09/14 20:32, David Nadlinger wrote:

 Seriously, once somebody comes up with an automatic fixup tool, there is
 hardly any generic argument left against language changes.
Brian has already said that such a tool is fairly easy to create in many cases, and that he is willing to do so if it will be used. But so far neither Andrei nor Walter has shown any sign of willingness to break code that can be fixed with a tool like this. I can understand that Brian doesn't want to create such a tool if it's not going to be used. -- /Jacob Carlborg
Sep 23 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/23/14, 11:16 PM, Jacob Carlborg wrote:
 On 23/09/14 20:32, David Nadlinger wrote:

 Seriously, once somebody comes up with an automatic fixup tool, there is
 hardly any generic argument left against language changes.
Brian has already said that such a tool is fairly easy to create in many cases, and that he is willing to do so if it will be used. But so far neither Andrei nor Walter has shown any sign of willingness to break code that can be fixed with a tool like this. I can understand that Brian doesn't want to create such a tool if it's not going to be used.
Some breakage is going to happen even though we're increasingly conservative. So yes, having a tool is nice. Andrei
Sep 23 2014
prev sibling next sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Tue, 23 Sep 2014 14:29:05 +0000
Sean Kelly via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 function attributes.  I'm sure someone likes them, but I'm
 drowning in pure @system const immutable @nogc @illegitimate @wtf
 hell.
and 'const' is so overpowered that it's barely usable on methods and struct/class fields.
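For illustration, one hedged example of the kind of friction meant here (the `Memo` type is hypothetical): D's const is transitive and, unlike C++, has no `mutable` escape hatch, so even a simple memoizing accessor cannot be marked const.

```d
struct Memo
{
    private int cached = -1; // -1 means "not computed yet"

    // We would like this to be `int value() const`, as it could be in
    // C++ with a `mutable` cache field. D's transitive const forbids
    // mutating any field through a const `this`, so the method must
    // stay mutable, and const callers cannot use it at all.
    int value()
    {
        if (cached < 0)
            cached = compute(); // compile error if `value` were const
        return cached;
    }

    private int compute() const { return 42; }
}

void main()
{
    auto m = Memo();
    assert(m.value == 42);
    assert(m.value == 42); // second call is served from the cache
    // const Memo cm; cm.value;  // would not compile: value() is not const
}
```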
Sep 23 2014
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/23/2014 7:29 AM, Sean Kelly wrote:
 The lack of clear direction or communication thereof.  A continual adding of
new
 stuff to try and appease the theoretical masses who will certainly come
flocking
 to D if implemented, and a lack of attention paid to tightening up what we've
 already got and deprecating old stuff that no one wants any more.
I find this hard to reconcile with what the changelog says.
 And inconsistency in how things work in the language.  Oh, and function
attributes.
  I'm sure someone likes them, but I'm drowning in pure @system const immutable
  @nogc @illegitimate @wtf hell.
Fortunately, those attributes are inferred for template functions. I did try to extend that to auto functions, but got a lot of resistance.
Sep 23 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Wednesday, 24 September 2014 at 03:44:52 UTC, Walter Bright 
wrote:
 On 9/23/2014 7:29 AM, Sean Kelly wrote:
 The lack of clear direction or communication thereof.  A 
 continual adding of new
 stuff to try and appease the theoretical masses who will 
 certainly come flocking
 to D if implemented, and a lack of attention paid to 
 tightening up what we've
 already got and deprecating old stuff that no one wants any 
 more.
I find this hard to reconcile with what the changelog says.
There's clearly been a lot of attention paid to bug fixes. But for the rest... I feel like the overall direction is towards whatever is currently thought to gain the most new users. The thing is that D has already *got* me. What I want is for the language I've already got to be polished until I can use it in a freaking space telescope. I'm sick of "yes but" languages. Every time I hit an obstacle in D I think "oh great, D is way behind other languages in all these ways and D itself is broken to boot. Why am I using this again?" And it could be a tiny thing. It doesn't matter. Every little issue like that is magnified a thousandfold because D is already such a hard sell.

So in that respect I understand the push for C++ support because that's the speed bump that Andrei has hit. But here's the thing... by pursuing this we're effectively focusing all of our efforts *on another language*. And we're doing so when D itself still needs a lot of work. Maybe not in any truly immense ways, but as I said before, those tiny things can seem huge when you're already struggling to justify just using the language at all. Maybe all this will pull together into a cohesive whole, but so far it feels kind of disconnected. So that's part of what I meant by "tightening up."
 And inconsistency in how things work in the language.  Oh, and 
 function attributes.
  I'm sure someone likes them, but I'm drowning in pure @system
  const immutable
  @nogc @illegitimate @wtf hell.
Fortunately, those attributes are inferred for template functions. I did try to extend that to auto functions, but got a lot of resistance.
Yes, the inference is very nice. And I do see the use for each attribute. It's just... when I look at a function and there's a line of attributes before the function declaration that has nothing to do with what the function actually does, but rather with how it's implemented, it's just syntactic noise. It's information for the compiler, not for me as a user. I hope we'll eventually get to the point where everything is inferred and the attributes disappear entirely.
Sep 23 2014
next sibling parent reply Brad Roberts via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 9/23/2014 9:46 PM, Sean Kelly via Digitalmars-d wrote:
 There's clearly been a lot of attention paid to bug fixes.  But for the
 rest... I feel like the overall direction is towards whatever is
 currently thought to gain the most new users.  The thing is that D has
 already *got* me.  What I want is for the language I've already got to
 be polished until I can use it in a freaking space telescope.  I'm sick
 of "yes but" languages. Every time I hit an obstacle in D I think "oh
 great, D is way behind other languages in all these ways and D itself is
 broken to boot.  Why am I using this again?"  And it could be a tiny
 thing.  It doesn't matter.  Every little issue like that is magnified a
 thousandfold because D is already such a hard sell.
I agree with Sean quite a bit here. Let's turn the camera around and look at it from a different angle. I'm hard pressed to find a new feature from the last few years that's actually thoroughly complete. And by complete I mean that druntime and phobos use it everywhere it should be used. Shared libraries? nope. Any of the new attributes? nope. 64 bit support? nope. const? shared? cleaning up object? .. nope. And that's not even getting into the big gaps that exist. I understand quite thoroughly why c++ support is a big win, or will be, but the Oh Shiny focus is pretty discouraging for me as well. This isn't meant to say the c++ work shouldn't be done, but to point out that the shifting focus is a real problem.
Sep 23 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 24/09/14 06:59, Brad Roberts via Digitalmars-d wrote:

 I agree with Sean quite a bit here.

 Let's turn the camera around and look at it from a different angle.  I'm
 hard pressed to find a new feature from the last few years that's
 actually thoroughly complete.  And by complete I mean that druntime and
 phobos use it everywhere it should be used.

 Shared libraries?  nope.
 Any of the new attributes?  nope.
 64 bit support?  nope.
 const?
 shared?
 cleaning up object?

 .. nope.

 And that's not even getting into the big gaps that exist.
I completely agree. Let's focus on the D users we actually have, not some imaginary C++ users who will come running as soon as there is enough C++ support. -- /Jacob Carlborg
Sep 23 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/23/14, 11:22 PM, Jacob Carlborg wrote:
 On 24/09/14 06:59, Brad Roberts via Digitalmars-d wrote:

 I agree with Sean quite a bit here.

 Let's turn the camera around and look at it from a different angle.  I'm
 hard pressed to find a new feature from the last few years that's
 actually thoroughly complete.  And by complete I mean that druntime and
 phobos use it everywhere it should be used.

 Shared libraries?  nope.
 Any of the new attributes?  nope.
 64 bit support?  nope.
 const?
 shared?
 cleaning up object?

 .. nope.

 And that's not even getting into the big gaps that exist.
I completely agree. Let's focus on the D users we actually have, not some imaginary C++ users who will come running as soon as there is enough C++ support.
Those are very real. I know this for a fact. -- Andrei
Sep 23 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Tue, 23 Sep 2014 23:24:21 -0700
Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com>
wrote:

 I completely agree. Let's focus on the D users we actually have, not
 some imaginary C++ users who will come running as soon as there is
 enough C++ support.
Those are very real. I know this for a fact. -- Andrei
all three of them.
Sep 24 2014
next sibling parent reply "Meta" <jared771 gmail.com> writes:
On Wednesday, 24 September 2014 at 07:41:48 UTC, ketmar via 
Digitalmars-d wrote:
 all three of them.
You forget that D is now actively used at Facebook, and better C++ interop would allow them to slowly phase out more and more C++ code. The more Facebook uses D, the more support it will provide. Not to mention the social capital that D would gain from the fact that it's heavily used and supported at Facebook. It's not even really a question of whether C++ support should be worked on or not, in my opinion.
Sep 24 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 24 Sep 2014 07:59:40 +0000
Meta via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 You forget that D is now actively used at Facebook
no, i'm not. i just can't see why facebook priorities should be D priorities. facebook needs c++ interop? ok, they can hire a lot of programmers to write this. *not* Walter and Andrei. and not drag that into the primary targets for D.
 not even really a question of whether C++ support should be
 worked on or not, in my opinion.
i'm not against c++ interop, i'm just against making it a high-priority task. it's not something that we *must* have, it's just a "good-to-have feature", nothing more.
Sep 24 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/24/14, 1:08 AM, ketmar via Digitalmars-d wrote:
 On Wed, 24 Sep 2014 07:59:40 +0000
 Meta via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 You forget that D is now actively used at Facebook
no, i'm not. i just can't see why facebook priorities should be D priorities. facebook needs c++ interop? ok, they can hire a lot of programmers to write this. *not* Walter and Andrei. and not drag that into the primary targets for D.
Your guidance of my career is uncalled for.
 not even really a question of whether C++ support should be
 worked on or not, in my opinion.
i'm not against c++ interop, i'm just against making it a high-priority task. it's not something that we *must* have, it's just a "good-to-have feature", nothing more.
This is Walter's and my vision. He is working on C++ interop, and I am working on C++ interop. To the extent possible we hope the larger community will share this vision and help us to implement it. Andrei
Sep 24 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 24 Sep 2014 07:44:38 -0700
Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com>
wrote:

 Your guidance of my career is uncalled for.
excuse me, i'm not trying to tell you what to do. nor was i trying to say that you are forced to work on features you don't want. same for Walter. it was a bad example and i missed the point i wanted to highlight.
 This is Walter's and my vision. He is working on C++ interop, and I
 am working on C++ interop. To the extent possible we hope the larger
 community will share this vision and help us to implement it.
from my side i'm trying to at least not hijack technical threads (sorry if i sent some of my rants to some of them, it was by accident). this is the most help i can offer in this area.
Sep 24 2014
prev sibling parent reply "Atila Neves" <atila.neves gmail.com> writes:
On Wednesday, 24 September 2014 at 07:41:48 UTC, ketmar via 
Digitalmars-d wrote:
 On Tue, 23 Sep 2014 23:24:21 -0700
 Andrei Alexandrescu via Digitalmars-d 
 <digitalmars-d puremagic.com>
 wrote:

 I completely agree. Let's focus on the D users we actually
 have, not
 some imaginary C++ users who will come running as soon as
 there is
 enough C++ support.
Those are very real. I know this for a fact. -- Andrei
all three of them.
I don't understand how it isn't obvious how important C++ interop would be in getting new users to switch. I especially don't understand it since it's been mentioned several times so far how important C interop was for C++'s adoption. I write unit tests at work for C code in C++. Because I can. To this day, C++ is getting used because it's C-compatible. Atila
Sep 24 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 24 Sep 2014 09:33:30 +0000
Atila Neves via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 I don't understand how it isn't obvious how important C++ interop
 would be in getting new users to switch.
'cause it's not.
 I especially don't
 understand it since it's been mentioned several times so far how
 important C interop was for C++'s adoption.
you are wrong. C++ is *almost* compatible with C *on* *the* *source* *code* *level*. D is not and will not be. these are two completely unrelated things. it doesn't matter how hard D tries to interop with C++, D will never be able to compile C++ code. yet C++ *is* able to compile C code. do you see the difference here?
Sep 24 2014
prev sibling next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Sep 24, 2014 at 04:46:00AM +0000, Sean Kelly via Digitalmars-d wrote:
 On Wednesday, 24 September 2014 at 03:44:52 UTC, Walter Bright wrote:
On 9/23/2014 7:29 AM, Sean Kelly wrote:
[...]
 There's clearly been a lot of attention paid to bug fixes.  But for
 the rest... I feel like the overall direction is towards whatever is
 currently thought to gain the most new users.  The thing is that D has
 already *got* me.  What I want is for the language I've already got to
 be polished until I can use it in a freaking space telescope.  I'm
 sick of "yes but" languages.  Every time I hit an obstacle in D I
 think "oh great, D is way behind other languages in all these ways and
 D itself is broken to boot.  Why am I using this again?"  And it could
 be a tiny thing.  It doesn't matter.  Every little issue like that is
 magnified a thousandfold because D is already such a hard sell.
Yeah, I wish that at least *some* attention would be paid to refining existing features so that problematic corner cases could be ironed out. Like identifier lookup rules for local imports. And what to do about dtors. And so many little niggling details that seem minor, but added together, can form a pretty big mountain of frustration sometimes. [...]
And inconsistency in how things work in the language.  Oh, and
function attributes.  I'm sure someone likes them, but I'm drowning
in pure system const immutable  nogc  illegitemate  wtf hell.
Fortunately, those attributes are inferred for template functions. I did try to extend that to auto functions, but got a lot of resistance.
I support attribute inference for auto functions. The more inference, the better, I say. That's the only way attributes will become practically useful.
 Yes, the inference is very nice.  And I do see the use for each
 attribute.  It's just... when I look at a function and there's a line
 of attributes before the function declaration that have nothing to do
 with what the function actually does but rather with how it's
 implemented, it's just syntactic noise.  It's information for the
 compiler, not me as a user.  I hope we'll eventually get to the point
 where everything is inferred and the attributes disappear entirely.
I haven't actually tried this yet, but I've been toying with the idea of writing *all* functions as template functions (except where impossible, like virtual functions), even if they take zero compile-time arguments. This way, I reap the benefits of attribute inference, *and* I also get automatic culling of unused functions from the executable ('cos they wouldn't be instantiated in the first place). T -- Just because you survived after you did it, doesn't mean it wasn't stupid!
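A minimal sketch of that trick (function names hypothetical): giving a function an empty template parameter list turns it into a template, so the compiler infers its attributes from the body and only emits it when something instantiates it.

```d
// Ordinary function: every attribute must be written out by hand.
int doubledManual(int x) pure nothrow @safe @nogc { return 2 * x; }

// Zero-parameter template: called exactly the same way, but pure,
// nothrow, @safe and @nogc are inferred from the body, and the code
// is only emitted if something actually instantiates it.
int doubled()(int x) { return 2 * x; }

void main() @safe
{
    assert(doubled(21) == 42);

    // The inferred attributes are real: this strict lambda only
    // compiles because `doubled` was inferred pure/nothrow/@safe/@nogc.
    static assert(__traits(compiles,
        () pure nothrow @safe @nogc => doubled(1)));
}
```

The call site is unchanged, which is what makes the idea attractive as a blanket convention.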
Sep 23 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/23/2014 10:10 PM, H. S. Teoh via Digitalmars-d wrote:
 Yeah, I wish that at least *some* attention would be paid to refining
 existing features so that problematic corner cases could be ironed out.
It's kinda maddening to hear statements like that. Just in 2.066:

103 compiler regressions fixed
235 compiler bugs fixed
39 language enhancements
12 phobos regressions fixed
110 phobos bugs fixed
41 phobos enhancements
9 druntime regressions fixed
17 druntime bugs fixed
9 druntime enhancements

https://dlang.org/changelog.html#list2066
 Like identifier lookup rules for local imports.
Suddenly this issue has grown into a mountain overnight. Is it really the most critical, important problem, overshadowing everything else?
 And what to do about
 dtors. And so many little niggling details that seem minor, but added
 together, can form a pretty big mountain of frustration sometimes.
So help out!
 I haven't actually tried this yet, but I'm been toying with the idea of
 writing *all* functions as template functions (except where impossible,
 like virtual functions), even if they would take only zero compile-time
 arguments. This way, I reap the benefits of attribute inference, *and* I
 also get automatic culling of unused functions from the executable ('cos
 they wouldn't be instantiated in the first place).
Yup, give it a try.
Sep 23 2014
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/23/2014 10:37 PM, Walter Bright wrote:
 On 9/23/2014 10:10 PM, H. S. Teoh via Digitalmars-d wrote:
 Yeah, I wish that at least *some* attention would be paid to refining
 existing features so that problematic corner cases could be ironed out.
So help out!
I note that you've had many contributions accepted, this is great: https://github.com/D-Programming-Language/dmd/pulls?q=is%3Apr+is%3Aclosed+author%3Aquickfur https://github.com/D-Programming-Language/phobos/pulls?q=is%3Apr+author%3Aquickfur So please work on the refinements you wish for!
Sep 23 2014
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 24/09/14 07:37, Walter Bright wrote:

 So help out!
You always say we should help out instead of complaining. But where are all the users who want C++ support? Let them implement it instead, and let us focus on the actual D users we have now. -- /Jacob Carlborg
Sep 23 2014
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/23/14, 11:20 PM, Jacob Carlborg wrote:
 On 24/09/14 07:37, Walter Bright wrote:

 So help out!
You always say we should help out instead of complaining. But where are all the users who want C++ support? Let them implement it instead, and let us focus on the actual D users we have now.
This thinking is provincial and damaging. We need to focus both on retaining our current users and on getting to the next order of magnitude. And for that we need C++ compatibility and improvements to everything about the GC story. -- Andrei
Sep 23 2014
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/23/2014 11:20 PM, Jacob Carlborg wrote:
 On 24/09/14 07:37, Walter Bright wrote:
 So help out!
You always say we should help out instead of complaining.
That's right. Complaining does nothing.
 But where are all the users who want C++ support? Let them implement it
 instead, and let us focus on
 the actual D users we have now.
I was there at the C++ revolution (and it was a revolution) almost at the beginning. And in fact, a reasonable case could be made that I caused the success of C++ by providing an inexpensive C++ compiler on the most popular (by far) platform at the right moment. What sold C++ was that you could "ease" on into it because it would compile your existing C code.

Later on, other C++ compilers came out. I'd talk to my sales staff at Zortech, asking them how they sold Zortech C++. What was the line that sold the customer? They told me "Zortech C++ is the only C++ compiler that can generate 16 bit Windows code. End of story. Sold!" I.e. none of the features of ZTC++ mattered, except one killer feature that nobody else had, and that nobody else even had a story for.

Now, consider interfacing with existing C++ code. Which top 10 Tiobe languages can?

C: no
Java: no
Objective C: sort of http://philjordan.eu/article/strategies-for-using-c++-in-objective-c-projects
C++: yes
Basic: no
PHP: no
Python: no
Javascript: no
Transact-SQL: no

and:

Go: no
Rust: no

The ones marked "no" have no plan, no story, no nothing. This means if we have some level of C++ interop, we have a killer feature. If users have a "must have" C++ library, they can hook up to it. Can they use other languages? Nope. They have to wrap it with a C interface, or give up. Wrapping with a C interface tends to fall apart when any C++ templates are involved. C++ libraries are currently a language "lock in" to C++. There are no options.

I've often heard from people that they'd like to try D, but it's pointless because they are not going to rewrite their C++ libraries. Case in point: last fall Adam Wilson started on Aurora, a C++ Cinder clone. I had thought he could simply wrap to the C++ library. No way. It was just unworkable, and Cinder was too big to rewrite. Aurora was abandoned. If we could have interfaced to it, things would have been much different. This story is not unusual.
That said, C++ interop is never going to be easy for users. We're just trying to make it possible for a savvy and determined user. And he'll have to be flexible on both the C++ side and the D side.
Sep 23 2014
next sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Tue, 23 Sep 2014 23:54:32 -0700
Walter Bright via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 This means if we have some level of C++ interop, we have a killer
 feature.
and if we have OCR in phobos we have a killer feature. hey, i know two users that will switch to D if D gets good OCR in the standard library! yet i know no one who will switch to D if D gets good c++ interop. note that i have many friends who are programmers and almost none of them are interested in OCR. what i want to say is that the c++ interop topic is overhyped. but let's just check it: start a little survey with one question: "how many people do you know that will surely switch to D if it gets great c++ interop?" i for myself already know the results of such a survey.
Sep 24 2014
prev sibling next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 24 September 2014 at 06:54:38 UTC, Walter Bright 
wrote:
 On 9/23/2014 11:20 PM, Jacob Carlborg wrote:
 On 24/09/14 07:37, Walter Bright wrote:
 So help out!
You always say we should help out instead of complaining.
That's right. Complaining does nothing.
 But where are all the users that want C++ support. Let them 
 implement it instead and lets us focus on
 actual D users we have now.
 I was there at the C++ revolution (and it was a revolution) almost at the beginning. And in fact, a reasonable case could be made that I caused the success of C++ by providing an inexpensive C++ compiler on the most popular (by far) platform at the right moment. What sold C++ was that you could "ease" on into it because it would compile your existing C code.

 Later on, other C++ compilers came out. I'd talk to my sales staff at Zortech, asking them how they sold Zortech C++. What was the line that sold the customer? They told me "Zortech C++ is the only C++ compiler that can generate 16 bit Windows code. End of story. Sold!" I.e. none of the features of ZTC++ mattered, except one killer feature that nobody else had, and that nobody else even had a story for.

 Now, consider interfacing with existing C++ code. Which top 10 Tiobe languages can?

 C: no
 Java: no
 Objective C: sort of http://philjordan.eu/article/strategies-for-using-c++-in-objective-c-projects
 C++: yes
 Basic: no
 PHP: no
 Python: no
 Javascript: no
 Transact-SQL: no

 and:

 Go: no
 Rust: no

 The ones marked "no" have no plan, no story, no nothing. This means if we have some level of C++ interop, we have a killer feature. If users have a "must have" C++ library, they can hook up to it. Can they use other languages? Nope. They have to wrap it with a C interface, or give up. Wrapping with a C interface tends to fall apart when any C++ templates are involved. C++ libraries are currently a language "lock in" to C++. There are no options.

 I've often heard from people that they'd like to try D, but it's pointless because they are not going to rewrite their C++ libraries. Case in point: last fall Adam Wilson started on Aurora, a C++ Cinder clone. I had thought he could simply wrap to the C++ library. No way. It was just unworkable, and Cinder was too big to rewrite. Aurora was abandoned. If we could have interfaced to it, things would have been much different. This story is not unusual.
 That said, C++ interop is never going to be easy for users. We're just trying to make it possible for a savvy and determined user. And he'll have to be flexible on both the C++ side and the D side.
In any case I agree with you. C++ became successful thanks to your work and to other vendors that bundled it with their C compilers. If C++ had been a separate product that companies had to buy on top of their C compilers, it would have failed. -- Paulo
Sep 24 2014
prev sibling parent "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Wednesday, 24 September 2014 at 06:54:38 UTC, Walter Bright 
wrote:
 If users have a "must have" C++ library, they can hook up to 
 it. Can they use other languages? Nope. They have to wrap it 
 with a C interface, or give up. Wrapping with a C interface 
 tends to fall apart when any C++ templates are involved.
That's exactly the case for the company I work for. --- /Paolo
Sep 24 2014
prev sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 9/24/14, 3:20 AM, Jacob Carlborg wrote:
 On 24/09/14 07:37, Walter Bright wrote:

 So help out!
You always say we should help out instead of complaining. But where are all the users who want C++ support? Let them implement it instead, and let us focus on the actual D users we have now.
Maybe Facebook needs D to interface with C++?
Sep 25 2014
parent Jacob Carlborg <doob me.com> writes:
On 2014-09-25 21:02, Ary Borenszweig wrote:

 Maybe Facebook needs D to interface with C++?
But I only see Andrei working on that. I don't know how much coding he does in practice for C++ compatibility. -- /Jacob Carlborg
Sep 25 2014
prev sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Sep 23, 2014 at 10:37:59PM -0700, Walter Bright via Digitalmars-d wrote:
 On 9/23/2014 10:10 PM, H. S. Teoh via Digitalmars-d wrote:
Yeah, I wish that at least *some* attention would be paid to refining
existing features so that problematic corner cases could be ironed
out.
It's kinda maddening to hear statements like that. Just in 2.066:

103 compiler regressions fixed
235 compiler bugs fixed
39 language enhancements
12 phobos regressions fixed
110 phobos bugs fixed
41 phobos enhancements
9 druntime regressions fixed
17 druntime bugs fixed
9 druntime enhancements

https://dlang.org/changelog.html#list2066
Like identifier lookup rules for local imports.
Suddenly this issue has grown into a mountain overnight. Is it really the most critical, important problem, overshadowing everything else?
No, I just named it as a representative case of many such wrinkles within existing language features. The fact of the matter is, wherever you turn, there's always something else that hasn't been fully ironed out yet. Features that interact with each other in unexpected ways. Corner cases that weren't considered / are hard to fix due to the nature of the features involved. Fixes that require a decision -- which are often neglected because there are too many other things being worked on. Sometimes I wish there were fewer features in D, but far more refined ones. I'd rather go without the myriad of awesome features in D if I could instead have a small set of features that have been fully worked out, such that there are no nasty corner cases, deep-seated compiler bugs, or surprising gotchas lurking around the corner as soon as you start writing non-trivial code.
And what to do about dtors. And so many little niggling details that
seem minor, but added together, can form a pretty big mountain of
frustration sometimes.
So help out!
I am, as you yourself point out later. But it's frustrating when pull requests sit in the queue for weeks (sometimes months, or, in the case of dmd pulls, *years*) without any indication of whether it's on the right track, and dismaying when your PR is just one of, oh, 100+ others that also all need attention, many of which are just languishing there for lack of attention even though there is nothing obviously blocking them, except perhaps the reviewers' / committers' time / interest.

The situation with Phobos has improved dramatically, thanks to a well-timed rant some months ago, which triggered a coordinated effort of aggressive merging, pinging, reviewing, etc. -- we've managed to cut the Phobos PR queue from around 90+ to around 29 as of last night (from 4+ pages on github to only barely 2 pages). For that, I applaud my fellow Phobos reviewers, and I hope the trend will continue until the Phobos PR queue is firmly back at 1 page (<=25 open PRs).

Unfortunately, the problem persists in druntime, dlang.org, and dmd. It feels like there's a forest fire raging and only a handful of firefighters, and now we want to add more fires (new development directions) without adding more people. What about reviewing and merging / rejecting the 100+ PRs in the dmd queue, most of which contain fixes and language improvements that people have been waiting for, for a long time, before we think about new directions? Some PRs appear to fix bugs opened *years* ago, and yet nothing is done about them. Some PRs are at an impasse due to decisions that need to be made, yet nobody is making said decisions or even discussing them, and the PRs just continue to rot there.

There are already a *ton* of new features / fixes / language improvements that are waiting to be decided upon (and, given the size of the dmd PR queue, I submit this is no exaggeration), and yet we are indifferent and instead look away to new directions.
Many of us are indeed ready to help, but it would *really* be nice if our help were received more readily, or, at the very least, *some* indication were shown that more effort will be put into reviewing said help before more energy is expended in new directions. The dramatic shortening of the Phobos PR queue over the last 2-3 months proves that this is not only possible, but also beneficial (more fixes are making it into Phobos than ever before) and improves morale (people are more likely to contribute if they don't have to wait 3 weeks before getting any feedback for their contribution) -- if we would only put our efforts into it. So here's to hoping that the other PR queues will shorten quickly in the near future. ;-)

And mind you, the lengths of the PR queues are only the tip of the iceberg of stuff we ought to finish doing before embarking on the next new direction. There are a ton of old bugs that need attention, and a ton of language features that need improvement / refinement. We could at least pay those *some* heed even if we absolutely have to start something new right now at this moment. It would be a great tragedy if D goes down in history as the project that had so many great ideas, none of them carried out to completion. T -- Debian GNU/Linux: Cray on your desktop.
Sep 24 2014
next sibling parent "bachmeier" <no spam.com> writes:
On Wednesday, 24 September 2014 at 18:46:29 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 On Tue, Sep 23, 2014 at 10:37:59PM -0700, Walter Bright via 
 Digitalmars-d wrote:
 On 9/23/2014 10:10 PM, H. S. Teoh via Digitalmars-d wrote:
Yeah, I wish that at least *some* attention would be paid to 
refining
existing features so that problematic corner cases could be 
ironed
out.
It's kinda maddening to hear statements like that. Just in 2.066:

103 compiler regressions fixed
235 compiler bugs fixed
39 language enhancements
12 phobos regressions fixed
110 phobos bugs fixed
41 phobos enhancements
9 druntime regressions fixed
17 druntime bugs fixed
9 druntime enhancements

https://dlang.org/changelog.html#list2066
Like identifier lookup rules for local imports.
Suddenly this issue goes to a mountain overnight. Is it really the most critical, important problem, overshadowing everything else?
No, I just named it as a representative case of many such wrinkles within existing language features. The fact of the matter is, wherever you turn, there's always something else that hasn't been fully ironed out yet. Features that interact with each other in unexpected ways. Corner cases that weren't considered / are hard to fix due to the nature of the features involved. Fixes that require a decision -- which are often neglected because there are too many other things being worked on.

Sometimes I wish there were less features in D, but far more refined. I'd rather go without the myriad of awesome features in D if I can only have a small set of features that have been fully worked out such that there are no nasty corner cases, deep-seated compiler bugs, or surprising gotchas that lurk around the corner as soon as you start writing non-trivial code.
And what to do about dtors. And so many little niggling 
details that
seem minor, but added together, can form a pretty big 
mountain of
frustration sometimes.
So help out!
I am, as you yourself point out later. But it's frustrating when pull requests sit in the queue for weeks (sometimes months, or, in the case of dmd pulls, *years*) without any indication of whether it's on the right track, and dismaying when your PR is just one of, oh, 100+ others that also all need attention, many of which are just languishing there for lack of attention even though there is nothing obviously blocking them, except perhaps the reviewers' / committers' time / interest.

The situation with Phobos has improved dramatically, thanks to a well-timed rant some months ago, which triggered a coordinated effort of aggressive merging, pinging, reviewing, etc. -- we've managed to cut the Phobos PR queue from around 90+ to around 29 as of last night (from 4+ pages on github to only barely 2 pages). For that, I applaud my fellow Phobos reviewers, and I hope the trend will continue until the Phobos PR queue is firmly back at 1 page (<=25 open PRs).

Unfortunately, the problem persists in druntime, dlang.org, and dmd. It feels like there's a forest fire raging and only a handful of firefighters, and now we want to add more fires (new development directions) without adding more people. What about reviewing and merging / rejecting the 100+ PRs in the dmd queue, most of which contain fixes and language improvements that people have been waiting for, for a long time, before we think about new directions? Some PRs appear to fix bugs opened *years* ago, and yet nothing is done about them. Some PRs are at an impasse due to decisions that need to be made, yet nobody is making said decisions or even discussing them, and the PRs just continue to rot there. There are already a *ton* of new features / fixes / language improvements that are waiting to be decided upon (and, given the size of the dmd PR queue, I submit this is no exaggeration), and yet we are indifferent and instead look away to new directions.
Many of us are indeed ready to help, but it would *really* be nice if our help is received more readily, or, at the very least, *some* indication is shown that more effort will be put into reviewing said help before more energy is expended in new directions. The dramatic shortening of the Phobos PR queue over the last 2-3 months proves that this is not only possible, but also beneficial (more fixes are making it into Phobos than ever before) and improves morale (people are more likely to contribute if they don't have to wait 3 weeks before getting any feedback for their contribution) -- if we would only put our efforts into it. So here's to hoping that the other PR queues will shorten quickly in the near future. ;-)

And mind you, the lengths of the PR queues are only the tip of the iceberg of stuff we ought to finish doing before embarking on the next new direction. There are a ton of old bugs that need attention, and a ton of language features that need improvement / refinement. We could at least pay those *some* heed even if we absolutely have to start something new right now at this moment. It would be a great tragedy if D goes down in history as the project that had so many great ideas, none of them carried out to completion.

T
Being an outsider, I haven't wanted to jump into this discussion, but can't hold back after reading your post. The things you're talking about here are what was going through my mind a few days ago when Andrei said the only thing that has value is C++ compatibility. That doesn't make sense to me when you've just released a compiler with a bunch of regressions and there are all these other issues. I'm happy that someone is working on valuable new long-term projects, but I don't see how anyone could describe the things you've listed as being unworthy of attention.
Sep 24 2014
prev sibling next sibling parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"H. S. Teoh via Digitalmars-d"  wrote in message 
news:mailman.1573.1411584389.5783.digitalmars-d puremagic.com...

 I am, as you yourself point out later. But it's frustrating when pull
 requests sit in the queue for weeks (sometimes months, or, in the case
 of dmd pulls, *years*) without any indication of whether it's on the
 right track, and dismaying when your PR is just one of, oh, 100+ others
 that also all need attention, many of which are just languishing there
 for lack of attention even though there is nothing obviously blocking
 them, except perhaps the reviewers' / committers' time / interest.
This is a misleading description of the situation with dmd pull requests. There are lots of open pull requests, but the number has stayed fairly stable at ~100 for a long time. This means they are getting merged or closed at the same rate they are created. Some of them have certainly been forgotten by reviewers (sorry) but most of them need work, or implement controversial or questionable features. The situation is harder (IMO) than with phobos because changes usually touch multiple systems in the compiler, even if the diff only touches a single file.

Things could always be better (can we clone Walter and Kenji yet?) but the thing holding back issue XYZ is almost always that the people who care about XYZ haven't fixed it yet, and the people who are fixing things don't care about XYZ. This includes not only making patches, but convincing others it's something worth caring about. Everybody has a different set of priorities they want everybody else to share.
Sep 24 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Sep 25, 2014 at 08:20:32AM +1000, Daniel Murphy via Digitalmars-d wrote:
 "H. S. Teoh via Digitalmars-d"  wrote in message
 news:mailman.1573.1411584389.5783.digitalmars-d puremagic.com...
 
I am, as you yourself point out later. But it's frustrating when pull
requests sit in the queue for weeks (sometimes months, or, in the
case of dmd pulls, *years*) without any indication of whether it's on
the right track, and dismaying when your PR is just one of, oh, 100+
others that also all need attention, many of which are just
languishing there for lack of attention even though there is nothing
obviously blocking them, except perhaps the reviewers' / committers'
time / interest.
This is a misleading description of the situation with dmd pull requests. There are lots of open pull requests, but the number has stayed fairly stable at ~100 for a long time. This means they are getting merged or closed at the same rate they are created. Some of them have certainly been forgotten by reviewers (sorry) but most of them need work, or implement controversial or questionable features.
IMNSHO, any PR that hasn't been touched in more than, say, 1-2 months, should just be outright closed. If/when the people involved have time to work on it again, it can be reopened. If a feature is questionable or controversial, shouldn't it be discussed on the forum and then a decision made? Ignoring controversial PRs isn't getting us anywhere. At the very least, if we can't decide, the PR should be closed (the submitter can just reopen it later once he manages to convince people that it's worthwhile -- that's what git branches are for).

[...]
 Things could always be better (can we clone Walter and Kenji yet?)
Yeah, if we could clone Kenji, it would speed things up dramatically. :-)
 but the thing holding back issue XYZ is almost always the people who
 care about XYZ haven't fixed it yet, and the people who are fixing
 things don't care about XYZ.  This includes not only making patches,
 but convincing others it's something worth caring about.  Everybody
 has a different set of priorities they want everybody else to share.
I wish people would just make a decision about PRs, even if it's just to close one with "sorry, this is not worth the time", rather than silently ignore it and hope it will somehow go away on its own.

T

-- 
Always remember that you are unique. Just like everybody else. -- despair.com
Sep 24 2014
parent "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"H. S. Teoh via Digitalmars-d"  wrote in message 
news:mailman.1605.1411597973.5783.digitalmars-d puremagic.com...

 IMNSHO, any PR that haven't been touched in more than, say, 1-2 months,
 should just be outright closed. If/when the people involved have time to
 work on it again, it can be reopened. If a feature is questionable or
 controversial, shouldn't it be discussed on the forum and then a
 decision made? Ignoring controversial PRs isn't getting us anywhere. At
 the very least, if we can't decide, the PR should be closed (the
 submitter can just reopen it later once he manages to convince people
 that it's worthwhile -- that's what git branches are for).
If they're abandoned. Closing pull requests because Walter hasn't made a decision yet would be a terrible policy. Many forum discussions produce only "i want this" responses and provide no useful review on the design or implementation.
 I wish people would just make a decision about PRs, even if it's just to
 close it with "sorry this is not worth the time", than to silently
 ignore it and hope it would somehow go away on its own.
I'd want that too, if those were the only two options.
Sep 24 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/24/2014 11:44 AM, H. S. Teoh via Digitalmars-d wrote:
 No, I just named it as a representative case of many such wrinkles
 within existing language features. The fact of the matter is, wherever
 you turn, there's always something else that hasn't been fully ironed
 out yet. Features that interact with each other in unexpected ways.
 Corner cases that weren't considered / are hard to fix due to the nature
 of the features involved. Fixes that require a decision -- which are
 often neglected because there are too many other things being worked on.
I don't know of any language that is "fully ironed out". There's no such thing. I can give you a list of such things with C.

Furthermore, if your car is missing wheels, spending all your time getting the paint job perfect isn't getting the car into usable condition. Corner cases are, by definition, in the corners, not the center of the road. Corner cases need to be addressed, but they are not in the way of getting s**t done, and getting s**t done is the primary purpose of a programming language. I know I tend to focus on issues that block people from getting useful work done. Those aren't the corner cases.

For example, the local import thing that suddenly became critical - it is not critical. (We still need to address it.) If it is causing you problems, you can:

1. not use local imports, put them in the global scope

2. stick with short local names - if a module is exporting a symbol named 'i', whoever wrote that module needs to receive a strongly worded letter (!).

It's still good that the import issue is brought up, and we need to make it work better. But it is not critical, and does not prevent people from getting work done. The C++ interop, on the other hand, DOES block people from getting work done.
 Sometimes I wish there were less features in D, but far more refined.
 I'd rather go without the myriad of awesome features in D if I can only
 have a small set of features that have been fully worked out such that
 there are no nasty corner cases, deep-seated compiler bugs, or
 surprising gotchas that lurk around the corner as soon as you start
 writing non-trivial code.
A language that doesn't do much of anything is fairly easy to get right - and the very first thing users will do is propose extensions.

May I say that "awesome features" are proposed here EVERY SINGLE DAY, labeled as absolutely critical, usually by the people who argue the next day that D overreaches, or sometimes even in the same post.
 I am, as you yourself point out later. But it's frustrating when pull
 requests sit in the queue for weeks (sometimes months, or, in the case
 of dmd pulls, *years*) without any indication of whether it's on the
 right track, and dismaying when your PR is just one of, oh, 100+ others
 that also all need attention, many of which are just languishing there
 for lack of attention even though there is nothing obviously blocking
 them, except perhaps the reviewers' / committers' time / interest.
For some perspective, there are currently 98 open PRs for dmd, and more importantly, 3,925 closed ones. There are 39 resolved ones for every unresolved one.

Furthermore, many of my own PRs have sat there for years. std.halffloat, anyone? I find it frustrating, too.
 We could at least pay those *some* heed
We are not sitting around doing nothing.
Sep 24 2014
prev sibling next sibling parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Tue, 23 Sep 2014 21:59:53 -0700
Brad Roberts via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 I understand quite thoroughly why c++ support is a big win
i believe it's not. so-called "enterprise" will not choose D for many reasons, and "c++ interop" is on the bottom of the list. seasoned c++ developer will not migrate to D for many reasons (or he already did that, but then he is not c++ developer anymore), and "c++ interop" is not on the top of the list, not even near the top. all that gory efforts aimed to "c++ interop" will bring three and a half more users. there will be NO massive migration due to "better c++ interop". yet this feature is on the top of the list now. i'm sad. seems that i (we?) have no choice except to wait until people will get enough of c++ games and will became focused on D again. porting and merging CDGC is much better target which help people already using D, but... but imaginary "future adopters" seems to be the highest priority. too bad that they will never arrive.
Sep 23 2014
next sibling parent reply "Cliff" <cliff.s.hudson gmail.com> writes:
On Wednesday, 24 September 2014 at 05:44:15 UTC, ketmar via 
Digitalmars-d wrote:
 On Tue, 23 Sep 2014 21:59:53 -0700
 Brad Roberts via Digitalmars-d <digitalmars-d puremagic.com> 
 wrote:

 I understand quite thoroughly why c++ support is a big win
i believe it's not. so-called "enterprise" will not choose D for many reasons, and "c++ interop" is on the bottom of the list. seasoned c++ developer will not migrate to D for many reasons (or he already did that, but then he is not c++ developer anymore), and "c++ interop" is not on the top of the list, not even near the top. all that gory efforts aimed to "c++ interop" will bring three and a half more users. there will be NO massive migration due to "better c++ interop". yet this feature is on the top of the list now. i'm sad. seems that i (we?) have no choice except to wait until people will get enough of c++ games and will became focused on D again. porting and merging CDGC is much better target which help people already using D, but... but imaginary "future adopters" seems to be the highest priority. too bad that they will never arrive.
Why does anyone have to *wait* for anything? I'm not seeing the blocking issues regarding attempts to fix the language. People are making PRs, people are discussing and testing ideas, and there appear to be enough people to tackle several problems at once (typedefs, C++ interop, GC/RC issues, weirdness with ref and auto, import symbol shadowing, etc.)

Maybe things aren't moving as swiftly as we would like in the areas which are most impactful *to us*, but that is the nature of free software. Has it ever been any other way than that the things which get the most attention are the things which the individual contributors are the most passionate about (whether their passion is justified or not)?
Sep 23 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 24 Sep 2014 06:07:54 +0000
Cliff via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Why does anyone have to *wait* for anything?
'cause compiler and libs are complex beasts. there are people that have the necessary knowledge and they can write things faster (and better). i'm sure that if Walter or Andrei made official claim "we want CDGC in DMD as official GC", that people will start some serious hacking. there is alot more motivation to hack on something if people know that their work is much wanted in mainline DMD. CDGC is not worse that current GC for windows and MUCH better for *nix. it's a clear and instant win. c++ interop is... well, questionable. note that "GC is top priority" is not the same as "poring CDGC is top priority". the first is "ok, let's think about it" and the second is "ok, we know what to do".
Sep 24 2014
prev sibling next sibling parent reply "Atila Neves" <atila.neves gmail.com> writes:
 seasoned c++ developer will not migrate to D for many reasons 
 (or he
 already did that, but then he is not c++ developer anymore), 
 and "c++
 interop" is not on the top of the list, not even near the top.
This isn't true. I'm a C++ developer who migrated to D. I'm still (also) a C++ developer. And a D developer. And a Python developer. And... If I had C++ interop _today_ I'd convert some of our unit testing utils for Google Test that we use at work to D. Today. Atila
Sep 24 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 24 Sep 2014 07:56:58 +0000
Atila Neves via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 This isn't true. I'm a C++ developer who migrated to D. I'm still
 (also) a C++ developer. And a D developer. And a Python
 developer. And...
so you aren't migrated. using D for some throwaway utils and so on is not "migrating". "migrating" is "most of codebase in D".
Sep 24 2014
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 24 September 2014 at 08:04:18 UTC, ketmar via 
Digitalmars-d wrote:
 On Wed, 24 Sep 2014 07:56:58 +0000
 Atila Neves via Digitalmars-d <digitalmars-d puremagic.com> 
 wrote:

 This isn't true. I'm a C++ developer who migrated to D. I'm 
 still (also) a C++ developer. And a D developer. And a Python 
 developer. And...
so you aren't migrated. using D for some throwaway utils and so on is not "migrating". "migrating" is "most of codebase in D".
Most of us cannot afford to be a "Technology X" developer. Every project, every client is a complete new world. -- Paulo
Sep 24 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 24 Sep 2014 09:15:27 +0000
Paulo  Pinto via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Most of us cannot afford to be a "Technology X" developer.
 Every project, every client is a complete new world.
yeah. and so there is *no* *reason* to stress c++ interop, heh. 'cause "client dictates language" anyway. i like it.
Sep 24 2014
parent "Chris" <wendlec tcd.ie> writes:
On Wednesday, 24 September 2014 at 09:57:06 UTC, ketmar via 
Digitalmars-d wrote:
 On Wed, 24 Sep 2014 09:15:27 +0000
 Paulo  Pinto via Digitalmars-d <digitalmars-d puremagic.com> 
 wrote:

 Most of us cannot afford to be a "Technology X" developer.
 Every project, every client is a complete new world.
yeah. and so there is *no* *reason* to stress c++ interop, heh. 'cause "client dictates language" anyway. i like it.
I do understand your concerns and I once mentioned in a thread that companies who use D should not dictate the way D evolves; instead it should remain community driven. Say for example D were used on web servers and a huge amount of effort was directed towards turning D into a web server language, while other important features/improvements were neglected; that'd be bad.

However, in the case of C++ I must say that it is important. One of the reasons I opted for D was (and still is) its seamless C integration. It allowed me to use so much existing code in C, libraries I would and could never have rewritten myself in D. There are loads of C(++) libraries out there you might want to or have to use for a particular project. When I started using D the Unicode module lacked certain features I needed. I just used an existing C library and could go on with the real program in D. Had this not been possible, the project in D would have died right then and there. Now D itself has the features I needed back then, but the C library was essential to bridge the gap.

I think a lot of C++ programmers would do the same: start a new project in D resting assured they can still use their carefully built code bases in C++. So I think interop with C++ is important. And don't forget that the reality is that most people interested in (yet reluctant about) D are C++ programmers, at least that's the impression I get here.
Sep 24 2014
prev sibling next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 24 September 2014 at 05:44:15 UTC, ketmar via
Digitalmars-d wrote:
 On Tue, 23 Sep 2014 21:59:53 -0700
 Brad Roberts via Digitalmars-d <digitalmars-d puremagic.com> 
 wrote:

 I understand quite thoroughly why c++ support is a big win
i believe it's not. so-called "enterprise" will not choose D for many reasons, and "c++ interop" is on the bottom of the list. seasoned c++ developer will not migrate to D for many reasons (or he already did that, but then he is not c++ developer anymore), and "c++ interop" is not on the top of the list, not even near the top. all that gory efforts aimed to "c++ interop" will bring three and a half more users. there will be NO massive migration due to "better c++ interop". yet this feature is on the top of the list now. i'm sad. seems that i (we?) have no choice except to wait until people will get enough of c++ games and will became focused on D again. porting and merging CDGC is much better target which help people already using D, but... but imaginary "future adopters" seems to be the highest priority. too bad that they will never arrive.
With the current move toward more support for native code as part of the standard toolchains for Java (SubstrateVM, Sumatra, Valhalla, Panama) and the .NET compilers (MDIL, .NET Native, RyuJIT all using the Visual C++ backend), the beloved enterprise has lots of reasons to stay with the current tooling when seeking performance. -- Paulo
Sep 24 2014
prev sibling parent reply "ponce" <contact gamesfrommars.fr> writes:
On Wednesday, 24 September 2014 at 05:44:15 UTC, ketmar via 
Digitalmars-d wrote:
 On Tue, 23 Sep 2014 21:59:53 -0700
 Brad Roberts via Digitalmars-d <digitalmars-d puremagic.com> 
 wrote:

 I understand quite thoroughly why c++ support is a big win
i believe it's not.
Every C++ shop I've been in has one or several C++ codebases that just can't be rewritten while the customers patiently wait.

To thrive in the enterprise D must wait for a greenfield project with zero pre-existing source files (ie. rare), be a small project, or be able to interact with the legacy codebase.

I think Andrei accurately identified C++ interop as the top goal.
Sep 24 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 24 Sep 2014 09:28:25 +0000
ponce via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 To thrive in the enterprise D must wait for a greenfield project
 with zero pre-existing source files (ie. rare), be a small
 project, or be able to interact with the legacy codebase.

 I think Andrei accurately identified C++ interop as the top-goal.
no sane management (and insane too, even more) will resist to adding new language to codebase without really strong arguments. especially not hyped language. managers knows about java, they heard about c++ and they don't care what that "D" is. no c++ interop can change this. so: c++ interop will help D adoption in enterprise == false. this thing alone moves c++ interop to the bottom of the list.
Sep 24 2014
parent reply "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Wednesday, 24 September 2014 at 10:02:20 UTC, ketmar via 
Digitalmars-d wrote:
 On Wed, 24 Sep 2014 09:28:25 +0000
 no sane management (and insane too, even more) will resist to 
 adding
 new language to codebase without really strong arguments.
This is starting to be a little offensive... --- /Paolo
Sep 24 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 24 Sep 2014 13:16:23 +0000
Paolo Invernizzi via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 This is starting to be a little offensive...
sorry, i didn't mean to. excuse me if i'm getting rude and offensive.
Sep 24 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/24/14, 2:14 PM, ketmar via Digitalmars-d wrote:
 On Wed, 24 Sep 2014 13:16:23 +0000
 Paolo Invernizzi via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 This is starting to be a little offensive...
sorry, i didn't mean to. excuse me if i'm getting rude and offensive.
Much appreciated. -- Andrei
Sep 24 2014
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/23/14, 9:46 PM, Sean Kelly wrote:
 So in that respect I understand the push for C++ support because that's
 the speed bump that Andrei has hit.  But here's the thing... by pursuing
 this we're effectively focusing all of our efforts *on another
 language*.  And we're doing so when D itself still needs a lot of work.
 Maybe not in any truly immense ways, but as I said before, those tiny
 things can seem huge when you're already struggling to justify just
 using the language at all. Maybe all this will pull together into a
 cohesive whole, but so far it feels kind of disconnected.  So that's
 part of what I meant by "tightening up."
You need a spoon of rennet to turn a bucket of milk into a bucket of yogurt. No matter how much milk you add, that won't help. You want to add milk. I know we must add rennet. -- Andrei
Sep 23 2014
prev sibling parent "Thomas Mader" <thomas.mader gmail.com> writes:
On Wednesday, 24 September 2014 at 04:46:01 UTC, Sean Kelly wrote:
 Yes, the inference is very nice.  And I do see the use for each 
 attribute.  It's just... when I look at a function and there's 
 a line of attributes before the function declaration that have 
 nothing to do with what the function actually does but rather 
 with how it's implemented, it's just syntactic noise.  It's 
 information for the compiler, not me as a user.  I hope we'll 
 eventually get to the point where everything is inferred and 
 the attributes disappear entirely.
What is the problem with complete automatic inference? Wouldn't it be possible to deduce the flags in the bottom-up direction of a function call hierarchy? I guess it is possible for the compiler to see the right choice of flags for a function that doesn't call other functions. E.g. make it safe if possible, nogc if possible, and so on. Then it should process function after function until all functions are done.

Thomas
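For what it's worth, this bottom-up inference is roughly what the compiler already does for templated functions, whose bodies are always available to it. A minimal sketch of that behavior (the function and names below are made up for illustration):

```d
// No attributes are written on twice; because it is a template,
// the compiler infers @safe, pure, nothrow and @nogc from its body.
auto twice(T)(T x) { return x + x; }

// This fully-attributed function compiles only because the
// instantiation twice!int was inferred to satisfy all four attributes.
@safe pure nothrow @nogc int four() { return twice(2); }

void main()
{
    assert(four() == 4);
}
```

The catch, and the reason inference is limited to templates and auto functions, is that a regular function may be declared in a .di file with no body available, so the compiler has nothing to infer from.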
Sep 23 2014
prev sibling parent reply "eles" <eles215 gzk.dot> writes:
On Tuesday, 23 September 2014 at 14:29:06 UTC, Sean Kelly wrote:

 lack of attention paid to tightening up what we've already got 
 and deprecating old stuff that no one wants any more.  And 
 inconsistency in how things work in the language.
The feeling that I have is that if D2 does not get a serious cleanup at this stage, then D3 must follow quickly (and such move will be unstoppable), otherwise people will fall back to D1 or C++next.
Sep 25 2014
next sibling parent reply "eles" <eles215 gzk.dot> writes:
On Thursday, 25 September 2014 at 21:03:24 UTC, eles wrote:
 On Tuesday, 23 September 2014 at 14:29:06 UTC, Sean Kelly wrote:
BTW, am I the only one whose eyes/ears are suffering when reading this: std.algorithm "The stripLeft function will strip the front of the range, the stripRight function will strip the back of the range, while the strip function will strip both the front and back of the range." Why not, for God's sake, stripFront and stripBack? In std.string too. What about the R2L languages?
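For reference, the whitespace-stripping string variants in std.string (std.algorithm provides the analogous generic range versions the quoted documentation describes) behave like this:

```d
import std.string : strip, stripLeft, stripRight;

void main()
{
    immutable s = "  hello  ";
    assert(s.stripLeft  == "hello  "); // strips the front (left) only
    assert(s.stripRight == "  hello"); // strips the back (right) only
    assert(s.strip      == "hello");   // strips both ends
}
```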
Sep 25 2014
next sibling parent "eles" <eles215 gzk.dot> writes:
On Thursday, 25 September 2014 at 21:10:51 UTC, eles wrote:
 On Thursday, 25 September 2014 at 21:03:24 UTC, eles wrote:
 On Tuesday, 23 September 2014 at 14:29:06 UTC, Sean Kelly 
 wrote:
BTW, am I the only one whose eyes/ears are suffering when reading this: std.algorithm "The stripLeft function will strip the front of the range, the stripRight function will strip the back of the range, while the strip function will strip both the front and back of the range." Why not, for God's sake, stripFront and stripBack? In std.string too. What about the R2L languages?
OTOH, there is bringToFront(). Really...
Sep 25 2014
prev sibling next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
eles:

 "The stripLeft function will strip the front of the range, the 
 stripRight function will strip the back of the range, while the 
 strip function will strip both the front and back of the range. 
 "

 Why not, for God's sake, stripFront and stripBack?
Perhaps those names come from extending the names of "lstrip" and "rstrip" of Python's string functions.

Bye,
bearophile
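For comparison, Python's string-stripping methods (its counterpart to a would-be stripRight is rstrip) default to whitespace as well:

```python
s = "  hello  "
assert s.lstrip() == "hello  "  # strips the left (front) only
assert s.rstrip() == "  hello"  # strips the right (back) only
assert s.strip() == "hello"     # strips both ends
```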
Sep 25 2014
parent reply "eles" <eles215 gzk.dot> writes:
On Thursday, 25 September 2014 at 21:22:38 UTC, bearophile wrote:
 eles:

 "The stripLeft function will strip the front of the range, the 
 stripRight function will strip the back of the range, while 
 the strip function will strip both the front and back of the 
 range. "

 Why not, for God's sake, stripFront and stripBack?
Perhaps those names come from extending the names of "lstrip" and "rstrip" of Python's string functions. Bye, bearophile
I find it too inconsistent. I doubt even Python programmers migrating to D love that...

And, just: std.uni -> std.unicode

And I cannot believe that the language-defined complex types are still there (since the D1 days...). Either in the library or in the language, but, please, pick *one* kind.
Sep 25 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 25 Sep 2014 21:37:17 +0000
eles via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 I find it too inconsistent. I doubt even Python programmers
 migrating to D love that...

 And, just: std.uni -> std.unicode

 And I cannot believe that the language-defined complex types are
 still there (since the D1 days...). Either in the library or
 in the language, but, please, pick *one* kind.
it's too late to change it. at least that's what i've been told. imaginary future users will be scared.
Sep 25 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/25/14, 2:10 PM, eles wrote:
 On Thursday, 25 September 2014 at 21:03:24 UTC, eles wrote:
 On Tuesday, 23 September 2014 at 14:29:06 UTC, Sean Kelly wrote:
BTW, am I the only one whose eyes/ears are suffering when reading this: std.algorithm "The stripLeft function will strip the front of the range, the stripRight function will strip the back of the range, while the strip function will strip both the front and back of the range." Why not, for God's sake, stripFront and stripBack?
Because they are called stripLeft and stripRight. -- Andrei
Sep 25 2014
next sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Thursday, 25 September 2014 at 22:49:06 UTC, Andrei 
Alexandrescu wrote:
 On 9/25/14, 2:10 PM, eles wrote:
 Why not, for God's sake, stripFront and stripBack?
Because they are called stripLeft and stripRight. -- Andrei
Psh, they should be called stripHead and stripFoot. Or alternately, unHat and unShoe.
Sep 25 2014
next sibling parent reply "eles" <eles215 gzk.dot> writes:
On Thursday, 25 September 2014 at 22:56:56 UTC, Sean Kelly wrote:
 On Thursday, 25 September 2014 at 22:49:06 UTC, Andrei 
 Alexandrescu wrote:
 On 9/25/14, 2:10 PM, eles wrote:
 Why not, for God's sake, stripFront and stripBack?
Because they are called stripLeft and stripRight. -- Andrei
Psh, they should be called stripHead and stripFoot. Or alternately, unHat and unShoe.
stripLady and stripGentleman?...
Sep 25 2014
parent "Cliff" <cliff.s.hudson gmail.com> writes:
On Thursday, 25 September 2014 at 23:04:55 UTC, eles wrote:
 On Thursday, 25 September 2014 at 22:56:56 UTC, Sean Kelly 
 wrote:
 On Thursday, 25 September 2014 at 22:49:06 UTC, Andrei 
 Alexandrescu wrote:
 On 9/25/14, 2:10 PM, eles wrote:
 Why not, for God's sake, stripFront and stripBack?
Because they are called stripLeft and stripRight. -- Andrei
Psh, they should be called stripHead and stripFoot. Or alternately, unHat and unShoe.
stripLady and stripGentleman?...
Ah, now this thread is going to interesting places! :P
Sep 25 2014
prev sibling next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Sep 25, 2014 at 10:56:55PM +0000, Sean Kelly via Digitalmars-d wrote:
 On Thursday, 25 September 2014 at 22:49:06 UTC, Andrei Alexandrescu wrote:
On 9/25/14, 2:10 PM, eles wrote:
Why not, for God's sake, stripFront and stripBack?
Because they are called stripLeft and stripRight. -- Andrei
Psh, they should be called stripHead and stripFoot. Or alternately, unHat and unShoe.
Nah, they should be behead() and amputate(). T -- People who are more than casually interested in computers should have at least some idea of what the underlying hardware is like. Otherwise the programs they write will be pretty weird. -- D. Knuth
Sep 25 2014
parent "Mike James" <foo bar.com> writes:
"H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> wrote in 
message news:mailman.1690.1411686833.5783.digitalmars-d puremagic.com...
 On Thu, Sep 25, 2014 at 10:56:55PM +0000, Sean Kelly via Digitalmars-d 
 wrote:
..
 Nah, they should be behead() and amputate().
Then the language would have to be called jihaDi...
Sep 26 2014
prev sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 25 Sep 2014 16:11:57 -0700
"H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> wrote:

 Nah, they should be behead() and amputate().
i like it! this makes language much more expressive.
Sep 25 2014
prev sibling parent "eles" <eles215 gzk.dot> writes:
On Thursday, 25 September 2014 at 22:49:06 UTC, Andrei 
Alexandrescu wrote:
 On 9/25/14, 2:10 PM, eles wrote:
 On Thursday, 25 September 2014 at 21:03:24 UTC, eles wrote:
 On Tuesday, 23 September 2014 at 14:29:06 UTC, Sean Kelly 
 wrote:
BTW, am I the only one whose eyes/ears are suffering when reading this: std.algorithm "The stripLeft function will strip the front of the range, the stripRight function will strip the back of the range, while the strip function will strip both the front and back of the range." Why not, for God's sake, stripFront and stripBack?
Because they are called stripLeft and stripRight. -- Andrei
One day you will need to call this guy here: http://dconf.org/2014/talks/meyers.html He will say: "I told you so. At DConf."
Sep 25 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/25/14, 2:03 PM, eles wrote:
 On Tuesday, 23 September 2014 at 14:29:06 UTC, Sean Kelly wrote:

 lack of attention paid to tightening up what we've already got and
 deprecating old stuff that no one wants any more.  And inconsistency
 in how things work in the language.
The feeling that I have is that if D2 does not get a serious cleanup at this stage, then D3 must follow quickly (and such move will be unstoppable), otherwise people will fall back to D1 or C++next.
I'm not sharing that feeling at all. From that perspective all languages are in need of a "serious cleanup". -- Andrei
Sep 25 2014
next sibling parent "eles" <eles215 gzk.dot> writes:
On Thursday, 25 September 2014 at 22:48:12 UTC, Andrei 
Alexandrescu wrote:
 On 9/25/14, 2:03 PM, eles wrote:
 On Tuesday, 23 September 2014 at 14:29:06 UTC, Sean Kelly 
 wrote:

 lack of attention paid to tightening up what we've already 
 got and
 deprecating old stuff that no one wants any more.  And 
 inconsistency
 in how things work in the language.
The feeling that I have is that if D2 does not get a serious cleanup at this stage, then D3 must follow quickly (and such move will be unstoppable), otherwise people will fall back to D1 or C++next.
I'm not sharing that feeling at all. From that perspective all languages are in need of a "serious cleanup". -- Andrei
Those *all* languages, at least some of them, have good excuses (C++'s roots in C and the great advantage that it is able to compile C code too, to some extent - history is on its side) and powerful driving factors (the whole might of Oracle and Microsoft). And, most of all, they already have a fair share of the market. And they served a lot of users quite well and those users aren't ready to drop them quite easily for what happens to be the cool kid of the day in the programming language world. D has to provide quality in order to compensate for the (lack of the)se factors.
Sep 25 2014
prev sibling next sibling parent "eles" <eles215 gzk.dot> writes:
On Thursday, 25 September 2014 at 22:48:12 UTC, Andrei 
Alexandrescu wrote:
 On 9/25/14, 2:03 PM, eles wrote:
 On Tuesday, 23 September 2014 at 14:29:06 UTC, Sean Kelly 
 wrote:

 lack of attention paid to tightening up what we've already 
 got and
 deprecating old stuff that no one wants any more.  And 
 inconsistency
 in how things work in the language.
The feeling that I have is that if D2 does not get a serious cleanup at this stage, then D3 must follow quickly (and such move will be unstoppable), otherwise people will fall back to D1 or C++next.
I'm not sharing that feeling at all. From that perspective all languages are in need of a "serious cleanup". -- Andrei
BTW, I already have somebody who's sharing my feelings, not looking further.
Sep 25 2014
prev sibling next sibling parent "eles" <eles215 gzk.dot> writes:
On Thursday, 25 September 2014 at 22:48:12 UTC, Andrei 
Alexandrescu wrote:
 On 9/25/14, 2:03 PM, eles wrote:
 On Tuesday, 23 September 2014 at 14:29:06 UTC, Sean Kelly 
 wrote:

 lack of attention paid to tightening up what we've already 
 got and
 deprecating old stuff that no one wants any more.  And 
 inconsistency
 in how things work in the language.
The feeling that I have is that if D2 does not get a serious cleanup at this stage, then D3 must follow quickly (and such move will be unstoppable), otherwise people will fall back to D1 or C++next.
I'm not sharing that feeling at all. From that perspective all languages are in need of a "serious cleanup". -- Andrei
The serious question is at what cost does this un-sharing come. The cost of the always-niche (aka "nice try") language?
Sep 25 2014
prev sibling next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Sep 25, 2014 at 03:48:11PM -0700, Andrei Alexandrescu via Digitalmars-d
wrote:
 On 9/25/14, 2:03 PM, eles wrote:
On Tuesday, 23 September 2014 at 14:29:06 UTC, Sean Kelly wrote:

lack of attention paid to tightening up what we've already got and
deprecating old stuff that no one wants any more.  And inconsistency
in how things work in the language.
The feeling that I have is that if D2 does not get a serious cleanup at this stage, then D3 must follow quickly (and such move will be unstoppable), otherwise people will fall back to D1 or C++next.
I'm not sharing that feeling at all. From that perspective all languages are in need of a "serious cleanup". -- Andrei
I've been thinking that it might do us some good if we aren't as paranoid about breaking old code, as long as (1) it's to fix a language design flaw, (2) it exposes potentially (or outright) buggy user code, (3) users are warned well ahead of time, followed by a full deprecation cycle, and (4) optionally, there's a tool, either fully or partially automated, that can upgrade old codebases. I mean, enterprises use deprecation cycles with their products all the time, and we don't hear of customers quitting just because of that. Some of the more vocal customers will voice their unhappiness, but as long as you're willing to work with them and allow them sufficient time to migrate over nicely and phase out the old stuff, they're generally accepting of the process. We've already had offers from D-based organizations asking to please break their code(!) for the sake of fixing language design flaws -- this is already far more than what most enterprise customers are generally willing to put up with, IME. Yet we're doing far less than what enterprises do in order to keep their product up-to-date. We may need to use very long deprecation cycles to keep everyone happy (on the order of 2-3 years perhaps), but doing nothing will only result in absolutely zero improvement even after 10 years. T -- Кто везде - тот нигде.
Sep 25 2014
parent Shammah Chancellor <email domain.com> writes:
On 2014-09-25 23:23:06 +0000, H. S. Teoh via Digitalmars-d said:

 On Thu, Sep 25, 2014 at 03:48:11PM -0700, Andrei Alexandrescu via 
 Digitalmars-d wrote:
 On 9/25/14, 2:03 PM, eles wrote:
 On Tuesday, 23 September 2014 at 14:29:06 UTC, Sean Kelly wrote:
 
 lack of attention paid to tightening up what we've already got and
 deprecating old stuff that no one wants any more.  And inconsistency
 in how things work in the language.
The feeling that I have is that if D2 does not get a serious cleanup at this stage, then D3 must follow quickly (and such move will be unstoppable), otherwise people will fall back to D1 or C++next.
I'm not sharing that feeling at all. From that perspective all languages are in need of a "serious cleanup". -- Andrei
I mean, enterprises use deprecation cycles with their products all the time, and we don't hear of customers quitting just because of that. Some of the more vocal customers will voice their unhappiness, but as long as you're willing to work with them and allow them sufficient time to migrate over nicely and phase out the old stuff, they're generally accepting of the process.
Unless you're Oracle -- in which case you end up with a horrible amalgamation of poorly thought out features. Features which work in such narrow cases that they're mostly useless. -S
Oct 04 2014
prev sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 25 Sep 2014 15:48:11 -0700
Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com>
wrote:

 I'm not sharing that feeling at all. From that perspective all
 languages are in need of a "serious cleanup". -- Andrei
and they *are*. yet many languages can't be fixed due to huge amount of written code and user resistance. D is in winning position here: "most of D code yet to be written", yeah. and D users (even corporate users) are very welcoming of breaking changes -- if they will make D more consistent and clean. the situation is... weird now. users: "it's ok, we WANT that breaking changes!" language developers: "no, you don't. it's too late."
Sep 25 2014
prev sibling next sibling parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 21/09/2014 12:39 a.m., Tofu Ninja wrote:
 GC by default is a big sore point that everyone brings up
I like having a GC by default. But we really need to think of it as a last resort sort of thing.
 "is" expressions are pretty wonky
Ehh yeah, D3 we could I spose.
 Libraries could definitely be split up better
We can still fix that in D2. Just as a note, we do want AST macros for D3. Which will be awesome!
Sep 20 2014
parent reply "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Saturday, 20 September 2014 at 13:31:06 UTC, Rikki Cattermole 
wrote:
 Just as a note, we do want AST macros for D3. Which will be 
 awesome!
What kind of macros? Generic AST macros probably make source to source translation just as difficult as string mixins, don't they? Some simple term-rewriting is probably cleaner?
Sep 20 2014
parent Rikki Cattermole <alphaglosined gmail.com> writes:
On 21/09/2014 1:46 a.m., Ola Fosheim Grostad wrote:
 On Saturday, 20 September 2014 at 13:31:06 UTC, Rikki Cattermole wrote:
 Just as a note, we do want AST macros for D3. Which will be awesome!
What kind of macros? Generic AST macros probably make source to source translation just as difficult as string mixins, don't they? Some simple term-rewriting is probably cleaner?
Maybe idk. I haven't investigated this sort of technology. I only know what's being planned.
Sep 20 2014
prev sibling next sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote:
 What do you think are the worst parts of D?
Oh another bad part of D is the attribute names with some being positive(pure) and some being negative(@nogc) and some of them not having an @ on them.
Sep 20 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/20/14, 7:42 AM, Tofu Ninja wrote:
 On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote:
 What do you think are the worst parts of D?
Oh another bad part of D is the attribute names with some being positive(pure) and some being negative(@nogc) and some of them not having an @ on them.
If that's among the worst, yay :o). My pet peeves about D gravitate around the lack of a clear approach to escape analysis and the sometimes confusing interaction of qualifiers with constructors. For escape analysis, I think the limited form present inside constructors (that enforces forwarded this() calls to execute exactly once) is plenty fine and should be applied in other places as well. For qualifiers, I think we need an overhaul of the interaction of qualifiers with copy construction. Andrei
Sep 20 2014
parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Saturday, 20 September 2014 at 16:54:08 UTC, Andrei 
Alexandrescu wrote:
 On 9/20/14, 7:42 AM, Tofu Ninja wrote:
 On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja 
 wrote:
 What do you think are the worst parts of D?
Oh another bad part of D is the attribute names with some being positive(pure) and some being negative(@nogc) and some of them not having an @ on them.
If that's among the worst, yay :o). My pet peeves about D gravitate around the lack of a clear approach to escape analysis and the sometimes confusing interaction of qualifiers with constructors. For escape analysis, I think the limited form present inside constructors (that enforces forwarded this() calls to execute exactly once) is plenty fine and should be applied in other places as well.
I think correct escape analysis + @safe + scope == win. BTW, remember all those people that bitch about rvalue to "const ref". D could be a language that provides rvalue to scope ref. 100% safe and practical. How awesome would that be?
Sep 20 2014
prev sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sat, 20 Sep 2014 14:42:47 +0000
Tofu Ninja via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Oh another bad part of D is the attribute names with some being
 positive(pure) and some being negative(@nogc) and some of them
 not having an @ on them.
and no way to revert 'final' in classes, for example. or 'static'. i mean that i want something like this: class A { final: void foo () { ... } virtual: void bar () { ... } static: void heh () { ... } // and there is no way to declare non-static fields anymore... } yes, i know that i can use `final {}`, but it looks ugly. and i prefer to declare fields at the end of class declaration. annoying.
Sep 20 2014
prev sibling next sibling parent "K.K." <trampzy yahoo.com> writes:
I watched Jonathan's talk last night, it was really good. I
really like the idea that he wants to make a community designed,
platform independent, game specific language. It's too bad he
doesn't really want to give D more of chance but for what he's
looking for, it'd really need to be designed from the ground up,
as he was saying. Also being that he's slowly rejecting C++, then
I can see D not making sense since D's slogan is pretty much "A
better C++!"; which is accurate but may not be the best marketing
scheme. Also D is a general purpose language, so I guess it
really wouldn't fit his bill.
Though he mentioned Go and Rust a lot, personally I wouldn't
really back those languages either, at least not for games. Rust
maybe if it ever hits version 1.0 I'll take another look at it.
I'm definitely all for the setup he was describing though: all
you need is an IDE/text editor and the compiler. I feel like D
definitely has the potential to be able to meet that setup
someday; I'm just not sure if that setup would be a screw over or
not to people of other fields. The platform independent thing
would be a HUGE plus! Something D isn't too, too far from, but
definitely not there at least of yet.

Overall I'd say D has some significant issues:

- The documentation is awful. If there's a problem you don't know
the answer to, the only three real options are pray that the docs
are correct/up to date, go ask someone who possibly does know, or
magic.

- Bugs. D is like Costco, except for bugs, and all the bugs are
free.

- More of a suggestion than a problem: Someone needs to do an
O'reilly book series for D; but only after the first two problems
I listed are at least suppressed a bit.

- Very few native D libraries, and also for C/other libraries
almost all D bindings are maintained by usually only one person.
Usually very skilled people, but the work load for library
development and maintenance would probably thrive best with more
people doing more. (The community is probably just too small at
the moment)


....I'm sure there was more, but it's just not coming to me at
the moment.
   I'm definitely interested in Jon's theoretical language but I
don't think it's gonna take away from D for me.

Overall though, I absolutely love D! There's just A LOT of work to
be done. :)
Sep 20 2014
prev sibling next sibling parent reply "Brian Schott" <briancschott gmail.com> writes:
On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote:
 What do you think are the worst parts of D?
This compiles. https://github.com/Hackerpilot/Idiotmatic-D/blob/master/idiotmatic.d
Sep 20 2014
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sat, Sep 20, 2014 at 10:53:04PM +0000, Brian Schott via Digitalmars-d wrote:
 On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote:
What do you think are the worst parts of D?
This compiles. https://github.com/Hackerpilot/Idiotmatic-D/blob/master/idiotmatic.d
+1, lolz. Oh wait, you forgot is(...) syntax (and their various inconsistent semantics)! Plz add. ;-) T -- IBM = I'll Buy Microsoft!
Sep 20 2014
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Saturday, 20 September 2014 at 23:01:40 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 Oh wait, you forgot is(...) syntax (and their various 
 inconsistent semantics)! Plz add. ;-)
I used to hate is()... but then I spent the time to understand it to write about it in my book and now it doesn't bother me anymore. It is a bit weird looking, but there's a perfectly understandable pattern and logic to it; it makes sense once you get it. The two alias syntaxes don't bug me either, the old one was fine, the new one is cool too. I'd be annoyed if the old one disappeared cuz of compatibility tho. That said, I want the C style array declaration to die die die, that's just evil. But the old style alias is ok.
 string results[](T) = "I have no idea what I'm doing";
I agree that's just weird though, someone pointed that out to me on IRC and I was even like wtf. I had thought I've seen it all until then.
Sep 20 2014
next sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Saturday, 20 September 2014 at 23:07:16 UTC, Adam D. Ruppe 
wrote:
 string results[](T) = "I have no idea what I'm doing";
I agree that's just weird though, someone pointed that out to me on IRC and I was even like wtf. I had thought I've seen it all until then.
I literally don't even know what to expect this to do.
Sep 20 2014
parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Tofu Ninja"  wrote in message news:nwjquvwnetifhydfabhg forum.dlang.org...

 On Saturday, 20 September 2014 at 23:07:16 UTC, Adam D. Ruppe wrote:
 string results[](T) = "I have no idea what I'm doing";
I agree that's just weird though, someone pointed that out to me on IRC and I was even like wtf. I had thought I've seen it all until then.
I literally don't even know what to expect this to do.
template results(T) { string[] results = "I have no idea what I'm doing"; } Which won't instantiate because string doesn't convert to string[]. A fun mix of C-style array syntax, shortened template declaration syntax and a semantic error.
Sep 21 2014
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 09/21/2014 09:05 AM, Daniel Murphy wrote:
 "Tofu Ninja"  wrote in message news:nwjquvwnetifhydfabhg forum.dlang.org...

 On Saturday, 20 September 2014 at 23:07:16 UTC, Adam D. Ruppe wrote:
 string results[](T) = "I have no idea what I'm doing";
I agree that's just weird though, someone pointed that out to me on
IRC > and I was even like wtf. I had thought I've seen it all until then. I literally don't even know what to expect this to do.
template results(T) { string[] results = "I have no idea what I'm doing"; } Which won't instantiate because string doesn't convert to string[]. A fun mix of C-style array syntax, shortened template declaration syntax and a semantic error.
When was int x(T)=2; introduced? Also, C-style array syntax would actually be string results(T)[] = "";.
Sep 21 2014
parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Timon Gehr"  wrote in message news:lvmh5b$eo9$1 digitalmars.com... 

 When was int x(T)=2; introduced?
At the same time as enum x(T) = 2; I think.
 Also, C-style array syntax would actually be string results(T)[] = "";.
Nah, array type suffix goes before the template argument list.
Sep 22 2014
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 09/22/2014 03:26 PM, Daniel Murphy wrote:
 "Timon Gehr"  wrote in message news:lvmh5b$eo9$1 digitalmars.com...
 When was int x(T)=2; introduced?
At the same time as enum x(T) = 2; I think. ...
Is this documented?
 Also, C-style array syntax would actually be string results(T)[] = "";.
Nah, array type suffix goes before the template argument list.
It is results!T[2], not results[2]!T.
Sep 22 2014
prev sibling parent "Brian Schott" <briancschott gmail.com> writes:
On Saturday, 20 September 2014 at 23:07:16 UTC, Adam D. Ruppe 
wrote:
 I agree that's just weird though, someone pointed that out to 
 me on IRC and I was even like wtf. I had thought I've seen it 
 all until then.
The people who write books and autocompletion engines look at things the compiler accepts and say "WTF!?". I think that's the worst thing you can say about D.
Sep 20 2014
prev sibling next sibling parent "Tofu Ninja" <emmons0 purdue.edu> writes:
On Saturday, 20 September 2014 at 22:53:05 UTC, Brian Schott 
wrote:
 On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja 
 wrote:
 What do you think are the worst parts of D?
This compiles. https://github.com/Hackerpilot/Idiotmatic-D/blob/master/idiotmatic.d
I laughed extremely hard at this, wow. Yeah that definitely highlight A LOT of problems.
Sep 20 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/20/2014 3:53 PM, Brian Schott wrote:
 On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote:
 What do you think are the worst parts of D?
This compiles. https://github.com/Hackerpilot/Idiotmatic-D/blob/master/idiotmatic.d
https://github.com/D-Programming-Language/dmd/pull/4021 produces: test.d(7): Warning: instead of C-style 'T id[]' syntax, use D-style 'T[] id' syntax test.d(8): Warning: instead of C-style 'T id[]' syntax, use D-style 'T[] id' syntax test.d(11): Warning: instead of C-style 'T id[exp]' syntax, use D-style 'T[exp] id' syntax test.d(15): Warning: instead of C-style 'T id[type]' syntax, use D-style 'T[type] id' syntax test.d(57): Warning: instead of C-style 'T id[]' syntax, use D-style 'T[] id' syntax test.d(94): Warning: instead of C-style 'T id[]' syntax, use D-style 'T[] id' syntax test.d(103): Warning: instead of C-style 'T id[]' syntax, use D-style 'T[] id' syntax I.e. resolves 7 of them. :-)
Sep 23 2014
next sibling parent "Brian Schott" <briancschott gmail.com> writes:
On Wednesday, 24 September 2014 at 06:29:20 UTC, Walter Bright 
wrote:
 https://github.com/D-Programming-Language/dmd/pull/4021
I'm pleasantly surprised that the decision has been made to fix that. I thought we'd be stuck with them forever.
Sep 23 2014
prev sibling parent reply "Kagamin" <spam here.lot> writes:
On Wednesday, 24 September 2014 at 06:29:20 UTC, Walter Bright 
wrote:
 https://github.com/D-Programming-Language/dmd/pull/4021
That can be a good case to start with dfix. Its first task can be rewriting C-style declarations to D-style.
Sep 24 2014
parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Wednesday, 24 September 2014 at 11:20:24 UTC, Kagamin wrote:
 On Wednesday, 24 September 2014 at 06:29:20 UTC, Walter Bright 
 wrote:
 https://github.com/D-Programming-Language/dmd/pull/4021
That can be a good case to start with dfix. Its first task can be rewrite of C-style declarations to D-style.
+1
Sep 24 2014
prev sibling next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote:
 What do you think are the worst parts of D?
The regressions! https://issues.dlang.org/buglist.cgi?bug_severity=regression&list_id=106988&resolution=--- I filed over half of those...
Sep 20 2014
parent reply "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Sunday, 21 September 2014 at 00:07:36 UTC, Vladimir Panteleev 
wrote:
 On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja 
 wrote:
 What do you think are the worst parts of D?
The regressions! https://issues.dlang.org/buglist.cgi?bug_severity=regression&list_id=106988&resolution=--- I filed over half of those...
I guess you found them using your own code base? Maybe it would make sense to add one or more larger projects to the autotester, in addition to the unit tests. They don't necessarily need to be blocking, just a notice "hey, your PR broke this and that project" would surely be helpful to detect the breakages early on.
Sep 21 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sun, Sep 21, 2014 at 08:49:38AM +0000, via Digitalmars-d wrote:
 On Sunday, 21 September 2014 at 00:07:36 UTC, Vladimir Panteleev wrote:
On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote:
What do you think are the worst parts of D?
The regressions! https://issues.dlang.org/buglist.cgi?bug_severity=regression&list_id=106988&resolution=--- I filed over half of those...
I guess you found them using your own code base? Maybe it would make sense to add one or more larger projects to the autotester, in addition to the unit tests. They don't necessarily need to be blocking, just a notice "hey, your PR broke this and that project" would surely be helpful to detect the breakages early on.
This has been suggested before. The problem is resources. If you're willing to donate equipment for running these tests, it would be greatly appreciated, I believe. For my part, I regularly try compiling my own projects with git HEAD, and filing any regressions I find. T -- Arise, you prisoners of Windows / Arise, you slaves of Redmond, Wash, / The day and hour soon are coming / When all the IT folks say "Gosh!" / It isn't from a clever lawsuit / That Windowsland will finally fall, / But thousands writing open source code / Like mice who nibble through a wall. -- The Linux-nationale by Greg Baker
Sep 21 2014
next sibling parent reply "luminousone" <rd.hunt gmail.com> writes:
On Sunday, 21 September 2014 at 22:17:59 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 On Sun, Sep 21, 2014 at 08:49:38AM +0000, via Digitalmars-d 
 wrote:
 On Sunday, 21 September 2014 at 00:07:36 UTC, Vladimir 
 Panteleev wrote:
On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja 
wrote:
What do you think are the worst parts of D?
The regressions! https://issues.dlang.org/buglist.cgi?bug_severity=regression&list_id=106988&resolution=--- I filed over half of those...
I guess you found them using your own code base? Maybe it would make sense to add one or more larger projects to the autotester, in addition to the unit tests. They don't necessarily need to be blocking, just a notice "hey, your PR broke this and that project" would surely be helpful to detect the breakages early on.
This has been suggested before. The problem is resources. If you're willing to donate equipment for running these tests, it would be greatly appreciated, I believe. For my part, I regularly try compiling my own projects with git HEAD, and filing any regressions I find. T
What is needed?
Sep 22 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/22/2014 10:16 AM, luminousone wrote:
 What is needed?
The people who maintain large projects need to try them out with the beta compilers and file any regressions.
Sep 23 2014
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Wednesday, 24 September 2014 at 04:00:06 UTC, Walter Bright 
wrote:
 On 9/22/2014 10:16 AM, luminousone wrote:
 What is needed?
The people who maintain large projects need to try them out with the beta compilers and file any regressions.
Question: What's the point of testing betas if the release will occur even with known regressions? Blocking a pull being merged would be much more efficient than dealing with a pull merged long ago that by release time is difficult to revert.
Sep 23 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/23/2014 9:12 PM, Vladimir Panteleev wrote:
 On Wednesday, 24 September 2014 at 04:00:06 UTC, Walter Bright wrote:
 On 9/22/2014 10:16 AM, luminousone wrote:
 What is needed?
The people who maintain large projects need to try them out with the beta compilers and file any regressions.
Question: What's the point of testing betas if the release will occur even with known regressions?
Framing the question that way implies that all regressions are equally deleterious. But this isn't true at all - some regressions are disastrous, some are just minor nits. Delaying the release has its costs, too, as it may fix a number of serious problems. It's a balancing act. We shouldn't hamstring our ability to do what is best by conforming to arbitrary rules whether they are right or wrong for the circumstances.
 Blocking a pull being merged would be much more efficient
 than dealing with a pull merged long ago that by release time is difficult to
 revert.
Sure. I would block pulls that produce known regressions. The earlier regressions are known the better. But it is a bit unreasonable to expect large project maintainers to rebuild and check for bugs every day. It's why we have a beta test program.
Sep 23 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 24/09/14 06:31, Walter Bright wrote:

 But it is a bit unreasonable to expect
 large project maintainers to rebuild and check for bugs every day. It's
 why we have a beta test program.
The solution is to make it automatic. -- /Jacob Carlborg
Sep 23 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/23/2014 11:24 PM, Jacob Carlborg wrote:
 On 24/09/14 06:31, Walter Bright wrote:

 But it is a bit unreasonable to expect
 large project maintainers to rebuild and check for bugs every day. It's
 why we have a beta test program.
The solution is to make it automatic.
There's no such thing as automatic testing of someone's moving target large project with another moving compiler target. Heck, the dmd release package build scripts break every single release cycle.
Sep 23 2014
next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Wednesday, 24 September 2014 at 06:57:14 UTC, Walter Bright
wrote:
 On 9/23/2014 11:24 PM, Jacob Carlborg wrote:
 On 24/09/14 06:31, Walter Bright wrote:

 But it is a bit unreasonable to expect
 large project maintainers to rebuild and check for bugs every 
 day. It's
 why we have a beta test program.
The solution is to make it automatic.
There's no such thing as automatic testing of someone's moving target large project with another moving compiler target.
It doesn't exist because no one has created it yet :)
 Heck, the dmd release package build scripts break every single 
 release cycle.
Digger succeeds in building all D versions in the last few years. It doesn't build a complete package like the packaging scripts, but the process of just building the compiler and standard library is fairly stable.
Sep 24 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/24/2014 2:56 AM, Vladimir Panteleev wrote:
[...]
It doesn't exist because no one has created it yet :)
I've never heard of a non-trivial project that didn't have constant breakage of its build system. All kinds of reasons - add a file, forget to add it to the manifest. Change the file contents, neglect to update dependencies. Add new dependencies on some script, script fails to run on one configuration. And on and on.
 Heck, the dmd release package build scripts break every single release cycle.
Digger succeeds in building all D versions in the last few years. It doesn't build a complete package like the packaging scripts, but the process of just building the compiler and standard library is fairly stable.
Building of the compiler/library itself is stable because the autotester won't pass them if they won't build. That isn't the problem - the problem is the package scripts fail. (This is why I want the package building to be part of the autotester.)
Sep 24 2014
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Sep 24, 2014 at 03:16:58AM -0700, Walter Bright via Digitalmars-d wrote:
[...]
I've never heard of a non-trivial project that didn't have constant breakage of its build system. All kinds of reasons - add a file, forget to add it to the manifest. Change the file contents, neglect to update dependencies. Add new dependencies on some script, script fails to run on one configuration. And on and on.
Most (all?) of these issues are solved by using a modern build system. (No, make is not a modern build system.)
Heck, the dmd release package build scripts break every single
release cycle.
Digger succeeds in building all D versions in the last few years. It doesn't build a complete package like the packaging scripts, but the process of just building the compiler and standard library is fairly stable.
Building of the compiler/library itself is stable because the autotester won't pass it if they won't build. That isn't the problem - the problem is the package scripts fail. (This is why I want the package building to be part of the autotester.)
That's a good idea. Packaging the compiler toolchain should be automated so that we don't have a packaging crisis every other release when inevitably some script fails to do what we thought it would, or git got itself into one of those wonderful obscure strange states that only an expert can untangle. T -- Trying to define yourself is like trying to bite your own teeth. -- Alan Watts
Sep 24 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/24/14, 12:20 PM, H. S. Teoh via Digitalmars-d wrote:
[...]
That's a good idea. Packaging the compiler toolchain should be automated so that we don't have a packaging crisis every other release when inevitably some script fails to do what we thought it would, or git got itself into one of those wonderful obscure strange states that only an expert can untangle.
We of course agree on all these good things but it's all vacuous unless somebody champions it. Andrei
Sep 24 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Sep 24, 2014 at 12:30:23PM -0700, Andrei Alexandrescu via Digitalmars-d
wrote:
 On 9/24/14, 12:20 PM, H. S. Teoh via Digitalmars-d wrote:
On Wed, Sep 24, 2014 at 03:16:58AM -0700, Walter Bright via Digitalmars-d wrote:
[...]
Building of the compiler/library itself is stable because the
autotester won't pass it if they won't build. That isn't the problem
- the problem is the package scripts fail. (This is why I want the
package building to be part of the autotester.)
That's a good idea. Packaging the compiler toolchain should be automated so that we don't have a packaging crisis every other release when inevitably some script fails to do what we thought it would, or git got itself into one of those wonderful obscure strange states that only an expert can untangle.
We of course agree on all these good things but it's all vacuous unless somebody champions it.
[...] Wasn't Nick Sabalausky working on an automated (or automatable) packaging script some time ago? Whatever happened with that? T -- "Real programmers can write assembly code in any language. :-)" -- Larry Wall
Sep 24 2014
parent "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Wednesday, 24 September 2014 at 19:38:44 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 Wasn't Nick Sabalausky working on an automated (or automatable)
 packaging script some time ago? Whatever happened with that?
I think that's the one that keeps breaking. https://github.com/D-Programming-Language/installer/tree/master/create_dmd_release
Sep 24 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-09-24 12:16, Walter Bright wrote:

 I've never heard of a non-trivial project that didn't have constant
 breakage of its build system. All kinds of reasons - add a file, forget
 to add it to the manifest. Change the file contents, neglect to update
 dependencies. Add new dependencies on some script, script fails to run
 on one configuration. And on and on.
Again, if changing the file contents breaks the build system you're doing it very, very wrong. -- /Jacob Carlborg
Sep 24 2014
next sibling parent reply "Cliff" <cliff.s.hudson gmail.com> writes:
On Wednesday, 24 September 2014 at 19:26:46 UTC, Jacob Carlborg
wrote:
[...]
Again, if changing the file contents breaks the build system you're doing it very, very wrong.
People do it very, very wrong all the time - that's the problem :) Build systems are felt by most developers to be a tax they have to pay to do what they want to do, which is write code and solve non-build-related problems. Unfortunately, build engineering is effectively a specialty of its own when you step outside the most trivial of systems. It's really no surprise how few people can get it right - most people can't even agree on what a build system is supposed to do...
Sep 24 2014
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Sep 24, 2014 at 07:36:05PM +0000, Cliff via Digitalmars-d wrote:
 On Wednesday, 24 September 2014 at 19:26:46 UTC, Jacob Carlborg
 wrote:
[...]
Again, if changing the file contents breaks the build system you're doing it very, very wrong.
People do it very, very wrong all the time - that's the problem :) Build systems are felt by most developers to be a tax they have to pay to do what they want to do, which is write code and solve non-build-related problems.
That's unfortunate indeed. I wish I could inspire them as to how cool a properly-done build system can be. Automatic parallel building, for example. Fully-reproducible, incremental builds (never ever do `make clean` again). Automatic build + packaging in a single command. Incrementally *updating* packaging in a single command. Automatic dependency discovery. And lots more.

A lot of this technology actually already exists. The problem is that still too many people think "make" whenever they hear "build system". Make is but a poor, antiquated caricature of what modern build systems can do. Worse is that most people are resistant to replacing make because of inertia. (Not realizing that by not throwing out make, they're subjecting themselves to a lifetime of unending, unnecessary suffering.)
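The "automatic parallel building" point is really just a property of the dependency graph: steps with no edge between them can run concurrently, and a step joins on everything it depends on. A tiny illustration in D with std.parallelism (`compileStub` and `linkAll` are invented stand-ins, not any real tool's API):

```d
import std.array : join;
import std.parallelism : taskPool;

// Stand-in for running a compiler on one source file.
string compileStub(string src)
{
    return src ~ ".o";
}

// The "compile" steps are independent nodes in the dependency graph, so
// they can be mapped over a thread pool; amap finishes them all before
// the "link" step, which depends on every object, is allowed to run.
string linkAll(string[] sources)
{
    auto objs = taskPool.amap!compileStub(sources);
    return "app <- " ~ objs.join(" + ");
}
```

amap preserves input order, so the "link" line is deterministic even though the compiles ran in parallel.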
 Unfortunately, build engineering is effectively a specialty of its own
 when you step outside the most trivial of systems.  It's really no
 surprise how few people can get it right - most people can't even
 agree on what a build system is supposed to do...
It's that bad, huh?

At its most fundamental level, a build system is really nothing but a dependency management system. You have a directed, acyclic graph of objects that are built from other objects, and a command which takes said other objects as input, and produces the target object(s) as output. The build system takes as input this dependency graph, and runs the associated commands in topological order to produce the product(s). A modern build system can parallelize independent steps automatically. None of this is specific to compiling programs; in fact, it works for any process that takes a set of inputs and incrementally derives intermediate products until the final set of products is produced.

Although the input is the (entire) dependency graph, it's not desirable to specify this graph explicitly (it's far too big in non-trivial projects), so most build systems offer ways of automatically deducing dependencies. Usually this is done by scanning the inputs, and modern build systems offer ways for the user to define new scanning methods for new input types. One particularly clever system, Tup (http://gittup.org/tup/), uses OS call proxying to discover the *exact* set of inputs and outputs for a given command, including hidden dependencies (like reading a compiler configuration file that may change compiler behaviour) that most people don't even know about.

It's also not desirable to have to derive all products from the original inputs all the time; what hasn't changed shouldn't need to be re-processed (we want incremental builds). So modern build systems implement some way of detecting when a node in the dependency graph has changed, thereby requiring all derived products downstream to be rebuilt. The most unreliable method is to scan for file change timestamps (make). A reliable (but slow) method is to compare file hash checksums. 
Tup uses OS filesystem change notifications to detect changes, thereby cutting out the scanning overhead, which can be quite large in complex projects (but it may be unreliable if the monitoring daemon isn't running / after rebooting). These are all just icing on the cake; the fundamental core of a build system is basically dependency graph management. T -- Political correctness: socially-sanctioned hypocrisy.
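The core loop described above -- walk the dependency graph depth-first so commands run in topological order, re-running a node's command only when an input changed -- fits in a few lines of D. A minimal sketch (the `Rule`/`build` names are invented; it uses the timestamp method called unreliable above, and assumes the graph is acyclic, so there is no cycle detection):

```d
import std.algorithm : all;
import std.file : exists, timeLastModified;
import std.stdio : writeln;

// One node in the dependency graph: a target derived from inputs by a command.
struct Rule
{
    string target;
    string[] inputs;            // other rules' targets, or plain source files
    void delegate() command;
}

// Depth-first walk gives topological order (graph assumed acyclic).
// A node is rebuilt only when it is missing or an input is newer than it --
// the timestamp check; a hash- or notification-based check would slot in
// at the `upToDate` test.
void build(Rule[string] rules, string target)
{
    auto rule = target in rules;
    if (rule is null)
        return;                 // a plain source file: nothing to build

    foreach (input; rule.inputs)
        build(rules, input);    // dependencies first

    bool upToDate = target.exists
        && rule.inputs.all!(i => i.timeLastModified <= target.timeLastModified);
    if (!upToDate)
    {
        writeln("building ", target);
        rule.command();
    }
}
```

With `rules` mapping, say, an executable to its objects and each object to its sources, `build(rules, "app")` re-runs only the stale commands; make, SCons and Tup all share this core and differ mainly in how the graph is obtained and how change is detected.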
Sep 24 2014
next sibling parent "Cliff" <cliff.s.hudson gmail.com> writes:
On Wednesday, 24 September 2014 at 20:12:40 UTC, H. S. Teoh via
Digitalmars-d wrote:
[...]

That's unfortunate indeed. I wish I could inspire them as to how cool a properly-done build system can be. [...] These are all just icing on the cake; the fundamental core of a build system is basically dependency graph management.

T
Yes, Google has in fact implemented much of this in their internal build systems, I am led to believe. I have myself written such a system before. In fact, the first project I have been working on in D is exactly this, using OS call interception for validating/discovering dependencies, building execution graphs, etc. I haven't seen Tup before, thanks for pointing it out.
Sep 24 2014
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/24/14, 1:10 PM, H. S. Teoh via Digitalmars-d wrote:
 That's unfortunate indeed. I wish I could inspire them as to how cool a
 properly-done build system can be.
[snip] That's all nice. However: (1) the truth is there's no clear modern build tool that has "won" over make; oh there's plenty of them, but each has its own quirks that makes it tenuous to use; (2) any build system for a project of nontrivial size needs a person/team minding it - never saw such a thing as it's just all automated and it works; (3) especially if the build system is not that familiar, the role of the build czar is all the more important. So the reality is quite a bit more complicated than the shiny city on a hill you describe. Andrei
Sep 24 2014
next sibling parent reply "Atila Neves" <atila.neves gmail.com> writes:
On Wednesday, 24 September 2014 at 21:12:15 UTC, Andrei 
Alexandrescu wrote:
 On 9/24/14, 1:10 PM, H. S. Teoh via Digitalmars-d wrote:
 That's unfortunate indeed. I wish I could inspire them as to 
 how cool a
 properly-done build system can be.
[snip] That's all nice. However: (1) the truth is there's no clear modern build tool that has "won" over make; oh there's plenty of them, but each has its own quirks that makes it tenuous to use; (2) any build system for a project of nontrivial size needs a person/team minding it - never saw such a thing as it's just all automated and it works; (3) especially if the build system is not that familiar, the role of the build czar is all the more important. So the reality is quite a bit more complicated than the shiny city on a hill you describe. Andrei
It depends on who you ask, I guess. I don't know what the definition of "has won" here is. make is certainly widespread, but so are C, Java and Javascript and I don't have much respect for those. I wouldn't use make again unless external forces made me. For one, it's slower than some of the alternatives, which admittedly only matters for larger builds (i.e. somebody's personal project on Github isn't ever going to see the difference, but a company project will).

I'm saying this because I'm actively working on changing my company's existing build systems and did a migration from autotools to CMake in the past. I also looked at what was available for C/C++ recently and concluded it was better to go with the devil I know, namely CMake. From what I know, premake is pretty good (but I haven't used it) and tup looks good for arbitrary rules (and large projects), but the ease of using CMake (despite its ridiculously bad scripting language) beats it. For me anyway.

BTW, I totally agree with 2) above. Build systems are sort of simple but not really and frequently (always?) balloon out of proportion. As alluded to above, I've been that czar.

If I were to write a build system today that had to spell out all of its commands, I'd go with tup or Ninja. That CMake has support for Ninja is the icing on the cake for me. I wrote a Ninja build system generator the other day, that thing is awesome. Make? I'd be just as likely to pick ant. Which I wouldn't.

Atila

P.S. I've thought of writing a build system in D, for which the configuration language would be D. I still might. Right now, dub is serving my needs.

P.P.S. autotools is the worst GNU project I know of
Sep 24 2014
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/24/14, 3:18 PM, Atila Neves wrote:
 On Wednesday, 24 September 2014 at 21:12:15 UTC, Andrei Alexandrescu wrote:
 On 9/24/14, 1:10 PM, H. S. Teoh via Digitalmars-d wrote:
 That's unfortunate indeed. I wish I could inspire them as to how cool a
 properly-done build system can be.
[snip] That's all nice. However: (1) the truth is there's no clear modern build tool that has "won" over make; oh there's plenty of them, but each has its own quirks that makes it tenuous to use; (2) any build system for a project of nontrivial size needs a person/team minding it - never saw such a thing as it's just all automated and it works; (3) especially if the build system is not that familiar, the role of the build czar is all the more important. So the reality is quite a bit more complicated than the shiny city on a hill you describe. Andrei
It depends on who you ask, I guess. I don't know what the definition of "has won" here is.
Simple: ask 10 random engineers "we need a build system". There's no dominant answer. Well except maybe for make :o). Case in point: we have two at Facebook, both created in house. But even that's beside the point. The plea here implies D devs are enamored with make and won't change their ways. Not at all! If there were a better thing with a strong champion, yay to that. But we can't be volunteered to use the build system that somebody else likes. Andrei
Sep 24 2014
prev sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Sep 24, 2014 at 10:18:29PM +0000, Atila Neves via Digitalmars-d wrote:
[...]
 If I were to write a build system today that had to spell out all of
 its commands, I'd go with tup or Ninja. That CMake has support for
 Ninja is the icing on the cake for me. I wrote a Ninja build system
 generator the other day, that thing is awesome.
[...]
 P.S. I've thought of writing a build system in D, for which the
 configuration language would be D. I still might. Right now, dub is
 serving my needs.
I've been thinking of that too! I have in mind a hybrid between tup and SCons, integrating the best ideas of both and discarding the bad parts.

For example, SCons is notoriously bad at scalability: the need to scan huge directory structures of large projects when all you want is to rebuild a tiny subdirectory is disappointing. This part should be replaced by Tup-style OS file change notifications. However, Tup requires arcane shell commands to get anything done -- that's good if you're a Bash guru, but most people are not. For this, I find that SCons's architecture of fully-customizable plugins may work best: ship the system with prebaked rules for common tasks like compiling C/C++/D/Java/etc programs, packaging into tarballs / zips, etc., and expose a consistent API for users to make their own rules where applicable.

If the scripting language is D, that opens up a whole new realm of possibilities like using introspection to auto-derive build dependencies, which would be so cool it'd freeze the sun. Now throw in things like built-in parallelization ala SCons (I'm not sure if tup does that too, I suspect it does), 100%-reproducible builds, auto-packaging, etc., and we might have a contender for Andrei's "winner" build system.
P.P.S. autotools is the worst GNU project I know of
+100! It's a system of hacks built upon patches to broken systems built upon other hacks, a veritable metropolis of cards that will entirely collapse at the slightest missing toothpick in your shell environment / directory structure / stray object files or makefiles leftover from previous builds, thanks to 'make'. It's pretty marvelous for what it does -- autoconfigure complex system-dependent parameters for every existing flavor of Unix that you've never heard of -- when it works, that is. When it doesn't, you're in for days -- no, weeks -- no, months, of hair-pulling frustration trying to figure out where in the metropolis of cards the missing toothpick went. The error messages help -- in the same way stray hair or disturbed sand helps in a crime investigation -- if you know how to interpret them. Which ordinary people don't. T -- Philosophy: how to make a career out of daydreaming.
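The "auto-derive build dependencies" idea above can be approximated even without compiler support: a user-defined scanner plugin of the SCons kind just reads a D source file and extracts its imports. A hedged sketch (`scanImports` is an invented name, and a real tool would rather ask the compiler, e.g. via dmd's -deps output, than use a regex):

```d
import std.algorithm : sort, splitter, uniq;
import std.array : array;
import std.regex : matchAll, regex;
import std.string : strip;

// Deduce which modules a D source file imports by scanning its text.
// Handles `import a.b;` and comma lists like `import a, b;`, but not
// everything the grammar allows (string mixins, etc.) -- hence "sketch".
string[] scanImports(string sourceText)
{
    auto re = regex(`(?:^|\s)import\s+([\w.]+(?:\s*,\s*[\w.]+)*)`, "m");
    string[] mods;
    foreach (m; sourceText.matchAll(re))
        foreach (name; m[1].splitter(','))   // split comma lists
            mods ~= name.strip;
    mods.sort();
    return mods.uniq.array;                  // deduplicated module names
}
```

A build tool would then map each module name back to a file and add an edge to the dependency graph, re-scanning a file whenever it changes.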
Sep 24 2014
next sibling parent reply "Cliff" <cliff.s.hudson gmail.com> writes:
On Wednesday, 24 September 2014 at 22:49:08 UTC, H. S. Teoh via 
Digitalmars-d wrote:
[...]

I've been thinking of that too! I have in mind a hybrid between tup and SCons, integrating the best ideas of both and discarding the bad parts. [...]

T
If you have a passion and interest in this space and would like to collaborate, I would be thrilled. We can also split this discussion off of this thread since it is not D specific.
Sep 24 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Sep 24, 2014 at 11:02:51PM +0000, Cliff via Digitalmars-d wrote:
 On Wednesday, 24 September 2014 at 22:49:08 UTC, H. S. Teoh via
 Digitalmars-d wrote:
On Wed, Sep 24, 2014 at 10:18:29PM +0000, Atila Neves via Digitalmars-d
wrote:
[...]
If I were to write a build system today that had to spell out all of
its commands, I'd go with tup or Ninja. That CMake has support for
Ninja is the icing on the cake for me. I wrote a Ninja build system
generator the other day, that thing is awesome.
[...]
P.S. I've thought of writing a build system in D, for which the
configuration language would be D. I still might. Right now, dub is
serving my needs.
I've been thinking of that too! I have in mind a hybrid between tup and SCons, integrating the best ideas of both and discarding the bad parts.
[...]
 If you have a passion and interest in this space and would like to
 collaborate, I would be thrilled.  We can also split this discussion
 off of this thread since it is not D specific.
I'm interested. What about Atila? T -- Written on the window of a clothing store: No shirt, no shoes, no service.
Sep 24 2014
next sibling parent reply "Cliff" <cliff.s.hudson gmail.com> writes:
On Wednesday, 24 September 2014 at 23:20:00 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 On Wed, Sep 24, 2014 at 11:02:51PM +0000, Cliff via 
 Digitalmars-d wrote:
 On Wednesday, 24 September 2014 at 22:49:08 UTC, H. S. Teoh via
 Digitalmars-d wrote:
On Wed, Sep 24, 2014 at 10:18:29PM +0000, Atila Neves via 
Digitalmars-d
wrote:
[...]
If I were to write a build system today that had to spell 
out all of
its commands, I'd go with tup or Ninja. That CMake has 
support for
Ninja is the icing on the cake for me. I wrote a Ninja build 
system
generator the other day, that thing is awesome.
[...]
P.S. I've thought of writing a build system in D, for which 
the
configuration language would be D. I still might. Right now, 
dub is
serving my needs.
I've been thinking of that too! I have in mind a hybrid between tup and SCons, integrating the best ideas of both and discarding the bad parts.
[...]
 If you have a passion and interest in this space and would 
 like to
 collaborate, I would be thrilled.  We can also split this 
 discussion
 off of this thread since it is not D specific.
I'm interested. What about Atila? T
Yes, whoever has a passionate interest in this space and (of course) an interest in D. Probably the best thing to do is take this to another forum - I don't want to further pollute this thread. Please g-mail to: cliff s hudson. (I'm assuming you are a human and can figure out the appropriate dotted address from the preceding :) )
Sep 24 2014
parent reply Marco Leise <Marco.Leise gmx.de> writes:
Am Wed, 24 Sep 2014 23:56:24 +0000
schrieb "Cliff" <cliff.s.hudson gmail.com>:

 On Wednesday, 24 September 2014 at 23:20:00 UTC, H. S. Teoh via 
 Digitalmars-d wrote:
 On Wed, Sep 24, 2014 at 11:02:51PM +0000, Cliff via 
 Digitalmars-d wrote:
 On Wednesday, 24 September 2014 at 22:49:08 UTC, H. S. Teoh via
 Digitalmars-d wrote:
On Wed, Sep 24, 2014 at 10:18:29PM +0000, Atila Neves via 
Digitalmars-d
wrote:
[...]
If I were to write a build system today that had to spell 
out all of
its commands, I'd go with tup or Ninja. That CMake has 
support for
Ninja is the icing on the cake for me. I wrote a Ninja build 
system
generator the other day, that thing is awesome.
[...]
P.S. I've thought of writing a build system in D, for which 
the
configuration language would be D. I still might. Right now, 
dub is
serving my needs.
I've been thinking of that too! I have in mind a hybrid between tup and SCons, integrating the best ideas of both and discarding the bad parts.
[...]
 If you have a passion and interest in this space and would 
 like to
 collaborate, I would be thrilled.  We can also split this 
 discussion
 off of this thread since it is not D specific.
I'm interested. What about Atila? T
Yes, whoever has a passionate interest in this space and (of course) an interest in D. Probably the best thing to do is take this to another forum - I don't want to further pollute this thread. Please g-mail to: cliff s hudson. (I'm assuming you are a human and can figure out the appropriate dotted address from the preceding :) )
You do know that your email is in plain text in the news message header? :p -- Marco
Sep 26 2014
parent "Cliff" <cliff.s.hudson gmail.com> writes:
On Friday, 26 September 2014 at 07:56:57 UTC, Marco Leise wrote:
 Am Wed, 24 Sep 2014 23:56:24 +0000

 You do know that your email is in plain text in the news
 message header? :p
Actually I did not, as I am not presently using a newsreader to access the forums, just the web page. I keep forgetting to install a proper reader :) Thanks!
Sep 26 2014
prev sibling parent reply "Atila Neves" <atila.neves gmail.com> writes:
 [...]
 If you have a passion and interest in this space and would 
 like to
 collaborate, I would be thrilled.  We can also split this 
 discussion
 off of this thread since it is not D specific.
I'm interested. What about Atila?
Definitely interested. BTW, I agree with everything you said about what would be desirable from a build system. Basically we had pretty much the same idea. :) Atila
Sep 25 2014
parent "Joakim" <dlang joakim.fea.st> writes:
On Thursday, 25 September 2014 at 07:28:10 UTC, Atila Neves wrote:
 [...]
 If you have a passion and interest in this space and would 
 like to
 collaborate, I would be thrilled.  We can also split this 
 discussion
 off of this thread since it is not D specific.
I'm interested. What about Atila?
Definitely interested. BTW, I agree with everything you said about what would be desirable from a build system. Basically we had pretty much the same idea. :)
Glad to hear you guys are going to take a pass at this. :) You might want to check out ekam, a since-abandoned build tool that sounded similar, started by one of the guys behind Protocol Buffers: http://kentonsprojects.blogspot.in/search/label/ekam Maybe he had some ideas you'd want to steal.
Sep 25 2014
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/24/14, 3:47 PM, H. S. Teoh via Digitalmars-d wrote:
 I've been thinking of that too! I have in mind a hybrid between tup and
 SCons, integrating the best ideas of both and discarding the bad parts.

 For example, SCons is notoriously bad at scalability: the need to scan
 huge directory structures of large projects when all you want is to
 rebuild a tiny subdirectory, is disappointing. This part should be
 replaced by Tup-style OS file change notifications.

 However, Tup requires arcane shell commands to get anything done --
 that's good if you're a Bash guru, but most people are not.
Well, what I see here is there's no really good build system there. So then how can we interpret your long plea for dropping make like a bad habit and using "a properly-done" build system with these amazing qualities? To quote:
 I wish I could inspire them as to how cool a
 properly-done build system can be. Automatic parallel building, for
 example. Fully-reproducible, incremental builds (never ever do `make
 clean` again). Automatic build + packaging in a single command.
 Incrementally *updating* packaging in a single command. Automatic
 dependency discovery. And lots more. A lot of this technology actually
 already exists. The problem is that still too many people think "make"
 whenever they hear "build system".  Make is but a poor, antiquated
 caricature of what modern build systems can do. Worse is that most
 people are resistant to replacing make because of inertia. (Not
 realizing that by not throwing out make, they're subjecting themselves
 to a lifetime of unending, unnecessary suffering.)
So should we take it that actually that system does not exist but you want to create it? Andrei
Sep 24 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Sep 24, 2014 at 04:16:20PM -0700, Andrei Alexandrescu via Digitalmars-d
wrote:
 On 9/24/14, 3:47 PM, H. S. Teoh via Digitalmars-d wrote:
I've been thinking of that too! I have in mind a hybrid between tup
and SCons, integrating the best ideas of both and discarding the bad
parts.

For example, SCons is notoriously bad at scalability: the need to
scan huge directory structures of large projects when all you want is
to rebuild a tiny subdirectory, is disappointing. This part should be
replaced by Tup-style OS file change notifications.

However, Tup requires arcane shell commands to get anything done --
that's good if you're a Bash guru, but most people are not.
Well, what I see here is there's no really good build system there. So then how can we interpret your long plea for dropping make like a bad habit and using "a properly-done" build system with these amazing qualities? To quote:
I wish I could inspire them as to how cool a properly-done build
system can be. Automatic parallel building, for example.
Fully-reproducible, incremental builds (never ever do `make clean`
again). Automatic build + packaging in a single command.
Incrementally *updating* packaging in a single command. Automatic
dependency discovery. And lots more. A lot of this technology
actually already exists. The problem is that still too many people
think "make" whenever they hear "build system".  Make is but a poor,
antiquated caricature of what modern build systems can do. Worse is
that most people are resistant to replacing make because of inertia.
(Not realizing that by not throwing out make, they're subjecting
themselves to a lifetime of unending, unnecessary suffering.)
So should we take it that actually that system does not exist but you want to create it?
[...] You're misrepresenting my position. *In spite of their current flaws*, modern build systems like SCons and Tup already far exceed make in their basic capabilities and reliability. Your argument reduces to declining to replace a decrepit car that breaks down every other day with a new one, just because the new car isn't a flawlessly perfect epitome of engineering yet and still needs a little maintenance every half a year. T -- Indifference will certainly be the downfall of mankind, but who cares? -- Miquel van Smoorenburg
Sep 24 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/24/14, 4:48 PM, H. S. Teoh via Digitalmars-d wrote:
 You're misrepresenting my position.*In spite of their current flaws*,
 modern build systems like SCons and Tup already far exceed make in their
 basic capabilities and reliability.
Fair enough, thanks. Anyhow the point is, to paraphrase Gandhi: "Be the change you want to see in dlang's build system" :o). -- Andrei
Sep 24 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Sep 24, 2014 at 05:37:37PM -0700, Andrei Alexandrescu via Digitalmars-d
wrote:
 On 9/24/14, 4:48 PM, H. S. Teoh via Digitalmars-d wrote:
You're misrepresenting my position.*In spite of their current flaws*,
modern build systems like SCons and Tup already far exceed make in
their basic capabilities and reliability.
Fair enough, thanks. Anyhow the point is, to paraphrase Gandhi: "Be the change you want to see in dlang's build system" :o). -- Andrei
Well, Cliff & I (and whoever's interested) will see what we can do about that. Perhaps in the not-so-distant future we may have a D build tool that can serve as the go-to build tool for D projects. T -- It is the quality rather than the quantity that matters. -- Lucius Annaeus Seneca
Sep 24 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/24/14, 6:54 PM, H. S. Teoh via Digitalmars-d wrote:
 On Wed, Sep 24, 2014 at 05:37:37PM -0700, Andrei Alexandrescu via
Digitalmars-d wrote:
 On 9/24/14, 4:48 PM, H. S. Teoh via Digitalmars-d wrote:
 You're misrepresenting my position.*In spite of their current flaws*,
 modern build systems like SCons and Tup already far exceed make in
 their basic capabilities and reliability.
Fair enough, thanks. Anyhow the point is, to paraphrase Gandhi: "Be the change you want to see in dlang's build system" :o). -- Andrei
Well, Cliff & I (and whoever's interested) will see what we can do about that. Perhaps in the not-so-distant future we may have a D build tool that can serve as the go-to build tool for D projects.
Sounds like a fun project. In case you'll allow me an opinion: I think dependency management is important for D, but a build tool is highly liable to become a distraction. Other ways to improve dependency management are likely to be more impactful. Andrei
Sep 24 2014
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Sep 24, 2014 at 07:05:32PM -0700, Andrei Alexandrescu via Digitalmars-d
wrote:
 On 9/24/14, 6:54 PM, H. S. Teoh via Digitalmars-d wrote:
On Wed, Sep 24, 2014 at 05:37:37PM -0700, Andrei Alexandrescu via Digitalmars-d
wrote:
On 9/24/14, 4:48 PM, H. S. Teoh via Digitalmars-d wrote:
You're misrepresenting my position.*In spite of their current
flaws*, modern build systems like SCons and Tup already far exceed
make in their basic capabilities and reliability.
Fair enough, thanks. Anyhow the point is, to paraphrase Gandhi: "Be the change you want to see in dlang's build system" :o). -- Andrei
Well, Cliff & I (and whoever's interested) will see what we can do about that. Perhaps in the not-so-distant future we may have a D build tool that can serve as the go-to build tool for D projects.
Sounds like a fun project. In case you'll allow me an opinion: I think dependency management is important for D, but a build tool is highly liable to become a distraction. Other ways to improve dependency management are likely to be more impactful.
[...] Clearly, the more automatic dependency management can be, the better. In an ideal world, one should be able to say, "here is my source tree, and here's the file that contains main()", and the build tool should automatically figure out all the dependencies as well as how to compile the sources into the final executable. In pseudocode, all one needs to write should in theory be simply:

	Program("mySuperApp", "src/main.d");

and everything else will be automatically figured out. But of course, this is currently not yet fully practical, so some amount of manual dependency specification will be needed. But the idea is to keep those minimal. T -- This is a tpyo.
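To make the "everything else is figured out" step concrete: a crude, source-level dependency scan for D files might look like the sketch below (Python for brevity; all helper names hypothetical). It is deliberately naive: textual scanning misses string mixins and cannot tell whether a scoped import inside a template is ever actually instantiated, so it over-approximates the dependency graph.

```python
import re

# Matches `import foo.bar;` and `import foo.bar : baz;` at line starts,
# including `public import` / `static import` forms. Deliberately crude:
# reports conditional (scoped/versioned) imports whether or not they are
# actually compiled in.
IMPORT_RE = re.compile(r'^\s*(?:public\s+|static\s+)?import\s+([\w.]+)', re.M)

def scan_imports(source: str) -> list:
    """Return the module names a D source file textually imports."""
    return IMPORT_RE.findall(source)

def module_to_path(module: str) -> str:
    """Map a module name to a source path (simplified: ignores package.d)."""
    return module.replace(".", "/") + ".d"
```

A real tool would want the compiler's own dependency output rather than a regex, for exactly the reasons discussed in this thread.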
Sep 24 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/24/14, 9:15 PM, H. S. Teoh via Digitalmars-d wrote:
 On Wed, Sep 24, 2014 at 07:05:32PM -0700, Andrei Alexandrescu via
Digitalmars-d wrote:
 On 9/24/14, 6:54 PM, H. S. Teoh via Digitalmars-d wrote:
 On Wed, Sep 24, 2014 at 05:37:37PM -0700, Andrei Alexandrescu via
Digitalmars-d wrote:
 On 9/24/14, 4:48 PM, H. S. Teoh via Digitalmars-d wrote:
 You're misrepresenting my position.*In spite of their current
 flaws*, modern build systems like SCons and Tup already far exceed
 make in their basic capabilities and reliability.
Fair enough, thanks. Anyhow the point is, to paraphrase Gandhi: "Be the change you want to see in dlang's build system" :o). -- Andrei
Well, Cliff & I (and whoever's interested) will see what we can do about that. Perhaps in the not-so-distant future we may have a D build tool that can serve as the go-to build tool for D projects.
Sounds like a fun project. In case you'll allow me an opinion: I think dependency management is important for D, but a build tool is highly liable to become a distraction. Other ways to improve dependency management are likely to be more impactful.
[...] Clearly, the more automatic dependency management can be, the better. In an ideal world, one should be able to say, "here is my source tree, and here's the file that contains main()", and the build tool should automatically figure out all the dependencies as well as how to compile the sources into the final executable. In pseudocode, all one needs to write should in theory be simply: Program("mySuperApp", "src/main.d"); and everything else will be automatically figured out. But of course, this is currently not yet fully practical. So some amount of manual dependency specification will be needed. But the idea is to keep those minimal.
Actually you can't do this for D properly without enlisting the help of the compiler. Scoped import is a very interesting conditional dependency (it is realized only if the template is instantiated). Also, lazy opening of imports is almost guaranteed to have a huge good impact on build times. Your reply confirms my worst fear: you're looking at yet another general build system, of which there are plenty of carcasses rotting in the drought left and right of highway 101. The build system that will be successful for D will cooperate with the compiler, which will give it fine-grained dependency information. Haskell does the same with good results. Andrei
Sep 24 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/24/2014 9:26 PM, Andrei Alexandrescu wrote:
 The build system that will be successful for D will cooperate with the
compiler,
 which will give it fine-grained dependency information. Haskell does the same
 with good results.
There's far more to a build system than generating executables. And there's more to generating executables than D source files (there may be C files in there, and C++ files, YACC files, and random other files). Heck, dmd uses C code to generate more .c source files. I've seen more than one fabulous build system that couldn't cope with that.

Make is the C++ of build systems. It may be ugly, but you can get it to work.
Sep 24 2014
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Sep 24, 2014 at 09:44:26PM -0700, Walter Bright via Digitalmars-d wrote:
 On 9/24/2014 9:26 PM, Andrei Alexandrescu wrote:
The build system that will be successful for D will cooperate with
the compiler, which will give it fine-grained dependency information.
Haskell does the same with good results.
I didn't specify *how* the build system would implement automatic dependency management now, did I? :-) Nowhere did I say that the build system will (re)invent its own way of deriving source file dependencies. FYI, Tup is able to tell exactly which file(s) are read by the compiler when compiling a particular program (or source file), so its dependency graph is actually accurate, unlike build systems that depend on source-level scanning, which would run into the problems you describe with conditional local imports.
 There's far more to a build system than generating executables. And
 there's more to generating executables than D source files (there may
 be C files in there, and C++ files, YACC files, and random other
 files).
 
 Heck, dmd uses C code to generated more .c source files. I've seen
 more than one fabulous build system that couldn't cope with that.
Which build system would that be? I'll be sure to avoid it. :-P

I've written SCons scripts that correctly handle auto-generated source files. For example, a lex/flex source file gets compiled to a .c source file, which in turn compiles to the object file that then gets linked into the executable.

Heck, I have a working SCons script that handles the generation of animations from individual image frames, which are in turn generated by invocations of povray on scene files programmatically generated by a program that reads script input and polytope definitions in a DSL. The image generation includes scripted trimming and transparency adjustments of each individual frame, specified *in the build spec* via imagemagick, and the entire process from end to end is automatically parallelized by SCons, which correctly sequences each step in a website project with numerous such generation tasks, interleaving multiple generation procedures as CPUs become free without any breakage in dependencies. This process even optionally includes a final deployment step which copies the generated files into a web directory; it detects steps whose products haven't changed from the last run and elides redundant copying of the unchanged files, thus preserving last-updated timestamps on the target files.

So before you bash modern build systems in favor of make, do take some time to educate yourself about what they're actually capable of. :-) You'll be a lot more convincing then.
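The lex-to-C-to-object-to-executable chain described above is just a dependency DAG walked in prerequisite order. A toy Python sketch of the sequencing (hypothetical file names, no real tools invoked):

```python
# Toy dependency graph for a generated-source pipeline:
#   parser.y -> parser.c -> parser.o -> app
# Each target maps to its direct prerequisites.
deps = {
    "app":      ["parser.o", "main.o"],
    "parser.o": ["parser.c"],
    "parser.c": ["parser.y"],
    "main.o":   ["main.c"],
    "parser.y": [],
    "main.c":   [],
}

def build_order(target, graph):
    """Post-order DFS: prerequisites come before the targets needing them."""
    order, seen = [], set()
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in graph.get(node, []):
            visit(dep)
        order.append(node)
    visit(target)
    return order

order = build_order("app", deps)
```

Independent subtrees (here, parser.o and main.o) share no edges, which is what lets a tool like SCons build them in parallel while still sequencing each chain correctly.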
 Make is the C++ of build systems. It may be ugly, but you can get it
 to work.
If you like building real airplanes out of Lego pieces, be my guest. Me, I prefer using more suitable tools. :-P T -- The diminished 7th chord is the most flexible and fear-instilling chord. Use it often, use it unsparingly, to subdue your listeners into submission!
Sep 24 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/24/2014 10:08 PM, H. S. Teoh via Digitalmars-d wrote:
 If you like building real airplanes out of Lego pieces, be my guest. Me,
 I prefer using more suitable tools. :-P
I spend very little time fussing with make. Making it work better (even to 0 cost) will add pretty much nothing to my productivity.
Sep 24 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Sep 24, 2014 at 10:23:48PM -0700, Walter Bright via Digitalmars-d wrote:
 On 9/24/2014 10:08 PM, H. S. Teoh via Digitalmars-d wrote:
If you like building real airplanes out of Lego pieces, be my guest.
Me, I prefer using more suitable tools. :-P
I spend very little time fussing with make. Making it work better (even to 0 cost) will add pretty much nothing to my productivity.
Oh? Let's see. One time, while git bisecting to track down a dmd regression, I was running into all sorts of strange inconsistent behaviour from dmd. After about a good 15-30 mins' worth of frustration, I tracked down the source of the problem to make not cleaning up previous .o files, and thus producing a corrupted dmd which contained a mixture of who knows what versions of each .o left behind from previous git bisect steps. So I realized that I had to do a make clean every time to ensure I'm actually getting the dmd I think I'm getting. Welp, that just invalidated my entire git bisect session so far. So git bisect reset and start over. Had dmd used a reliable build system, I wouldn't have wasted that time, plus I'd have the benefit of incremental builds instead of the extra time spent running make clean, and *then* rebuilding everything from scratch. Yup, it wouldn't *add* to my productivity, but it certainly *would* cut down on my *unproductivity*! Now, dmd's makefile is very much on the 'simple' end of the scale, which I'm sure you'll agree if you've seen the kind of makefiles I have to deal with at work. Being simple means it also doesn't expose many of make's myriad problems. I've had to endure through builds that take 30 minutes to complete for a 1-line code change (and apparently I'm already counted lucky -- I hear of projects whose builds could span hours or even *days* if you're unlucky enough to have to build on a low-end machine), only to find that the final image was corrupted because somewhere in that dense forest of poorly-hackneyed makefiles in the source tree somebody had forgotten to cleanup a stray .so file, which is introducing the wrong versions of the wrong symbols to the wrong places, causing executables to go haywire when deployed. 
Not to mention that some smart people in the team have decided that needing to 'make clean' every single time following an svn update is "normal" practice, thus every makefile in their subdirectory is completely broken, non-parallelizable, and extremely fragile. Hooray for the countless afternoons I spent fixing D bugs instead of doing paid work -- because I have to do yet another `make clean; make` just to be sure any subsequent bugs I find are actual bugs, and not inconsistent builds caused by our beloved make. You guys should be thankful, as otherwise I would've been too productive to have time to fix D bugs. :-P

And let's not forget the lovely caching of dependency files from gcc that our makefiles attempt to leverage in order to have more accurate dependency information -- information which is mostly worthless because you have to make clean; make after making major changes anyway. One time I didn't, due to time pressure, and was rewarded with another heisenbug caused by stale .dep files causing some source changes to not be reflected in the build. Oh yeah, spent another day or two trying to figure that one out.

Oh, and did I mention the impossibility of parallelizing our builds because of certain aforementioned people who think `make clean; make` is "normal workflow"? I'd hazard a guess that I could take a year off work from all the accumulated unproductive time spent waiting for countless serial builds to complete, where parallelized builds would've saved at least half that time, more on modern PCs.

Reluctance to get rid of make is kinda like reluctance to use smart pointers / GC because you enjoy manipulating raw pointers in high-level application code. You can certainly do many things with raw pointers, and do it very efficiently 'cos you've already memorized the various arcane hacks needed to make things work over the years -- recite them in your sleep even.
It's certainly more productive than spending downtime learning how to use smart pointers or, God forbid, the GC -- after you discount all the time and effort expended in tracking down null pointer segfaults, dangling pointer problems, memory corruption issues, missing sizeof's in malloc calls, and off-by-1 array bugs, that is. To each his own, I say. :-P T -- People tell me that I'm skeptical, but I don't believe them.
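The stale-object-file failure mode in the bisect story above is fundamentally a timestamp problem; fingerprinting build inputs by content sidesteps it. A minimal, hypothetical Python sketch (SCons takes a similar content-signature approach, though the details here are invented):

```python
import hashlib

def fingerprint(*inputs: bytes) -> str:
    """Hash of everything that went into a build product: source
    contents, compiler flags, compiler version, etc. If any input
    changes, the fingerprint changes, regardless of file timestamps."""
    h = hashlib.sha256()
    for blob in inputs:
        h.update(blob)
    return h.hexdigest()

def is_stale(recorded: str, *inputs: bytes) -> bool:
    """Compare the fingerprint recorded at the last successful build
    with the current one; a mismatch means the product must be rebuilt."""
    return fingerprint(*inputs) != recorded
```

With fingerprints recorded per target, a leftover .o from a different git revision can never be mistaken for up to date, and `make clean` becomes unnecessary.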
Sep 24 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/24/2014 11:05 PM, H. S. Teoh via Digitalmars-d wrote:
 On Wed, Sep 24, 2014 at 10:23:48PM -0700, Walter Bright via Digitalmars-d
wrote:
 On 9/24/2014 10:08 PM, H. S. Teoh via Digitalmars-d wrote:
 If you like building real airplanes out of Lego pieces, be my guest.
 Me, I prefer using more suitable tools. :-P
I spend very little time fussing with make. Making it work better (even to 0 cost) will add pretty much nothing to my productivity.
Oh? Let's see. One time, while git bisecting to track down a dmd regression, I was running into all sorts of strange inconsistent behaviour from dmd. After about a good 15-30 mins' worth of frustration, I tracked down the source of the problem to make not cleaning up previous .o files, and thus producing a corrupted dmd which contained a mixture of who knows what versions of each .o left behind from previous git bisect steps. So I realized that I had to do a make clean every time to ensure I'm actually getting the dmd I think I'm getting. Welp, that just invalidated my entire git bisect session so far. So git bisect reset and start over. Had dmd used a reliable build system, I wouldn't have wasted that time, plus I'd have the benefit of incremental builds instead of the extra time spent running make clean, and *then* rebuilding everything from scratch. Yup, it wouldn't *add* to my productivity, but it certainly *would* cut down on my *unproductivity*! Now, dmd's makefile is very much on the 'simple' end of the scale, which I'm sure you'll agree if you've seen the kind of makefiles I have to deal with at work. Being simple means it also doesn't expose many of make's myriad problems. I've had to endure through builds that take 30 minutes to complete for a 1-line code change (and apparently I'm already counted lucky -- I hear of projects whose builds could span hours or even *days* if you're unlucky enough to have to build on a low-end machine), only to find that the final image was corrupted because somewhere in that dense forest of poorly-hackneyed makefiles in the source tree somebody had forgotten to cleanup a stray .so file, which is introducing the wrong versions of the wrong symbols to the wrong places, causing executables to go haywire when deployed. 
Not to mention that some smart people in the team have decided that needing to 'make clean' every single time following an svn update is "normal" practice, thus every makefile in their subdirectory is completely broken, non-parallelizable, and extremely fragile. Hooray for the countless afternoons I spent fixing D bugs instead of doing paid work -- because I have to do yet another `make clean; make` just to be sure any subsequent bugs I find are actual bugs, and not inconsistent builds caused by our beloved make. You guys should be thankful, as otherwise I would've been too productive to have time to fix D bugs. :-P And let's not forget the lovely caching of dependency files from gcc that our makefiles attempt to leverage in order to have more accurate dependency information -- information which is mostly worthless because you have to make clean; make after making major changes anyway -- one time I didn't due to time pressure, and was rewarded with another heisenbug caused by stale .dep files causing some source changes to not be reflected in the build. Oh yeah, spent another day or two trying to figure that one out. Oh, and did I mention the impossibility of parallelizing our builds because of certain aforementioned people who think `make clean; make` is "normal workflow"? I'd hazard a guess that I could take a year off work from all the accumulated unproductive time waiting for countless serial builds to complete, where parallelized builds would've saved at least half that time, more on modern PCs. Reluctance to get rid of make is kinda like reluctance to use smart pointers / GC because you enjoy manipulating raw pointers in high-level application code. You can certainly do many things with raw pointers, and do it very efficiently 'cos you've already memorized the various arcane hacks needed to make things work over the years -- recite them in your sleep even.
It's certainly more productive than spending downtime learning how to use smart pointers or, God forbid, the GC -- after you discount all the time and effort expended in tracking down null pointer segfaults, dangling pointer problems, memory corruption issues, missing sizeof's in malloc calls, and off-by-1 array bugs, that is. To each his own, I say. :-P
You noted my preference for simple makefiles (even if they tend to get verbose). I've been using make for 30 years now, and rarely have problems with it. Of course, I also eschew using every last feature of make, which too many people feel compelled to do. So no, my makefiles don't consist of "arcane hacks". They're straightforward and rather boring. And I use make -j on posix for parallel builds, it works fine on dmd.
Sep 24 2014
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Sep 24, 2014 at 11:14:30PM -0700, Walter Bright via Digitalmars-d wrote:
[...]
 You noted my preference for simple makefiles (even if they tend to get
 verbose). I've been using make for 30 years now, and rarely have
 problems with it. Of course, I also eschew using every last feature of
 make, which too many people feel compelled to do. So no, my makefiles
 don't consist of "arcane hacks". They're straightforward and rather
 boring.
 
 And I use make -j on posix for parallel builds, it works fine on dmd.
Well, I *am* grateful that building dmd is as simple as identifying which makefile to use for your platform, and make away. And the fact that building dmd with -j works.

Sadly, 95% of non-trivial make-based projects out there require all manner of arcane hacks that break across platforms, requiring patch upon patch (*ahem*autotools*cough*) to ostensibly work, but in practice they end up breaking anyway and you're left with a huge mess to debug. And -j generally doesn't work. :-/ (It either crashes with obscure non-reproducible compile errors caused by race conditions, or gives you a binary that may or may not represent the source code. Sigh.)

In fact, one thing that impressed me immensely is the fact that building the dmd toolchain is as simple as it is. I know of no other compiler project that is comparable. Building gcc, for example, is a wondrous thing to behold -- when it works. When it doesn't (which is any time you dare do the slightest thing not according to the meticulous build instructions)... it's nightmarish.

But even then, I *did* run into the problem of non-reproducible builds with dmd. So there's still a blemish there. :-P Makes me want to alias `make` to `make clean; make` just for this alone... Oh wait, I've already done that -- I have a script for pulling git HEAD from github and rebuilding dmd, druntime, phobos, and it *does* run make clean before rebuilding everything! Sigh. The only redeeming thing is that dmd/druntime/phobos build at lightning speed (relative to, say, the gcc toolchain), so this isn't *too* intolerable a cost. But still. It's one of my pet peeves about make... T -- Many open minds should be closed for repairs. -- K5 user
Sep 24 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 25/09/14 08:39, H. S. Teoh via Digitalmars-d wrote:

 In fact, one thing that impressed me immensely is the fact that building
 the dmd toolchain is as simple as it is. I know of no other compiler
 project that is comparable. Building gcc, for example, is a wondrous
 thing to behold -- when it works. When it doesn't (which is anytime you
 dare do the slightest thing not according to the meticulous build
 instructions)... it's nightmarish.
Yeah, I agree. It's a nice property of DMD.
 But even then, I *did* run into the problem of non-reproducible builds
 with dmd. So there's still a blemish there. :-P  Makes me want to alias
 `make` to `make clean; make` just for this alone
I don't think the "clean" action can be completely excluded. I recently tried to build a project, without running "clean", and got some unexpected errors. Then I remembered I had just installed a new version of the compiler. -- /Jacob Carlborg
Sep 25 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Sep 25, 2014 at 03:47:22PM +0200, Jacob Carlborg via Digitalmars-d
wrote:
 On 25/09/14 08:39, H. S. Teoh via Digitalmars-d wrote:
[...]
But even then, I *did* run into the problem of non-reproducible
builds with dmd. So there's still a blemish there. :-P  Makes me want
to alias `make` to `make clean; make` just for this alone
I don't think the "clean" action can be completely excluded. I recently tried to build a project, without running "clean", and got some unexpected errors. Then I remembered I had just installed a new version of the compiler.
[...]

That's the hallmark of make-based projects. SCons projects, OTOH, almost never need to do that (except if you totally screwed up your SCons scripts). The super-complicated SCons script that I described in another post? I don't even remember the last *year* when I had to do the equivalent of a clean. I've been updating, branching, merging the workspace for years now, and it Just Builds -- correctly at that.

T -- Famous last words: I *think* this will work...
Sep 25 2014
next sibling parent "Atila Neves" <atila.neves gmail.com> writes:
On Thursday, 25 September 2014 at 14:25:25 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 On Thu, Sep 25, 2014 at 03:47:22PM +0200, Jacob Carlborg via 
 Digitalmars-d wrote:
 On 25/09/14 08:39, H. S. Teoh via Digitalmars-d wrote:
[...]
But even then, I *did* run into the problem of 
non-reproducible
builds with dmd. So there's still a blemish there. :-P  Makes 
me want
to alias `make` to `make clean; make` just for this alone
I don't think the "clean" action can be completely excluded. I recently tried to build a project, without running "clean", and got some unexpected errors. Then I remembered I had just installed a new version of the compiler.
[...] That's the hallmark of make-based projects. SCons projects, OTOH, almost never needs to do that (except if you totally screwed up your SCons scripts). The super-complicated SCons script that I described in another post? I don't even remember that last *year* when I had to do the equivalent of a clean. I've been updating, branching, merging the workspace for years now, and it Just Builds -- correctly at that.
+1. I remember the days when I dealt with hand-written makefiles and ran "make clean" just to be sure. I don't miss them.

If you ever _need_ to type "make clean" then the build system is broken. Simple as. It's analogous to deleting the object file to make sure the compiler does its job properly.

Atila
Sep 25 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-09-25 16:23, H. S. Teoh via Digitalmars-d wrote:

 That's the hallmark of make-based projects.
This was Ninja actually. But how would the build system know I've updated the compiler? -- /Jacob Carlborg
Sep 25 2014
next sibling parent reply "Cliff" <cliff.s.hudson gmail.com> writes:
On Thursday, 25 September 2014 at 17:42:09 UTC, Jacob Carlborg
wrote:
 On 2014-09-25 16:23, H. S. Teoh via Digitalmars-d wrote:

 That's the hallmark of make-based projects.
This was Ninja actually. But how would the build system know I've updated the compiler?
The compiler is an input to the build rule. Consider the rule:

build:
	$(CC) my.c -o my.o

What are the dependencies for the rule "build"? my.c, obviously. Anything the compiler accesses during the compilation of my.c. And *the compiler itself*, referenced here as $(CC). From a dependency management standpoint, executables are not special, except that running them leads to the discovery of more dependencies than may be statically specified.
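The idea generalizes beyond make: model the compiler binary as just another input node, and any change to it marks the target stale. Here is a toy sketch of that bookkeeping in Python, using content hashes rather than timestamps; the `Rule` class and its names are illustrative assumptions, not any real build tool's API.

```python
# Sketch: treat the compiler itself as an ordinary dependency of a
# build rule. A target is stale whenever the combined hash of ALL its
# inputs (sources AND the compiler) differs from the last build.
import hashlib

class Rule:
    def __init__(self, target, inputs):
        self.target = target    # e.g. "my.o"
        self.inputs = inputs    # dict: input name -> file contents (bytes)
        self.last_sig = None    # signature recorded at the last build

    def input_signature(self):
        h = hashlib.sha256()
        for name in sorted(self.inputs):
            h.update(name.encode())
            h.update(self.inputs[name])
        return h.hexdigest()

    def is_stale(self):
        return self.last_sig != self.input_signature()

    def build(self):
        self.last_sig = self.input_signature()

# The compiler binary is listed as an input right next to my.c:
rule = Rule("my.o", {"my.c": b"int main(){}", "$(CC)": b"dmd-2.065"})
rule.build()

# Upgrading the compiler changes an input, so the target goes stale,
# even though my.c is untouched:
rule.inputs["$(CC)"] = b"dmd-2.066"
assert rule.is_stale()
```

Make only sees timestamps on files named in the prerequisite list, which is exactly why it misses this case unless $(CC) is spelled out as a prerequisite.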
Sep 25 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/25/14, 10:47 AM, Cliff wrote:
 On Thursday, 25 September 2014 at 17:42:09 UTC, Jacob Carlborg
 wrote:
 On 2014-09-25 16:23, H. S. Teoh via Digitalmars-d wrote:

 That's the hallmark of make-based projects.
This was Ninja actually. But how would the build system know I've updated the compiler?
The compiler is an input to the build rule. Consider the rule:

build:
	$(CC) my.c -o my.o

What are the dependencies for the rule "build"? my.c obviously. Anything the compiler accesses during the compilation of my.c. And *the compiler itself*, referenced here as $(CC). From a dependency management standpoint, executables are not special except as running them leads to the discovery of more dependencies than may be statically specified.
Yah, it only gets odder from there. E.g. all stuff being built must also list the rule file itself as a dependency.

FWIW I've seen implications that a better build tool would improve the lot of dlang contributors. My response to that would be: I agree that Phobos' posix.mak (https://github.com/D-Programming-Language/phobos/blob/master/posix.mak) is a tad baroque and uses a couple of the more obscure features of make. I also consider that a few changes to it could have been effected simpler and better. However, as far as I can tell it's not a large issue for dlang contributors, and to the extent it is, it's not because of make's faults.

Andrei
Sep 25 2014
prev sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Sep 25, 2014 at 07:42:08PM +0200, Jacob Carlborg via Digitalmars-d
wrote:
 On 2014-09-25 16:23, H. S. Teoh via Digitalmars-d wrote:
 
That's the hallmark of make-based projects.
This was Ninja actually. But how would the build system know I've updated the compiler?
[...]

The compiler and compile flags are inputs to the build rules in SCons.

In my SCons projects, when I change compile flags (possibly for a subset of source files), it correctly figures out which subset (or the entire set) of files needs to be recompiled with the new flags. Make fails, and you end up with an inconsistent executable.

In my SCons projects, when I upgrade the compiler, it recompiles everything with the new compiler. Make doesn't detect a difference, and if you make a change and recompile, suddenly you got an executable 80% compiled with the old compiler and 20% compiled with the new compiler. Most of the time it doesn't make a difference... but when it does, have fun figuring out where the problem lies. (Or just make clean; make yet again... the equivalent of which is basically what SCons would have done 5 hours ago.)

In my SCons projects, when I upgrade the system C libraries, it recompiles everything that depends on the updated header files *and* library files. In make, it often fails to detect that the .so's have changed, so it fails to relink your program. Result: your executable behaves strangely at runtime due to wrong .so being linked, but the problem vanishes once you do a make clean; make.

Basically, when you use make, you have to constantly do make clean; make just to be sure everything is consistent. In SCons, the build system does it for you -- and usually more efficiently than make clean; make, because it knows exactly what needs recompilation and what doesn't.

In my SCons projects, when I check out a different version of the sources to examine some old code and then switch back to the latest workspace, SCons detects that file contents haven't changed since the last build, so it doesn't rebuild anything. In make, it thinks the entire workspace has changed, and you have to wait another half hour while it rebuilds everything.
It doesn't even detect that all intermediate products like .o's are identical to the last time, so it will painstakingly relink everything again. SCons will detect when a .o file hasn't changed from the last build (e.g., you only changed a comment in the source file), and it won't bother relinking your binaries or trigger any of the downstream dependencies' rebuilds.

Make-heads find the idea of the compiler being part of the input to a build rule "strange"; to me, it's common sense. What's strange is the fact that make doesn't (and can't) guarantee anything about the build -- you don't know for sure whether an incremental build gives you the same executable as a build from a clean workspace. You don't know if recompiling after checking out a previous release of your code will actually give you the same binaries that you shipped 2 months ago. You don't know if the latest build linked with the latest system libraries. You don't know if the linked libraries are consistent with each other. You basically have no assurance of *anything*, except if you make clean; make.

But wait a minute, I thought the whole point of make is incremental building. If you have to make clean; make *every* *single* *time*, that completely defeats the purpose, and you might as well just put your compile commands in a bash script instead, and run that to rebuild everything from scratch every time. Yeah it's slow, but at least you know for sure your executables are always what you think they are! (And in practice, the bash script might actually be faster -- I lost track of how much time I wasted trying to debug something, only to go back and make clean; make and discover that the bug vanished. Had I used the shell script every single time, I could've saved however many hours I spent tracking down these heisenbugs.)

T -- Gone Chopin. Bach in a minuet.
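The "don't relink when the .o came out byte-identical" behavior described above is sometimes called early cutoff. A toy sketch of the decision in Python (the stand-in `compile_step` and the `needs_rebuild` cache are purely illustrative, not SCons's actual implementation):

```python
# Sketch of "early cutoff": a downstream step (e.g. the link) reruns
# only when an input's *content* hash changed. Editing just a comment
# produces an identical .o, so no relink is triggered.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def compile_step(source: bytes) -> bytes:
    # Stand-in for a compiler: lines starting with "//" (comments)
    # don't affect the produced object code.
    code = b"\n".join(l for l in source.splitlines()
                      if not l.startswith(b"//"))
    return b"OBJ:" + code

stored = {}  # target name -> content hash from the previous build

def needs_rebuild(target: str, new_content: bytes) -> bool:
    changed = stored.get(target) != digest(new_content)
    stored[target] = digest(new_content)
    return changed

obj = compile_step(b"// v1\nint main(){}")
assert needs_rebuild("app", obj)        # first build: link happens

obj = compile_step(b"// comment edited\nint main(){}")
assert not needs_rebuild("app", obj)    # .o identical: no relink
```

A timestamp-based tool like make sees a fresh mtime on the .o and relinks anyway; comparing content hashes is what lets the rebuild cascade stop early.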
Sep 25 2014
next sibling parent reply "Cliff" <cliff.s.hudson gmail.com> writes:
On Thursday, 25 September 2014 at 18:51:13 UTC, H. S. Teoh via
Digitalmars-d wrote:
 You don't know if
 recompiling after checking out a previous release of your code 
 will
 actually give you the same binaries that you shipped 2 months 
 ago.
To be clear, even if nothing changed, re-running the build may produce different output. This is actually a really hard problem - some build tools actually use entropy when producing their outputs, and as a result running the exact same tool with the same parameters in the same [apparent] environment will produce a subtly different output. This may be intended (address randomization) or semi-unintentional (generating a unique GUID inside a PDB so the debugger can validate the symbols match the binaries.) Virtually no build system in use can guarantee the above in all cases, so you end up making trade-offs - and if you don't really understand those tradeoffs, you won't trust your build system.

What else may mess up the perfection of repeatability of your builds? Environment variables, the registry (on Windows), any source of entropy (the PRNG, the system clock/counters, any network access), etc.

Build engineers themselves don't trust the build tooling because, for as long as we have had the tooling, no one has invested enough into knowing what is trustworthy or how to make it that way. It's like always coding without a typesafe language, but which "gets the job done." Until you've spent some time in the typesafe environment, maybe you can't realize the benefit. You'll say "well, now I have to type a bunch more crap, and in most cases it wouldn't have helped me anyway" right up until you are sitting there at 3AM the night before shipping the product trying to track down why your Javascript program - I mean build process - isn't doing what you thought it did.

Just because you CAN build a massive software system in Javascript doesn't mean the language is per-se good - it may just mean you are sufficiently motivated to suffer through the pain. I'd rather make the whole experience *enjoyable* (hello TypeScript?)
Different people will make different tradeoffs, and I am not here to tell Andrei or Walter that they *need* a new build system for D to get their work done - they don't right now. I'm more interested in figuring out how to provide a platform to realize the benefits for build like we have for our modern languages, and then leveraging that in new ways (like better sharing between the compiler, debugger, IDEs, test and packaging.)
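A toy illustration in Python of the entropy problem described above: a tool that embeds a timestamp (or a GUID) in its output defeats byte-for-byte reproducibility even when the inputs are identical, and pinning that input restores it. The "build" here is just string assembly, purely for illustration; SOURCE_DATE_EPOCH is the real-world convention for pinning timestamps.

```python
# Sketch: the same source "built" twice differs when the tool embeds
# the current time; forcing a fixed timestamp makes the two outputs
# byte-identical again.
import hashlib
import time

def build(source: bytes, timestamp=None) -> bytes:
    # Hypothetical tool that stamps its output with the build time.
    ts = time.time() if timestamp is None else timestamp
    header = ("built-at: %.6f\n" % ts).encode()
    return header + source

src = b"int main(){}"

a = build(src)
time.sleep(0.01)
b = build(src)
assert a != b  # entropy (the clock) leaked into the artifact

# Reproducible variant: the timestamp is pinned (cf. SOURCE_DATE_EPOCH):
a = build(src, timestamp=0)
b = build(src, timestamp=0)
assert hashlib.sha256(a).digest() == hashlib.sha256(b).digest()
```

The same pattern applies to any other entropy source (PRNGs, GUIDs, network data): either pin it or exclude it from the artifact, or no two builds will ever compare equal.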
Sep 25 2014
parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Sep 25, 2014 at 07:19:14PM +0000, Cliff via Digitalmars-d wrote:
 On Thursday, 25 September 2014 at 18:51:13 UTC, H. S. Teoh via
 Digitalmars-d wrote:
You don't know if recompiling after checking out a previous release
of your code will actually give you the same binaries that you
shipped 2 months ago.
To be clear, even if nothing changed, re-running the build may produce different output. This is actually a really hard problem - some build tools actually use entropy when producing their outputs, and as a result running the exact same tool with the same parameters in the same [apparent] environment will produce a subtly different output. This may be intended (address randomization) or semi-unintentional (generating a unique GUID inside a PDB so the debugger can validate the symbols match the binaries.) Virtually no build system in use can guarantee the above in all cases, so you end up making trade-offs - and if you don't really understand those tradeoffs, you won't trust your build system.
Good point.
 What else may mess up the perfection of repeatability of your
 builds?  Environment variables, the registry (on Windows), any
 source of entropy (the PRNG, the system clock/counters, any
 network access), etc.
Well, obviously if your build process involves input from outside, then it's impossible to have 100% repeatable builds. But at least we can do it (and arguably *should* do it) when there are no outside inputs.

I actually have some experience in this area, because part of the website project I described in another post involves running gnuplot to plot statistics of a certain file by connecting to an SVN repository and parsing its history log. Since the SVN repo is not part of the website repo, obviously the build of a previous revision of the website will never be 100% repeatable -- it will always generate the plot for the latest history rather than what it would've looked like at the time of the previous revision. But for the most part, this doesn't matter.

I *did* find that imagemagick was messing up my SCons scripts because it would always insert timestamped metadata into the generated image files, which caused SCons to always see the files as changed and trigger redundant rebuilds. This is also an example of where sometimes you do need to override the build system's default mechanisms to tell it to chill out and not rebuild that target every time. I believe SCons lets you do this, though I solved the problem another way -- by passing options to imagemagick to suppress said metadata.

Nevertheless, I'd say that overall, builds should be reproducible by default, and the user should tell the build system when it doesn't have to be -- rather than the other way round. Just like D's motto of safety first, unsafe if you ask for it.
 Different people will make different tradeoffs, and I am not here
 to tell Andrei or Walter that they *need* a new build system for
 D to get their work done - they don't right now.  I'm more
 interested in figuring out how to provide a platform to realize
 the benefits for build like we have for our modern languages, and
 then leveraging that in new ways (like better sharing between the
 compiler, debugger, IDEs, test and packaging.)
Agreed. I'm not saying we *must* replace the makefiles in dmd / druntime / phobos... I'm speaking more categorically, that build systems in general have advanced beyond the days of make, and it's high time people started learning about them. While you *could* write an entire application in assembly language (I did), times have moved on, and we now have far more suitable tools for the job. T -- "Computer Science is no more about computers than astronomy is about telescopes." -- E.W. Dijkstra
Sep 25 2014
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2014-09-25 20:49, H. S. Teoh via Digitalmars-d wrote:

 The compiler and compile flags are inputs to the build rules in SCons.

 In my SCons projects, when I change compile flags (possibly for a subset
 of source files), it correctly figures out which subset (or the entire
 set) of files needs to be recompiled with the new flags. Make fails, and
 you end up with an inconsistent executable.

 In my SCons projects, when I upgrade the compiler, it recompiles
 everything with the new compiler. Make doesn't detect a difference, and
 if you make a change and recompile, suddenly you got an executable 80%
 compiled with the old compiler and 20% compiled with the new compiler.
 Most of the time it doesn't make a difference... but when it does, have
 fun figuring out where the problem lies. (Or just make clean; make yet
 again... the equivalent of which is basically what SCons would have done
 5 hours ago.)

 In my SCons projects, when I upgrade the system C libraries, it
 recompiles everything that depends on the updated header files *and*
 library files. In make, it often fails to detect that the .so's have
 changed, so it fails to relink your program. Result: your executable
 behaves strangely at runtime due to wrong .so being linked, but the
 problem vanishes once you do a make clean; make.
I see, thanks for the explanation. -- /Jacob Carlborg
Sep 25 2014
prev sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 09/25/2014 02:49 PM, H. S. Teoh via Digitalmars-d wrote:
 Make-heads find the idea of the compiler being part of the input to a
 build rule "strange"; to me, it's common sense.
Yes. This is exactly why (unless it's been reverted or regressed? I only mention that because I haven't looked lately) RDMD counts the compiler itself as a dependency. Because: $ dvm use 2.065.0 $ rdmd stuff.d [compiles] $ dvm use 2.066.0 $ rdmd stuff.d [silently *doesn't* recompile?!? Why is *that* useful?] Is *not* remotely useful behavior, basically makes no sense at all, *and* gives the programmer bad information. ("Oh, it compiles fine on this new version? Great! I'm done here! Wait, why are other people reporting compile errors on the new compiler? It worked for me.")
Oct 09 2014
prev sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 24 Sep 2014 23:14:30 -0700
Walter Bright via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 And I use make -j on posix for parallel builds, it works fine on dmd.
me too. paired with `git clean -dxf` to get "clean of the cleanest possible" fileset. it's good that dmd build times are so low. ;-)
Sep 24 2014
prev sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 24 Sep 2014 21:44:26 -0700
Walter Bright via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Make is the C++ of build systems. It may be ugly, but you can get it
 to work.
'make' reminds me of assembler language: it's possible to do a lot of cool things with 'make', but it's a complicated and tedious process. ah, and it's not the fastest kid on the block, so you'll not gain any speed using 'make'. ;-)
Sep 24 2014
prev sibling parent reply "Cliff" <cliff.s.hudson gmail.com> writes:
 Actually you can't do this for D properly without enlisting the 
 help of the compiler. Scoped import is a very interesting 
 conditional dependency (it is realized only if the template is 
 instantiated).

 Also, lazy opening of imports is almost guaranteed to have a 
 huge good impact on build times.

 Your reply confirms my worst fear: you're looking at yet 
 another general build system, of which there are plenty of 
 carcasses rotting in the drought left and right of highway 101.
This is one of my biggest frustrations with existing "build systems" - which really are nothing more than glorified "make"s with some extra syntax and - for the really advanced ones - ways to help you correctly specify your makefiles by flagging errors or missing dependencies.
 The build system that will be successful for D will cooperate 
 with the compiler, which will give it fine-grained dependency 
 information. Haskell does the same with good results.


 Andrei
The compiler has a ton of precise information useful for build tools, IDEs and other kinds of analysis tools (to this day, it still bugs the crap out of me that Visual Studio has effectively *two* compilers, one for intellisense and one for the command-line, and they do not share the same build environment or share the work they do!)

Build is more than just producing a binary - it incorporates validation through testing, packaging for distribution, deployment and even versioning. I'd like to unlock the data in our tools and find ways to leverage it to improve automation and the whole developer workflow.

Those ideas and principles go beyond D and the compiler of course, but we do have a nice opportunity here because we can work closely with the compiler authors, rather than having to rely *entirely* on OS-level process introspection through e.g. detours (which is still valuable from a pure dependency discovery process of course.) If we came out of this project with "tup-for-D" I'd consider that an abject failure.
Sep 24 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/24/14, 10:14 PM, Cliff wrote:
 This is one of my biggest frustrations with existing "build systems" -
 which really are nothing more than glorified "make"s with some extra
 syntax and - for the really advanced ones - ways to help you correctly
 specify your makefiles by flagging errors or missing dependencies.
It's nice you two are enthusiastic about improving that space. Also, it's a good example of how open source development works. I can't tell you what to do, you guys get to work on whatever strikes your fancy. Have fun! -- Andrei
Sep 24 2014
prev sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 24 Sep 2014 21:15:59 -0700
"H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> wrote:

 needs to write should in theory be simply:
=20
 	Program("mySuperApp", "src/main.d");
=20
 and everything else will be automatically figured out.
ah, that's exactly why i migrated to jam! i got bored of manual dependency control, and jam file scanning works reasonably well (for c and c++, i'm yet to finish the D scanner -- it should understand what package.d is). now i'm just writing something like

  Main myproggy : main.d module0.d module1.d ;

and that's all. or even

  Main myproggy : [ Glob . : *.d : names-only ] ;

and for projects which contain some subdirs, libs and so on it's still easy. i never tried jam on really huge projects, but i can't see why it shouldn't be good, as it supports subdirs without recursion and so on. jam is still heavily file-based and using only timestamps, but that's only 'cause i'm still not motivated enough (read: timestamps work for me).

p.s. i'm talking about my own fork of 'jam' here. it's slightly advanced over original jam.

p.p.s. i patched gdc to emit all libraries mentioned in `pragma(lib, ...)` and my jam understands how to extract and use this information.
Sep 24 2014
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 25/09/14 03:54, H. S. Teoh via Digitalmars-d wrote:

 Well, Cliff & I (and whoever's interested) will see what we can do about
 that. Perhaps in the not-so-distant future we may have a D build tool
 that can serve as the go-to build tool for D projects.
What problems do you see with Dub? -- /Jacob Carlborg
Sep 25 2014
next sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 25 Sep 2014 09:04:43 +0200
Jacob Carlborg via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 What problems do you see with Dub?
i, for myself, see a great problem with Dub: it's not a universal build tool. some of my internal D projects, for example, build C libraries from source (and some of those libraries use yacc, for example, and others use some other generator tools, which are built and execed in the process of building the library), then run some kind of external generators (can't do this with D CTFE, 'cause the compiler eats all of my RAM and then fails), then build D sources, then link it all together. it's complicated as it is, and Dub is not well-suited for such use cases.
Sep 25 2014
prev sibling parent reply "Atila Neves" <atila.neves gmail.com> writes:
On Thursday, 25 September 2014 at 07:04:43 UTC, Jacob Carlborg 
wrote:
 On 25/09/14 03:54, H. S. Teoh via Digitalmars-d wrote:

 Well, Cliff & I (and whoever's interested) will see what we 
 can do about
 that. Perhaps in the not-so-distant future we may have a D 
 build tool
 that can serve as the go-to build tool for D projects.
What problems do you see with Dub?
Here's one: having to manually generate the custom main file for unittest builds. There's no current way (or at least there wasn't when I brought it up in the dub forum) to tell it to autogenerate a D file from a dub package and list it as a dependency of the unittest build. This is trivial in CMake. Instead I have to remember to run my dtest program every time I create a new file with unit tests when it should be done for me. Atila
Sep 25 2014
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 25/09/14 09:38, Atila Neves wrote:

 Here's one: having to manually generate the custom main file for
 unittest builds. There's no current way (or at least there wasn't when I
 brought it up in the dub forum) to tell it to autogenerate a D file from
 a dub package and list it as a dependency of the unittest build.
Hmm, I haven't used Dub to run unit tests. Although, DMD has a "-main" flag that adds an empty main function. -- /Jacob Carlborg
Sep 25 2014
next sibling parent reply "eles" <eles215 gzk.dot> writes:
On Thursday, 25 September 2014 at 13:50:10 UTC, Jacob Carlborg 
wrote:
 On 25/09/14 09:38, Atila Neves wrote:

 Here's one: having to manually generate the custom main file 
 for
 unittest builds. There's no current way (or at least there 
 wasn't when I
 brought it up in the dub forum) to tell it to autogenerate a D 
 file from
 a dub package and list it as a dependency of the unittest 
 build.
Hmm, I haven't used Dub to run unit tests. Although, DMD has a "-main" flag that adds an empty main function.
Andrei spoke about an idiom that they constantly use at Facebook, because apparently nobody there runs *main and unittests*. So they keep a special empty main for the -unittest version.

If the time to clean up D has come, maybe it is time to make that idiom the default behaviour of the flag.

He posted that thing somewhere in the forum, but I cannot find it right now. Real usage is good for learning.
Sep 25 2014
parent reply "eles" <eles215 gzk.dot> writes:
On Thursday, 25 September 2014 at 15:58:11 UTC, eles wrote:
 On Thursday, 25 September 2014 at 13:50:10 UTC, Jacob Carlborg 
 wrote:
 On 25/09/14 09:38, Atila Neves wrote:
Andrei spoke about an idiom that they constantly use at Facebook, because apparently nobody there runs *main and unittests*. So they keep a special empty main for the -unittest version.
This idiom here: http://forum.dlang.org/post/ljr5n7$1leb$1 digitalmars.com

"Last but not least, virtually nobody I know runs unittests and then main. This is quickly becoming an idiom:

version(unittest) void main() {}
else void main() { ... }

I think it's time to change that. We could do it the non-backward-compatible way by redefining -unittest to instruct the compiler to not run main. Or we could define another flag such as -unittest-only and then deprecate the existing one."
Sep 25 2014
parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Sep 25, 2014 at 08:40:50PM +0000, eles via Digitalmars-d wrote:
 On Thursday, 25 September 2014 at 15:58:11 UTC, eles wrote:
On Thursday, 25 September 2014 at 13:50:10 UTC, Jacob Carlborg wrote:
On 25/09/14 09:38, Atila Neves wrote:
Andrei spoke about an idiom that they constantly use at Facebook, because apparently nobody there runs *main and unittests*. So they keep a special empty main for the -unittest version.
This idiom here: http://forum.dlang.org/post/ljr5n7$1leb$1 digitalmars.com "Last but not least, virtually nobody I know runs unittests and then main. This is quickly becoming an idiom: version(unittest) void main() {} else void main() { ... } I think it's time to change that. We could do it the non-backward-compatible way by redefining -unittest to instruct the compiler to not run main. Or we could define another flag such as -unittest-only and then deprecate the existing one.
Please don't deprecate the current -unittest. I regularly find it very useful when in the code-compile-test cycle. I'm OK with introducing -unittest-only for people who want it, but I still like using the current -unittest. T -- Making non-nullable pointers is just plugging one hole in a cheese grater. -- Walter Bright
Sep 25 2014
prev sibling parent reply "Atila Neves" <atila.neves gmail.com> writes:
On Thursday, 25 September 2014 at 13:50:10 UTC, Jacob Carlborg 
wrote:
 On 25/09/14 09:38, Atila Neves wrote:

 Here's one: having to manually generate the custom main file 
 for
 unittest builds. There's no current way (or at least there 
 wasn't when I
 brought it up in the dub forum) to tell it to autogenerate a D 
 file from
 a dub package and list it as a dependency of the unittest 
 build.
Hmm, I haven't used Dub to run unit tests. Although, DMD has a "-main" flag that adds an empty main function.
I don't want an empty main function. I want the main function and the file it's in to be generated by the build system. Atila
Sep 25 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/25/14, 10:26 AM, Atila Neves wrote:
 On Thursday, 25 September 2014 at 13:50:10 UTC, Jacob Carlborg wrote:
 On 25/09/14 09:38, Atila Neves wrote:

 Here's one: having to manually generate the custom main file for
 unittest builds. There's no current way (or at least there wasn't when I
 brought it up in the dub forum) to tell it to autogenerate a D file from
 a dub package and list it as a dependency of the unittest build.
Hmm, I haven't used Dub to run unit tests. Although, DMD has a "-main" flag that adds an empty main function.
I don't want an empty main function. I want the main function and the file it's in to be generated by the build system.
Why would the focus be on the mechanism instead of the needed outcome? -- Andrei
Sep 25 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/25/2014 10:45 AM, Andrei Alexandrescu wrote:
 On 9/25/14, 10:26 AM, Atila Neves wrote:
 On Thursday, 25 September 2014 at 13:50:10 UTC, Jacob Carlborg wrote:
 On 25/09/14 09:38, Atila Neves wrote:

 Here's one: having to manually generate the custom main file for
 unittest builds. There's no current way (or at least there wasn't when I
 brought it up in the dub forum) to tell it to autogenerate a D file from
 a dub package and list it as a dependency of the unittest build.
Hmm, I haven't used Dub to run unit tests. Although, DMD has a "-main" flag that adds an empty main function.
I don't want an empty main function. I want the main function and the file it's in to be generated by the build system.
Why would the focus be on the mechanism instead of the needed outcome? -- Andrei
I've found: -main -unittest -cov to be terribly convenient when developing modules. Should have added -main much sooner.
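For readers following the thread, the invocation being described looks like this (the module name is a placeholder of mine):

```shell
# Hypothetical single-module test run: -main supplies an empty main()
# so the file links on its own, -unittest compiles in the unittest
# blocks, and -cov writes a per-line coverage listing after the run.
dmd -main -unittest -cov mymodule.d
./mymodule
```

Running the binary executes the module's unittest blocks before the (empty) main.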
Sep 25 2014
parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Sep 25, 2014 at 12:51:45PM -0700, Walter Bright via Digitalmars-d wrote:
[...]
 I've found:
 
    -main -unittest -cov
 
 to be terribly convenient when developing modules. Should have added
 -main much sooner.
Yeah, I do that all the time nowadays when locally testing Phobos fixes. In the past I'd have to write yet another empty main() in yet another temporary source file, just to be able to avoid having to wait for the entire Phobos testsuite to run after a 1-line code change. T -- "How are you doing?" "Doing what?"
Sep 25 2014
prev sibling parent reply "Atila Neves" <atila.neves gmail.com> writes:
On Thursday, 25 September 2014 at 17:45:56 UTC, Andrei 
Alexandrescu wrote:
 On 9/25/14, 10:26 AM, Atila Neves wrote:
 On Thursday, 25 September 2014 at 13:50:10 UTC, Jacob Carlborg 
 wrote:
 On 25/09/14 09:38, Atila Neves wrote:

 Here's one: having to manually generate the custom main file 
 for
 unittest builds. There's no current way (or at least there 
 wasn't when I
 brought it up in the dub forum) to tell it to autogenerate a 
 D file from
 a dub package and list it as a dependency of the unittest 
 build.
Hmm, I haven't used Dub to run unit tests. Although, DMD has a "-main" flag that adds an empty main function.
I don't want an empty main function. I want the main function and the file it's in to be generated by the build system.
Why would the focus be on the mechanism instead of the needed outcome? -- Andrei
Because I don't use unittest blocks, I use my own library. The one thing it can't use the compiler for is discovering what files are in a directory, so I need to generate the main function that calls into unit-threaded with a list of compile-time strings. What I need/want is that every time I add a new source file to the project, this file gets regenerated automatically and is a dependency of the unittest build in dub. Atila
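A sketch of the kind of generated file being described -- the module names and the unit-threaded entry point shown here are my assumptions, not taken from the thread:

```d
// Hypothetical auto-generated ut_main.d: a build tool would rewrite the
// module list below whenever a source file is added or removed.
import unit_threaded.runner;

int main(string[] args)
{
    // unit-threaded receives the modules to scan as compile-time strings.
    return args.runTests!(
        "myproj.foo",
        "myproj.bar",
    );
}
```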
Sep 25 2014
parent Jacob Carlborg <doob me.com> writes:
On 2014-09-26 08:51, Atila Neves wrote:

 Because I don't use unittest blocks, I use my own library. The one thing
 it can't use the compiler for is discovering what files are in a
 directory, so I need to generate the main function that calls into
 unit-threaded with a list of compile-time strings. What I need/want is
 that every time I add a new source file to the project, this file gets
 regenerated automatically and is a dependency of the unittest build in dub.
Yeah, this sucks. That's why this is needed: https://github.com/D-Programming-Language/dmd/pull/2271 -- /Jacob Carlborg
Sep 26 2014
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2014-09-25 19:26, Atila Neves wrote:

 I don't want an empty main function. I want the main function and the
 file it's in to be generated by the build system.
What do you want the main function to contain? -- /Jacob Carlborg
Sep 25 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 25/09/14 09:38, Atila Neves wrote:

 Here's one: having to manually generate the custom main file for
 unittest builds. There's no current way (or at least there wasn't when I
 brought it up in the dub forum) to tell it to autogenerate a D file from
 a dub package and list it as a dependency of the unittest build.
BTW, I would say that's a minor issue, far from being enough to create a completely new build system. -- /Jacob Carlborg
Sep 25 2014
parent "Atila Neves" <atila.neves gmail.com> writes:
On Thursday, 25 September 2014 at 13:51:17 UTC, Jacob Carlborg 
wrote:
 On 25/09/14 09:38, Atila Neves wrote:

 Here's one: having to manually generate the custom main file 
 for
 unittest builds. There's no current way (or at least there 
 wasn't when I
 brought it up in the dub forum) to tell it to autogenerate a D 
 file from
 a dub package and list it as a dependency of the unittest 
 build.
BTW, I would say that's a minor issue, far from being enough to create a completely new build system.
I agree. That's why I haven't written a build system yet. However, larger projects need this kind of thing. If I were working on a team of 20+ devs writing D, I'd eventually need it. I'd consider SCons, CMake, etc. but I'd still need dub for package management. Atila
Sep 25 2014
prev sibling parent Shammah Chancellor <email domain.com> writes:
On 2014-09-25 01:54:26 +0000, H. S. Teoh via Digitalmars-d said:

 On Wed, Sep 24, 2014 at 05:37:37PM -0700, Andrei Alexandrescu via 
 Digitalmars-d wrote:
 On 9/24/14, 4:48 PM, H. S. Teoh via Digitalmars-d wrote:
 You're misrepresenting my position.*In spite of their current flaws*,
 modern build systems like SCons and Tup already far exceed make in
 their basic capabilities and reliability.
Fair enough, thanks. Anyhow the point is, to paraphrase Gandhi: "Be the change you want to see in dlang's build system" :o). -- Andrei
Well, Cliff & I (and whoever's interested) will see what we can do about that. Perhaps in the not-so-distant future we may have a D build tool that can serve as the go-to build tool for D projects. T
Please submit PRs for dub, instead of creating a new project. Dub is a nice way of managing library packages already. I'd rather not use two different tools.
Oct 04 2014
prev sibling parent "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Wednesday, 24 September 2014 at 22:49:08 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 On Wed, Sep 24, 2014 at 10:18:29PM +0000, Atila Neves via 
 Digitalmars-d wrote:
 [...]
 Now throw in things like built-in parallelization ala SCons 
 (I'm not
 sure if tup does that too, I suspect it does), 
 100%-reproducible builds,
 auto-packaging, etc., and we might have a contender for Andrei's
 "winner" build system.
Tup does parallels builds: in the company we were using scons, now cmake, tup and, well, dub. Plus an internal one... *sigh*... --- /Paolo
Sep 25 2014
prev sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
25-Sep-2014 01:12, Andrei Alexandrescu пишет:
 On 9/24/14, 1:10 PM, H. S. Teoh via Digitalmars-d wrote:
 That's unfortunate indeed. I wish I could inspire them as to how cool a
 properly-done build system can be.
[snip] That's all nice. However: (1) the truth is there's no clear modern build tool that has "won" over make;
Pretty much any of them is better, and will easily handle the size of phobos/druntime/dmd projects.
 oh there's plenty of them, but each has
 its own quirks that makes it tenuous to use;
Does that somehow indicate that make doesn't have them (and in incredible capacity)? Most of my Scons scripts are under 10 lines, in fact 2-3 LOCs. I had an "insanely" complicated one that could handle 2 platforms, 3 emulators and a few custom build steps (including signing binaries), cleaning, and tracking dependencies with minimal rebuilds. For instance, a binary need not be signed again if it's bit for bit the same. All of the above was done in about 50 lines, never breaking as the project progressed; I think it changed only 2-3 times over a year.

I was never able to make a good makefile that wouldn't shit on me in some way, like not properly cleaning something or handling errors. I'm not even talking about the 10+ LOCs for anything barely useful.
 (2) any build system for a
 project of nontrivial size needs a person/team minding it - never saw
 such a thing as it's just all automated and it works;
The question is the amount of work, the size of the file, and the frequency of changes. For instance, Scons easily lets you avoid changing the build script for every single added module; I can't say if make could pull it off.
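The SCons behaviour described above comes from globbing the sources in the build script; a minimal sketch of such an SConstruct (the 'dmd' tool name and the target name are assumptions of mine):

```python
# Hypothetical SConstruct: adding a new module under src/ needs no edit
# here, because Glob() re-scans the directory on every build.
env = Environment(tools=['dmd', 'link'])
env.Program('app', Glob('src/*.d'))
```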
 (3) especially if
 the build system is not that familiar, the role of the build czar is all
 the more important.
Somebody (Russel?) was working on D Scons support, how about starting to capitalize on this work? I'd gladly try to make a scons script for phobos/druntime if there is even a tiny chance of approval.
 So the reality is quite a bit more complicated than the shiny city on a
 hill you describe.
Most people using make/autotools say build systems are hard. It need not be true, but I'm obviously biased. -- Dmitry Olshansky
Sep 25 2014
parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Sep 25, 2014 at 09:18:26PM +0400, Dmitry Olshansky via Digitalmars-d
wrote:
[...]
 I had an "insanely" complicated one that could handle 2 platforms, 3
 emulators and a few custom build steps (including signing binaries),
 cleaning, and tracking dependencies with minimal rebuilds. For instance,
 a binary need not be signed again if it's bit for bit the same.
 All of the above was done in about 50 lines, never breaking as the
 project progressed; I think it changed only 2-3 times over a year.

 I was never able to make a good makefile that wouldn't shit on me in
 some way, like not properly cleaning something or handling errors.
 I'm not even talking about the 10+ LOCs for anything barely useful.
+1. Jibes with my experience with SCons as well.
 25-Sep-2014 01:12, Andrei Alexandrescu пишет:
(2) any build system for a project of nontrivial size needs a
person/team minding it - never saw such a thing as it's just all
automated and it works;
The question is the amount of work, the size of the file, and the frequency of changes. For instance, Scons easily lets you avoid changing the build script for every single added module; I can't say if make could pull it off.
It can. Except that you can expect things will go horribly wrong if you so much as breathe in the wrong direction. And when you're dealing with a multi-person project (i.e., 99% of non-trivial software projects), expect that people will do stupid things that either breaks the makefile, or they will make breaking changes to the makefile that will screw everything up for everyone else.
(3) especially if the build system is not that familiar, the role of
the build czar is all the more important.
Somebody (Russel?) was working on D Scons support, how about starting to capitalize on this work? I'd gladly try to make a scons script for phobos/druntime if there is even a tiny chance of approval.
D support has been merged into mainline SCons, thanks to Russel's efforts. I've been using his D module for a while now, and find it generally very C/C++-like, which is perfect for projects that need heavy interfacing between C/C++ and D. For D-only projects, it's a bit klunky because it defaults to separate compilation, whereas dmd tends to work better with whole-program compilation (or at least per-module rather than per-source-file compilation).

SCons does hit performance barriers when you get into very large projects (on the magnitude of Mozilla Firefox, or the GTK desktop suite, for example), but for something the size of dmd/druntime/phobos, you won't notice the difference at all. In fact, you'd save a lot of time by not needing to `make clean; make` every single time you do anything non-trivial.
So the reality is quite a bit more complicated than the shiny city on
a hill you describe.
Most people using make/autotools say build systems are hard, it need not to be true but I'm obviously biased.
[...] I'm biased too, but I'm inclined to think that those people say it's hard because it *is* hard when the best you got is make and autotools. :-P T -- Leather is waterproof. Ever see a cow with an umbrella?
Sep 25 2014
prev sibling parent "Atila Neves" <atila.neves gmail.com> writes:
On Wednesday, 24 September 2014 at 20:12:40 UTC, H. S. Teoh via
Digitalmars-d wrote:
 On Wed, Sep 24, 2014 at 07:36:05PM +0000, Cliff via 
 Digitalmars-d wrote:
 On Wednesday, 24 September 2014 at 19:26:46 UTC, Jacob Carlborg
 wrote:
On 2014-09-24 12:16, Walter Bright wrote:

I've never heard of a non-trivial project that didn't have 
constant
breakage of its build system. All kinds of reasons - add a 
file,
forget to add it to the manifest. Change the file contents, 
neglect
to update dependencies. Add new dependencies on some script, 
script
fails to run on one configuration. And on and on.
Again, if changing the file contents breaks the build system you're doing it very, very wrong.
People do it very, very wrong all the time - that's the problem :) Build systems are felt by most developers to be a tax they have to pay to do what they want to do, which is write code and solve non-build-related problems.
That's unfortunate indeed. I wish I could inspire them as to how cool a properly-done build system can be. Automatic parallel building, for example. Fully-reproducible, incremental builds (never ever do `make clean` again). Automatic build + packaging in a single command. Incrementally *updating* packaging in a single command. Automatic dependency discovery. And lots more.

A lot of this technology actually already exists. The problem is that still too many people think "make" whenever they hear "build system". Make is but a poor, antiquated caricature of what modern build systems can do. Worse is that most people are resistant to replacing make because of inertia. (Not realizing that by not throwing out make, they're subjecting themselves to a lifetime of unending, unnecessary suffering.)
 Unfortunately, build engineering is effectively a specialty of 
 its own
 when you step outside the most trivial of systems.  It's 
 really no
 surprise how few people can get it right - most people can't 
 even
 agree on what a build system is supposed to do...
It's that bad, huh?

At its most fundamental level, a build system is really nothing but a dependency management system. You have a directed, acyclic graph of objects that are built from other objects, and a command which takes said other objects as input, and produces the target object(s) as output. The build system takes as input this dependency graph, and runs the associated commands in topological order to produce the product(s). A modern build system can parallelize independent steps automatically. None of this is specific to compiling programs; in fact, it works for any process that takes a set of inputs and incrementally derives intermediate products until the final set of products are produced.

Although the input is the (entire) dependency graph, it's not desirable to specify this graph explicitly (it's far too big in non-trivial projects); so most build systems offer ways of automatically deducing dependencies. Usually this is done by scanning the inputs, and modern build systems would offer ways for the user to define new scanning methods for new input types. One particularly clever system, Tup (http://gittup.org/tup/), uses OS call proxying to discover the *exact* set of inputs and outputs for a given command, including hidden dependencies (like reading a compiler configuration file that may change compiler behaviour) that most people don't even know about.

It's also not desirable to have to derive all products from the original inputs all the time; what hasn't changed shouldn't need to be re-processed (we want incremental builds). So modern build systems implement some way of detecting when a node in the dependency graph has changed, thereby requiring all derived products downstream to be rebuilt. The most unreliable method is to scan for file change timestamps (make). A reliable (but slow) method is to compare file hash checksums. Tup uses OS filesystem change notifications to detect changes, thereby cutting out the scanning overhead, which can be quite large in complex projects (but it may be unreliable if the monitoring daemon isn't running / after rebooting).

These are all just icing on the cake; the fundamental core of a build system is basically dependency graph management. T
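The core idea sketched above -- a build as a topological walk of a dependency DAG -- fits in a few lines of D (a bare sketch; the "command" is reduced to a writeln):

```d
import std.stdio;

// deps maps each target to the inputs it is built from.
void build(string target, string[][string] deps, ref bool[string] done)
{
    if (target in done) return;           // already handled this run
    foreach (dep; deps.get(target, null))
        build(dep, deps, done);           // derive inputs first
    done[target] = true;
    writeln("building ", target);         // the real command would run here
}

void main()
{
    auto deps = ["app": ["a.o", "b.o"], "a.o": ["a.d"], "b.o": ["b.d"]];
    bool[string] done;
    build("app", deps, done);             // inputs print before "app"
}
```

A real tool would add change detection and parallel execution of independent nodes on top of this walk.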
Couldn't have said it better myself. Atila
Sep 24 2014
prev sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 24 Sep 2014 13:10:46 -0700
"H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> wrote:

 The problem is that still too many people think "make"
 whenever they hear "build system".  Make is but a poor, antiquated
 caricature of what modern build systems can do. Worse is that most
 people are resistant to replacing make because of inertia. (Not
 realizing that by not throwing out make, they're subjecting themselves
 to a lifetime of unending, unnecessary suffering.)
+many. 'make' is a PITA even for small personal projects. i myself am using a heavily modified jam as a build system. it's not shiny good either, but it is at least usable without much suffering and small enough to be managed by one man.
Sep 24 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/24/2014 12:26 PM, Jacob Carlborg wrote:
 On 2014-09-24 12:16, Walter Bright wrote:

 I've never heard of a non-trivial project that didn't have constant
 breakage of its build system. All kinds of reasons - add a file, forget
 to add it to the manifest. Change the file contents, neglect to update
 dependencies. Add new dependencies on some script, script fails to run
 on one configuration. And on and on.
Again, if changing the file contents breaks the build system you're doing it very, very wrong.
What matters is that if the autotester imports X's large project, then it must somehow import whatever X's build system is, for better or worse. And I guarantee it will break constantly. Analogously, you and I know how to write code correctly (as does everyone else). But I guarantee that you and I are mistaken about that, and we don't write perfect code, and it breaks. Just like the build systems.
Sep 24 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-09-24 08:57, Walter Bright wrote:

 Heck, the dmd release package build scripts break every single release
 cycle.
Then it's obviously doing something wrong. -- /Jacob Carlborg
Sep 24 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/24/2014 11:42 AM, Jacob Carlborg wrote:
 On 2014-09-24 08:57, Walter Bright wrote:

 Heck, the dmd release package build scripts break every single release
 cycle.
Then it's obviously doing something wrong.
See my reply to Vladimir.
Sep 24 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/21/2014 3:16 PM, H. S. Teoh via Digitalmars-d wrote:
 On Sun, Sep 21, 2014 at 08:49:38AM +0000, via Digitalmars-d wrote:
 On Sunday, 21 September 2014 at 00:07:36 UTC, Vladimir Panteleev wrote:
 On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote:
 What do you think are the worst parts of D?
The regressions! https://issues.dlang.org/buglist.cgi?bug_severity=regression&list_id=106988&resolution=--- I filed over half of those...
I guess you found them using your own code base? Maybe it would make sense to add one or more larger projects to the autotester, in addition to the unit tests. They don't necessarily need to be blocking, just a notice "hey, your PR broke this and that project" would surely be helpful to detect the breakages early on.
This has been suggested before. The problem is resources. If you're willing to donate equipment for running these tests, it would be greatly appreciated, I believe.
No, that's not the problem. The problem is what to do when the "larger project" fails.

Currently, it is the submitter's job to adjust the test suite, fix phobos code, whatever is necessary to get the suite running again. Sometimes, in the more convoluted Phobos code, this can be a real challenge.

Now suppose instead that the failure is somewhere in a large project which our poor submitter knows absolutely nothing about. You're asking him to go in, understand this large project, determine if it's a problem with his submission or a problem with the large project, and fix it. At some level, WE then become the maintainers of that large project.

This is completely unworkable.
Sep 23 2014
next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Wednesday, 24 September 2014 at 03:59:10 UTC, Walter Bright 
wrote:
 This is completely unworkable.
Mister, please stop hurting the poor straw man. Let me quote the relevant part:
 They don't necessarily need to be blocking, just a notice 
 "hey, your PR broke this and that project" would surely be 
 helpful to detect the breakages early on.
I think that aside from the technical limitations, that's completely reasonable, and does not put any undue obligation on anyone.
Sep 23 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/23/2014 9:08 PM, Vladimir Panteleev wrote:
 On Wednesday, 24 September 2014 at 03:59:10 UTC, Walter Bright wrote:
 This is completely unworkable.
Mister, please stop hurting the poor straw man. Let me quote the relevant part:
 They don't necessarily need to be blocking, just a notice "hey, your PR
 broke this and that project" would surely be helpful to detect the breakages
 early on.
I think that aside from the technical limitations, that's completely reasonable, and does not put any undue obligation on anyone.
Who is going to maintain the autotester version of these projects? What I'd like to see is the autotester regularly build release packages out of HEAD. Then, large project maintainers can create their own scripts to download the latest compiler and attempt to build their project.
Sep 23 2014
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Wednesday, 24 September 2014 at 04:36:33 UTC, Walter Bright 
wrote:
 On 9/23/2014 9:08 PM, Vladimir Panteleev wrote:
 On Wednesday, 24 September 2014 at 03:59:10 UTC, Walter Bright 
 wrote:
 This is completely unworkable.
Mister, please stop hurting the pool straw man. Let me quote the relevant part:
 They don't necessarily need to be blocking, just a notice 
 "hey, your PR
 broke this and that project" would surely be helpful to 
 detect the breakages
 early on.
I think that aside from the technical limitations, that's completely reasonable, and does not put any undue obligation on anyone.
Who is going to maintain the autotester version of these projects? What I'd like to see is the autotester regularly build release packages out of HEAD. Then, large project maintainers can create their own scripts to download the latest compiler and attempt to build their project.
We've been talking about this since last year's DConf. I don't know if it's ever going to happen, but now there is Digger, which solves most of the same problem.

Nevertheless, this is not enough. It must be automatic - it must verify the state of things daily, without human intervention. It's unreasonable (borderline absurd, even) to expect every large project maintainer to manually verify if their project still works every day.

It doesn't need to run on the same hardware as the autotester. It can run on the project maintainers' servers or home computers. But it must be easy to set up, and it should notify both the project owners and the pull request authors.
Sep 24 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/24/2014 3:00 AM, Vladimir Panteleev wrote:
 Nevertheless, this is not enough. It must be automatic - it must verify
 the state of things daily, without human intervention. It's unreasonable
 (borderline absurd, even) to expect every large project maintainer to
 manually verify if their project still works every day.

 It doesn't need to run on the same hardware as the autotester. It can
 run on the project maintainers' servers or home computers. But it must
 be easy to set up, and it should notify both the project owners and the
 pull request authors.
If it is something opted in or out by the project maintainers, and is not part of the autotester, and runs on the project maintainer's system, I'm for it.
Sep 24 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 24/09/14 05:59, Walter Bright wrote:

 No, that's not the problem. The problem is what to do when the "larger
 project" fails.

 Currently, it is the submitter's job to adjust the test suite, fix
 phobos code, whatever is necessary to get the suite running again.
 Sometimes, in the more convoluted Phobos code, this can be a real
 challenge.

 Now replace that with somewhere in a large project, which our poor
 submitter knows absolutely nothing about, it fails. You're asking him to
 go in, understand this large project, determine if it's a problem with
 his submission or a problem with the large project, and fix it.
If it worked before and now it doesn't, then it sounds like a regression to me.
 At some level, then WE become the maintainers of that large project.

 This is completely unworkable.
The author of the library could at least get a notification. -- /Jacob Carlborg
Sep 23 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/23/2014 11:27 PM, Jacob Carlborg wrote:
 If it worked before and now it doesn't, then it sounds like a regression to me.
It could be an "accepts invalid" bug was fixed. It could be that we wanted to make a breaking change. It could be that it never actually worked, it just silently failed. I can say from experience that when you're presented with an unfamiliar tangle of template code (why do people write that stuff :-) ) it ain't at all easy discerning what kind of issue it actually is.
Sep 24 2014
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote:
 There was a recent video[1] by Jonathan Blow about what he 
 would want in a programming language designed specifically for 
 game development. Go, Rust, and D were mentioned and his reason 
 for not wanting to use D is is that it is "too much like C++" 
 although he does not really go into it much and it was a very 
 small part of the video it still brings up some questions.

 What I am curious is what are the worst parts of D? What sort 
 of things would be done differently if we could start over or 
 if we were designing a D3? I am not asking to try and bash D 
 but because it is helpful to know what's bad as well as good.

 I will start off...
 GC by default is a big sore point that everyone brings up
 "is" expressions are pretty wonky
 Libraries could definitely be split up better

 What do you think are the worst parts of D?

 [1] https://www.youtube.com/watch?v=TH9VCN6UkyQ
1. Accidental complexity. 2. Introducing hacks to solve issues instead of going for a clean solution, because it reduces complexity in the short run (and creates 1. in the long run).
Sep 20 2014
prev sibling next sibling parent reply "ponce" <contact gam3sfrommars.fr> writes:
On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote:
 What do you think are the worst parts of D?
Proper D code is supposed to have lots of attributes (pure const nothrow nogc) that bring little and make it look bad.
Sep 21 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/21/14, 1:27 AM, ponce wrote:
 On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote:
 What do you think are the worst parts of D?
Proper D code is supposed to have lots of attributes (pure const nothrow nogc) that bring little and make it look bad.
No because deduction. -- Andrei
Sep 21 2014
parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 21 September 2014 15:54, Andrei Alexandrescu via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 9/21/14, 1:27 AM, ponce wrote:
 On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote:
 What do you think are the worst parts of D?
Proper D code is supposed to have lots of attributes (pure const nothrow nogc) that bring little and make it look bad.
No because deduction. -- Andrei
Agreed. The time when you want to use these attributes explicitly is when you want to enforce nogc, pure ... As it turns out, it is a good idea to enforce these from the start, rather than after you've written your program. Iain
Sep 22 2014
prev sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sun, Sep 21, 2014 at 08:27:56AM +0000, ponce via Digitalmars-d wrote:
 On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote:
What do you think are the worst parts of D?
Proper D code is supposed to have lots of attributes (pure const nothrow nogc) that bring little and make it look bad.
To be fair, though, hindsight is always 20/20. Had we known earlier that we would have these attributes, they would've been the default to begin with, and you'd have to explicitly ask for impure / mutable / throwing / withgc.

But on the positive side, the compiler will automatically infer attributes for template functions, and lately I've been tempted to write templated functions by default just to get the attribute inference bonus, even if it's just translating fun(a,b,c) to fun()(a,b,c). The call site never has to change, and the function is never instantiated more than once -- and you get the added bonus that if the function is never actually called, then it doesn't even appear in the executable.

Attribute inference is the way to go, IMO. Research has shown that people generally don't bother with writing properly-attributed declarations -- it's too tedious and easily overlooked. Having "nicer" attributes be the default helps somewhat, but attribute inference is ultimately what might actually stand a chance of solving this problem. T -- "Holy war is an oxymoron." -- Lazarus Long
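The fun(a,b,c) to fun()(a,b,c) trick mentioned above, as a minimal sketch (the function names are mine):

```d
// The empty template parameter list turns `add` into a template, so the
// compiler infers pure/nothrow/@safe/@nogc from the body instead of
// requiring them to be written out.
int add()(int a, int b) { return a + b; }

// This strictly-attributed caller compiles only because the inference
// on `add` succeeded; callers never have to change how they call it.
int twice(int x) pure nothrow @safe @nogc { return add(x, x); }
```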
Sep 21 2014
parent reply "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Sunday, 21 September 2014 at 22:41:04 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 with, and you'd have to explicitly ask for impure / mutable
"Impure" should be on parameters so you can do dataflow in the presence of FFI. So you basically need better mechanisms. But then you have to analyze the needs first (e.g. the desirable semantics that currently are avoided) All of then. Adding one hack after the other is not the best approach.
Sep 21 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sun, Sep 21, 2014 at 10:57:34PM +0000, Ola Fosheim Grostad via Digitalmars-d
wrote:
 On Sunday, 21 September 2014 at 22:41:04 UTC, H. S. Teoh via Digitalmars-d
 wrote:
with, and you'd have to explicitly ask for impure / mutable
"Impure" should be on parameters so you can do dataflow in the presence of FFI. So you basically need better mechanisms. But then you have to analyze the needs first (e.g. the desirable semantics that currently are avoided) All of then. Adding one hack after the other is not the best approach.
[...] Eventually the real solution is automatic inference, with the user using explicit attributes at the top-level where they are desired, and the compiler will take care of the rest. T -- Creativity is not an excuse for sloppiness.
Sep 21 2014
parent "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Sunday, 21 September 2014 at 23:04:22 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 Eventually the real solution is automatic inference, with the 
 user
 using explicit attributes at the top-level where they are 
 desired, and
 the compiler will take care of the rest.
Yes, but you need a model of the semantics you are looking for.
Sep 21 2014
prev sibling next sibling parent "Kagamin" <spam here.lot> writes:
On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote:
 [1] https://www.youtube.com/watch?v=TH9VCN6UkyQ
The worst part is programmers unable to express their ideas in written form.
Sep 21 2014
prev sibling next sibling parent "Dicebot" <public dicebot.lv> writes:
On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote:
 There was a recent video[1] by Jonathan Blow about what he 
 would want in a programming language designed specifically for 
 game development. Go, Rust, and D were mentioned and his reason 
 for not wanting to use D is is that it is "too much like C++" 
 although he does not really go into it much and it was a very 
 small part of the video it still brings up some questions.

 What I am curious is what are the worst parts of D? What sort 
 of things would be done differently if we could start over or 
 if we were designing a D3? I am not asking to try and bash D 
 but because it is helpful to know what's bad as well as good.

 I will start off...
 GC by default is a big sore point that everyone brings up
 "is" expressions are pretty wonky
 Libraries could definitely be split up better

 What do you think are the worst parts of D?

 [1] https://www.youtube.com/watch?v=TH9VCN6UkyQ
There are so many it is hard to choose worst offenders.
Sep 21 2014
prev sibling next sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 20 September 2014 22:39, Tofu Ninja via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 There was a recent video[1] by Jonathan Blow about what he would want in a
 programming language designed specifically for game development. Go, Rust,
 and D were mentioned and his reason for not wanting to use D is is that it
 is "too much like C++" although he does not really go into it much and it
 was a very small part of the video it still brings up some questions.

 What I am curious is what are the worst parts of D? What sort of things
 would be done differently if we could start over or if we were designing a
 D3? I am not asking to try and bash D but because it is helpful to know
 what's bad as well as good.

 I will start off...
 GC by default is a big sore point that everyone brings up
 "is" expressions are pretty wonky
 Libraries could definitely be split up better

 What do you think are the worst parts of D?

 [1] https://www.youtube.com/watch?v=TH9VCN6UkyQ
Personally, after years of use, my focus on things that really annoy
me has shifted away from problems with the language, and firmly
towards basic practicality and productivity concerns. I'm for
addressing things that bother the hell out of me every single day. I
should by all reason be more productive in D, but after 6 years of
experience, I find I definitely remain less productive, thanks mostly
to tooling and infrastructure.

1. Constant rejection of improvements because "OMG breaking change!".
Meanwhile, D has been breaking my code on practically every release
for years. I don't get this: changes that are deliberately breaking
but would make significant improvements get rejected, yet breaking
changes are allowed anyway because they are bug fixes? If the release
breaks code, then accept that fact and make some real proper breaking
changes that make D substantially better! It is my opinion that D
adopters don't adopt D because it's perfect just how it is and they
don't want it to improve with time, they adopt D *because they want
it to improve with time*! That implies an acceptance (even a
welcoming) of breaking changes.

2. Tooling is still insufficient. I use Visual Studio, and while
VisualD is good, it's not great. Like almost all tooling projects,
there is only one contributor, and I think this trend presents huge
friction to adoption. Tooling is always factored outside of the D
community and their perceived realm of responsibility. I'd like to
see tooling taken into the core community and issues/bugs treated
just as seriously as issues in the compiler/language itself.

3. Debugging is barely ever considered important. I'd love to see a
concerted focus on making the debug experience excellent. Iain had a
go at GDB, and I understand there is great improvement there. Sadly,
we recently lost the developer of Mago (a Windows debugger). There's
lots of work we could do here, and I think it's of gigantic impact.

4. 'ref' drives me absolutely insane. It seems so trivial, but 6
years later, I still can't pass an rvalue->ref (been discussed
endlessly) or create a ref local, and the separation from the type
system makes it a nightmare in generic code. This was a nuisance for
me on day-1, and has been grinding me down endlessly for years. It
has now far eclipsed my grudges with the GC/RC, or literally anything
else about the language, on account of frequency of occurrence:
almost daily.
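The rvalue restriction being complained about here can be shown in a few lines. This is an editorial sketch, not code from the post; the function name `scale` is made up:

```d
// Sketch of the rvalue-to-ref restriction (hypothetical names).
void scale(ref double x) { x *= 2; }

void main()
{
    double d = 1.5;
    scale(d);          // OK: d is an lvalue
    // scale(d + 1.0); // Error: cannot pass rvalue of type double
    //                 //        to parameter `ref double x`
    double tmp = d + 1.0;  // workaround: name a temporary...
    scale(tmp);            // ...then pass it by ref
    assert(d == 3.0 && tmp == 8.0);
}
```

In generic code the workaround multiplies, since every forwarded argument may or may not be an lvalue, which is the "nightmare" referred to above.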
Sep 23 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/23/2014 11:28 PM, Manu via Digitalmars-d wrote:
 1. Constant rejection of improvements because "OMG breaking change!".
 Meanwhile, D has been breaking my code on practically every release
 for years. I don't get this, reject changes that are deliberately
 breaking changes which would make significant improvements, but allow
 breaking changes anyway because they are bug fixes? If the release
 breaks code, then accept that fact and make some real proper breaking
 changes that make D substantially better! It is my opinion that D
 adopters don't adopt D because it's perfect just how it is and they
 don't want it to improve with time, they adopt D *because they want it
 to improve with time*! That implies an acceptance (even a welcoming)
 of breaking changes.
What change in particular?
 2. Tooling is still insufficient. I use Visual Studio, and while
 VisualD is good, it's not great. Like almost all tooling projects,
 there is only one contributor, and I think this trend presents huge
 friction to adoption. Tooling is always factored outside of the D
 community and their perceived realm of responsibility. I'd like to see
 tooling taken into the core community and issues/bugs treated just as
 seriously as issues in the compiler/language itself.

 3. Debugging is barely ever considered important. I'd love to see a
 concerted focus on making the debug experience excellent. Iain had a
 go at GDB, I understand there is great improvement there. Sadly, we
 recently lost the developer of Mago (a Windows debugger). There's lots
 of work we could do here, and I think it's of gigantic impact.
There are 23 issues tagged with 'symdeb': https://issues.dlang.org/buglist.cgi?keywords=symdeb&list_id=109316&resolution=--- If there are more untagged ones, please tag them.
 4. 'ref' drives me absolutely insane. It seems so trivial, but 6 years
 later, I still can't pass an rvalue->ref (been discussed endlessly),
 create a ref local, and the separation from the type system makes it a
 nightmare in generic code. This was a nuisance for me on day-1, and
 has been grinding me down endlessly for years. It has now far eclipsed
 my grudges with the GC/RC, or literally anything else about the
 language on account of frequency of occurrence; almost daily.
I have to ask why all your code revolves about this one thing?
Sep 24 2014
next sibling parent reply "Don" <x nospam.com> writes:
On Wednesday, 24 September 2014 at 07:43:49 UTC, Walter Bright 
wrote:
 On 9/23/2014 11:28 PM, Manu via Digitalmars-d wrote:
 1. Constant rejection of improvements because "OMG breaking 
 change!".
 Meanwhile, D has been breaking my code on practically every 
 release
 for years. I don't get this, reject changes that are 
 deliberately
 breaking changes which would make significant improvements, 
 but allow
 breaking changes anyway because they are bug fixes? If the 
 release
 breaks code, then accept that fact and make some real proper 
 breaking
 changes that make D substantially better! It is my opinion 
 that D
 adopters don't adopt D because it's perfect just how it is and 
 they
 don't want it to improve with time, they adopt D *because they 
 want it
 to improve with time*! That implies an acceptance (even a 
 welcoming)
 of breaking changes.
paranoid fear of breaking backwards compatibility. I said that in my
2013 talk. It is still true today.

Sociomantic says, PLEASE BREAK OUR CODE! Get rid of the old design
bugs while we still can.

For example: We agreed *years* ago to remove the NCEG operators. Why
haven't they been removed yet?

As I said earlier in the year, one of the biggest ever breaking
changes was the fix for array stomping, but it wasn't even recognized
as a breaking change! Breaking changes happen all the time, and the
ones that break noisily are really not a problem.

"Most D code is yet to be written."
 What change in particular?
I've got a nasty feeling that you misread what he wrote. Every time
we say, "breaking changes are good", you seem to hear "breaking
changes are bad"!

The existing D corporate users are still sympathetic to breaking
changes. We are giving the language an extraordinary opportunity. And
it's incredibly frustrating to watch that opportunity being wasted
due to paranoia.

We are holding the door open. But we can't hold it open forever; the
more corporate users we get, the harder it becomes.

Break our code TODAY.

"Most D code is yet to be written."
Sep 24 2014
next sibling parent "Sean Kelly" <sean invisibleduck.org> writes:
On Wednesday, 24 September 2014 at 14:56:11 UTC, Don wrote:

 paranoid fear of breaking backwards compatibility. I said that 
 in my 2013 talk. It is still true today.

 Sociomantic says, PLEASE BREAK OUR CODE! Get rid of the old 
 design bugs while we still can.

 For example: We agreed *years* ago to remove the NCEG 
 operators. Why haven't they been removed yet?
Yep. So long as the messaging is clear surrounding breaking changes,
I'm all for it. In favor of it, in fact, if it makes the language
better in the long term. Dealing with breakages between compiler
releases simply isn't a problem if the issues are known beforehand,
particularly if the old compiler can build the new style code.

D isn't unique in this respect anyway. The C++ compiler our build
team uses at work is sufficiently old that it can't compile certain
parts of Boost, for example. We've been pushing to get a newer
compiler in place, and that comes hand in hand with code changes. But
we *want* the breaking change because it actually solves problems we
have with the current compiler iteration. Similarly, I want breaking
changes in D if they solve problems I have with the way things
currently work.

I kind of feel like D is stuck in prototype mode. As in, you demo a
prototype, people like what they see and say "ship it", and rather
than take the time to make the real thing you *do* ship it and then
immediately start working on new features, forever sticking yourself
with all the warts you'd glossed over during the demo. In this case
we were prototyping new features to ourselves, but the same idea
applies.
Sep 24 2014
prev sibling next sibling parent "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Wednesday, 24 September 2014 at 14:56:11 UTC, Don wrote:

 paranoid fear of breaking backwards compatibility. I said that 
 in my 2013 talk. It is still true today.

 Sociomantic says, PLEASE BREAK OUR CODE! Get rid of the old 
 design bugs while we still can.

 For example: We agreed *years* ago to remove the NCEG 
 operators. Why haven't they been removed yet?

 As I said earlier in the year, one of the biggest ever breaking 
 changes was the fix for array stomping, but it wasn't even 
 recognized as a breaking change!
 Breaking changes happen all the time, and the ones that break 
 noisily are really not a problem.

 "Most D code is yet to be written."

 What change in particular?
I've got a nasty feeling that you misread what he wrote. Every time
we say, "breaking changes are good", you seem to hear "breaking
changes are bad"!

The existing D corporate users are still sympathetic to breaking
changes. We are giving the language an extraordinary opportunity. And
it's incredibly frustrating to watch that opportunity being wasted
due to paranoia.

We are holding the door open. But we can't hold it open forever; the
more corporate users we get, the harder it becomes.

Break our code TODAY.

"Most D code is yet to be written."
As the CTO of a company whose main selling products are "powered by
D", I totally agree with Don: break our code TODAY.

We are aiding ALS-impaired people to have a better life with D
products, so I can tell that we definitely care about SW quality...
But a planned breaking change, one that improves the language
overall, will always be welcomed here in SR Labs.

---
/Paolo
Sep 24 2014
prev sibling next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Don:


 paranoid fear of breaking backwards compatibility. I said that 
 in my 2013 talk. It is still true today.

 Sociomantic says, PLEASE BREAK OUR CODE! Get rid of the old 
 design bugs while we still can.
I keep a large amount of working D2 code, mostly for scientific-like
usages (plus other code outside work, like almost one thousand
Rosettacode programs; a lot of lines of very carefully written good
code). Most of such code is divided in many small programs. I have
plenty of unittests and I also use contract programming.

A planned breaking change, if it gives clear and nice error messages,
allows me to fix the whole code base in a limited amount of time (1
hour, or 2 hours, or a little more), and often it's easy work that I
can do late in the day when I am too tired to do more intelligent
work anyway. Compared to the work to understand the problems, invent
the solutions, and write working code, the amount of brain time &
work to keep code updated is not significant for me.

So I suggest to deprecate the built-in sort, deprecate the other
things that have been waiting for it for some time, and do the other
language updates that you think are good.

Bye,
bearophile
Sep 24 2014
prev sibling next sibling parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 24 Sep 2014 14:56:10 +0000
Don via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 "Most D code is yet to be written."
and it will be written in a language with heavy legacy. it's the same
thing as with c++ interop: pleasing imaginary future users at the
expense of current users.

even small changes that either break something or add a more
sane/more consistent way to do some small thing without breaking the
old way have virtually no chance to get into mainline. see, for
example, function attributes. neither the patch that allows to use
'@' in front of "pure" and "nothrow" nor the patch that allows to
omit "@" for "@safe", "@trusted" and so on was "blessed". they were
destroyed almost immediately: "it's not hard to type that '@'",
"there is nothing wrong in such inconsistent syntax", "newcomers will
be confused by having two syntaxes" (as if they are not confused now,
failing to understand why some of the attributes require "@", and
some can't be used with "@"!).

or the 'const' function attribute, which, i believe, should be
forbidden as a prefix attribute. i.e. 'const A foo ()' should be a
compilation error. or having no way to cancel "final:" and "static:"
(this annoys me virtually each time i'm writing some complex
structs/classes). and so on.

i'd say "change this while we can!" but no, imaginary future users
will be dissatisfied.
Sep 24 2014
parent reply "eles" <eles215 gzk.dot> writes:
On Wednesday, 24 September 2014 at 21:53:34 UTC, ketmar via 
Digitalmars-d wrote:
 On Wed, 24 Sep 2014 14:56:10 +0000
 Don via Digitalmars-d <digitalmars-d puremagic.com> wrote:
 almost immediately: "it's not hard to type that '@'",
Actually, on the French keyboard, it is. The '\' too.
Sep 24 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 24 Sep 2014 21:59:08 +0000
eles via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 almost immediately: "it's not hard to type that '@'",
Actually, on the French keyboard, it is. The '\' too.
and i'm for adding more "@"... sorry to all French people. ;-)
Sep 24 2014
prev sibling next sibling parent Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 25 September 2014 00:56, Don via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Wednesday, 24 September 2014 at 07:43:49 UTC, Walter Bright wrote:
 On 9/23/2014 11:28 PM, Manu via Digitalmars-d wrote:
 1. Constant rejection of improvements because "OMG breaking change!".
 Meanwhile, D has been breaking my code on practically every release
 for years. I don't get this, reject changes that are deliberately
 breaking changes which would make significant improvements, but allow
 breaking changes anyway because they are bug fixes? If the release
 breaks code, then accept that fact and make some real proper breaking
 changes that make D substantially better! It is my opinion that D
 adopters don't adopt D because it's perfect just how it is and they
 don't want it to improve with time, they adopt D *because they want it
 to improve with time*! That implies an acceptance (even a welcoming)
 of breaking changes.
fear of breaking backwards compatibility. I said that in my 2013
talk. It is still true today.

Sociomantic says, PLEASE BREAK OUR CODE! Get rid of the old design
bugs while we still can.

For example: We agreed *years* ago to remove the NCEG operators. Why
haven't they been removed yet?

As I said earlier in the year, one of the biggest ever breaking
changes was the fix for array stomping, but it wasn't even recognized
as a breaking change! Breaking changes happen all the time, and the
ones that break noisily are really not a problem.

"Most D code is yet to be written."
 What change in particular?
I've got a nasty feeling that you misread what he wrote. Every time
we say, "breaking changes are good", you seem to hear "breaking
changes are bad"!

The existing D corporate users are still sympathetic to breaking
changes. We are giving the language an extraordinary opportunity. And
it's incredibly frustrating to watch that opportunity being wasted
due to paranoia.

We are holding the door open. But we can't hold it open forever; the
more corporate users we get, the harder it becomes.

Break our code TODAY.

"Most D code is yet to be written."
Oh good, I'm glad you're reading! :) This was our unanimous feeling at Remedy too. I think all D users want to see the language become clean, tidy and uniform.
Sep 24 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/24/2014 7:56 AM, Don wrote:
 For example: We agreed *years* ago to remove the NCEG operators. Why haven't
 they been removed yet?
They do generate a warning if compiled with -w.
 What change in particular?
I've got a nasty feeling that you misread what he wrote. Every time we say, "breaking changes are good", you seem to hear "breaking changes are bad"!
It would be helpful to have a list of what breaking changes you had in mind.
Sep 24 2014
next sibling parent reply "Don" <x nospam.com> writes:
On Thursday, 25 September 2014 at 00:52:25 UTC, Walter Bright 
wrote:
 On 9/24/2014 7:56 AM, Don wrote:
 For example: We agreed *years* ago to remove the NCEG 
 operators. Why haven't
 they been removed yet?
They do generate a warning if compiled with -w.
They should be gone completely. So should built-in sort.

I think there's something important that you haven't grasped yet. It
was something I didn't really appreciate before working here.

  ** Keeping deprecated features alive is expensive. **

As long as deprecated features still exist, they impose a cost. Not
just on the language maintainers, but also on the users. On anyone
writing a language parser - so, for example, on text editors. On
anyone training new employees. And there's a little cognitive burden
on all language users.
 What change in particular?
I've got a nasty feeling that you misread what he wrote. Every time we say, "breaking changes are good", you seem to hear "breaking changes are bad"!
It would be helpful having a list of what breaking changes you had in mind.
C-style declarations. Builtin sort and reverse. NCEG operators.
Built-in complex types. float.min. @property. Basically, anything
where it has been decided that it should be removed. Some of these
things have been hanging around for six years.

I'd also like to see us getting rid of those warts like
assert(float.nan) being true. And adding a '@' in front of pure,
nothrow.

Ask yourself, if D had no users other than you, so that you could
break *anything*, what would you remove? Make a wishlist, and then
find out what's possible. Remember, when you did that before, we
successfully got rid of 'bit', and there was practically no
complaint.

Any breaking change where it fails to compile, and where there's an
essentially mechanical solution, is really not a problem. Subtle
changes in semantics are the ones that are disastrous.

We want to pay the one-off cost of fixing our code, so that we can
get the long-term return-on-investment of a cleaner language.
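The assert(float.nan) wart mentioned above comes from D's float-to-bool conversion: any value that compares unequal to zero, including NaN, converts to true. A minimal editorial sketch (not from the post):

```d
void main()
{
    float f;            // floats default-initialize to NaN in D
    assert(f != f);     // NaN compares unequal to itself...
    assert(float.nan);  // ...yet converts to true, so this passes
    float z = 0.0f;
    assert(!z);         // only values equal to zero convert to false
}
```

So an assert on an uninitialized (NaN) float silently succeeds, which is exactly the opposite of what a sanity check should do.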
Sep 25 2014
next sibling parent Johannes Pfau <nospam example.com> writes:
Am Thu, 25 Sep 2014 11:08:23 +0000
schrieb "Don" <x nospam.com>:

 They should be gone completely. So should built-in sort.
 I think there's something important that you haven't grasped yet. 
 It was something I didn't really appreciate before working here.
 
   ** Keeping deprecated features alive is expensive. **
 
 As long as deprecated features still exist, they impose a cost. 
 Not just on the language maintainers, but also on the users. On 
 anyone writing a language parser - so for example on text 
 editors. On anyone training new employees.
This reminds me of a conversation on stackoverflow: http://stackoverflow.com/a/25864699
 However, note that the built-in .sort property/function is on its way
 to being deprecated. Please use std.algorithm.sort instead, which
 shouldn't have this issue.
 I was aiming for std.algorithm.sort, I didn't realize there is a
 built-in one...
Sep 25 2014
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/25/14, 4:08 AM, Don wrote:
 C-style declarations. Builtin sort and reverse. NCEG operators. Built-in
 complex types. float.min.  property.
That's a good list I agree with. FWIW I'm glad no random name changes
are on it. I've recently used Rust a bit, and the curse of D users as
of 6-7 years ago reached me: most code I download online doesn't
compile for obscure reasons, it's nigh impossible to figure out what
the fix is from the compiler error message, searching online finds
outdated documentation that tells me the code should work, and often
it's random name changes (from_iterator to from_iter and such, or
names moved from one namespace to another).

For the stuff we eliminate we should provide good error messages that
recommend the respective idiomatic solutions.


Andrei
Sep 25 2014
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/25/2014 6:49 AM, Andrei Alexandrescu wrote:
 FWIW I'm glad no random name changes. I've
 recently used Rust a bit and the curse of D users as of 6-7 years ago reached
 me: most code I download online doesn't compile for obscure reasons, it's nigh
 impossible to figure out what the fix is from the compiler error message,
 searching online finds outdated documentation that tells me the code should
 work, and often it's random name changes (from_iterator to from_iter and such,
 or names are moved from one namespace to another).
The name changes cause much disruption and are ultimately pointless changes.
 For the stuff we eliminate we should provide good error messages that recommend
 the respective idiomatic solutions.
That's normal practice already.
Sep 25 2014
prev sibling parent reply "Brian Rogoff" <brogoff gmail.com> writes:
On Thursday, 25 September 2014 at 13:49:00 UTC, Andrei 
Alexandrescu wrote:
 I've recently used Rust a bit and the curse of D users as of 
 6-7 years ago reached me: most code I download online doesn't 
 compile for obscure reasons, it's nigh impossible to figure out 
 what the fix is from the compiler error message, searching 
 online finds outdated documentation that tells me the code 
 should work, and often it's random name changes (from_iterator 
 to from_iter and such, or names are moved from one namespace to 
 another).
That's more than a bit unfair. Rust's developers have made it
abundantly clear that things will keep changing until version 1.0. If
you want to play with some Rust that's guaranteed to work, go to
http://www.rust-ci.org, find a bit of code that interests you which
isn't failing, and then download the nightly. The docs on the Rust
home page are either for a fixed version (0.11.0) or the nightly.

Let's wait a bit of time after 1.0 is out before you critique the
annoying changes; they deliberately are developing in the open to get
community input and avoid getting stuck with too many mistakes
(though it looks like they are stuck with C++ template syntax, ugh!).
So far, I haven't found it too hard to update code, and they've been
good at marking breaking changes as breaking changes, which can be
searched for with git.

In the case of D, the main D2 book was published 4 years ago, and
that should correspond to Rust 1.0 or even later, since D already had
a D1 to shake out the mistakes and bad namings. That's gone
perfectly, with no code breakage between releases during those four
years, right?
Sep 25 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/25/14, 4:07 PM, Brian Rogoff wrote:
 On Thursday, 25 September 2014 at 13:49:00 UTC, Andrei Alexandrescu wrote:
 I've recently used Rust a bit and the curse of D users as of 6-7 years
 ago reached me: most code I download online doesn't compile for
 obscure reasons, it's nigh impossible to figure out what the fix is
 from the compiler error message, searching online finds outdated
 documentation that tells me the code should work, and often it's
 random name changes (from_iterator to from_iter and such, or names are
 moved from one namespace to another).
That's more than a bit unfair.
This is not about fairness. -- Andrei
Sep 25 2014
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/25/2014 4:08 AM, Don wrote:
 [...]
I agree with Andrei, it's a good list. I'll move these issues to the next step in the removal process.
 I'd also like to see us getting rid of those warts like assert(float.nan) being
 true.
See discussion: https://issues.dlang.org/show_bug.cgi?id=13489
 Ask yourself, if D had no users other than you, so that you break *anything*,
 what would you remove? Make a wishlist, and then find out what's possible.
 Remember, when you did that before, we successfully got rid of 'bit', and there
 was practically no complaint.
Top of my list would be the auto-decoding behavior of
std.array.front() on character arrays. Every time I'm faced with that
I want to throw a chair through the window.

Probably second would be having const and purity by default.
Sep 25 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Sep 25, 2014 at 12:40:28PM -0700, Walter Bright via Digitalmars-d wrote:
 On 9/25/2014 4:08 AM, Don wrote:
[...]
Ask yourself, if D had no users other than you, so that you break
*anything*, what would you remove? Make a wishlist, and then find out
what's possible.  Remember, when you did that before, we successfully
got rid of 'bit', and there was practically no complaint.
Top of my list would be the auto-decoding behavior of std.array.front() on character arrays. Every time I'm faced with that I want to throw a chair through the window.
LOL... now I'm genuinely curious what's Andrei's comment on this. :-P Last I heard, Andrei was against removing autodecoding.
 Probably second would be having const and purity by default.
Some of this could be mitigated if we expanded the sphere of
attribute inference. I know people hated the idea of auto == infer
attributes, but I personally support it.

Perhaps an alternate route to that is to introduce a @auto (or
whatever you wanna call it, @infer, or whatever) and promote its use
in D code, then slowly phase out functions not marked with @infer.
After a certain point, @infer will become the default, explicit
@infer's will become no-ops, and then subsequently be dropped. This
is very ugly, though. I much prefer extending auto to mean infer.


T

-- 
The peace of mind---from knowing that viruses which exploit Microsoft
system vulnerabilities cannot touch Linux---is priceless. --
Frustrated system administrator.
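For context on the "sphere of attribute inference" above: D already infers attributes for template functions, and the proposal is to extend that to ordinary functions. A minimal editorial sketch (example names are made up, not from the post):

```d
// Template functions get their attributes inferred by the compiler.
int square(T)(T x) { return x * x; }  // inferred pure nothrow @safe @nogc

void main() @safe pure nothrow @nogc
{
    // Callable here only because the compiler inferred square's
    // attributes; a non-template function would need them spelled out.
    assert(square(3) == 9);
}
```

The debate is whether that inference should become the default everywhere, with explicit attributes kept only at top-level APIs.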
Sep 25 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/25/2014 12:58 PM, H. S. Teoh via Digitalmars-d wrote:
 On Thu, Sep 25, 2014 at 12:40:28PM -0700, Walter Bright via Digitalmars-d
wrote:
 On 9/25/2014 4:08 AM, Don wrote:
[...]
 Ask yourself, if D had no users other than you, so that you break
 *anything*, what would you remove? Make a wishlist, and then find out
 what's possible.  Remember, when you did that before, we successfully
 got rid of 'bit', and there was practically no complaint.
Top of my list would be the auto-decoding behavior of std.array.front() on character arrays. Every time I'm faced with that I want to throw a chair through the window.
LOL... now I'm genuinely curious what's Andrei's comment on this. :-P Last I heard, Andrei was against removing autodecoding.
I have yet to completely convince Andrei that autodecoding is a bad idea :-(
Sep 25 2014
parent reply "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Thursday, 25 September 2014 at 21:03:53 UTC, Walter Bright 
wrote:
 On 9/25/2014 12:58 PM, H. S. Teoh via Digitalmars-d wrote:
 On Thu, Sep 25, 2014 at 12:40:28PM -0700, Walter Bright via 
 Digitalmars-d wrote:
 On 9/25/2014 4:08 AM, Don wrote:
[...]
 Ask yourself, if D had no users other than you, so that you 
 break
 *anything*, what would you remove? Make a wishlist, and then 
 find out
 what's possible.  Remember, when you did that before, we 
 successfully
 got rid of 'bit', and there was practically no complaint.
Top of my list would be the auto-decoding behavior of std.array.front() on character arrays. Every time I'm faced with that I want to throw a chair through the window.
LOL... now I'm genuinely curious what's Andrei's comment on this. :-P Last I heard, Andrei was against removing autodecoding.
I have yet to completely convince Andrei that autodecoding is a bad idea :-(
I think it should just refuse to work on char[], wchar[] and dchar[]. Instead, byCodeUnit, byCodePoint (which already exist) would be required. This way, users would need to make a conscious decision, and there would be no surprises and no negative performance impact.
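The distinction being proposed can be sketched in a few lines (an editorial sketch; std.utf.byCodeUnit existed at the time per this thread, though details of the returned range are simplified):

```d
import std.range.primitives : front;
import std.utf : byCodeUnit;

void main()
{
    string s = "héllo";
    // Autodecoding: front decodes UTF-8 and yields a dchar code point.
    static assert(is(typeof(s.front) == dchar));
    // byCodeUnit: iterate raw UTF-8 code units, no decoding cost.
    assert(s.byCodeUnit.front == 'h');
    assert(s.length == 6); // "é" occupies two code units
}
```

Making the choice explicit is the point: neither decoding nor raw iteration happens by accident.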
Sep 25 2014
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Sep 25, 2014 at 09:19:29PM +0000, via Digitalmars-d wrote:
 On Thursday, 25 September 2014 at 21:03:53 UTC, Walter Bright wrote:
[...]
I have yet to completely convince Andrei that autodecoding is a bad
idea :-(
It certainly represents a runtime overhead, which may be
non-negligible depending on your particular use case, and despite the
oft-stated benefits of being Unicode-aware by default (well, not
100%), I think the cost of maintaining special-casing for narrow
strings is starting to show up in the uglification of Phobos
range-based code.

Ranges were supposed to help with writing cleaner, more generic code,
but I've been observing a trend of special-casing in order to reduce
the autodecoding overhead in string handling, which somewhat reduces
the cleanness of otherwise fully-generic code. This complexity has
led to string-related bugs, and definitely has a cost on the
readability (and, consequently, maintainability) of Phobos code.
Special cases hurt generic code. It's not just about performance.
 I think it should just refuse to work on char[], wchar[] and dchar[].
 Instead, byCodeUnit, byCodePoint (which already exist) would be
 required.  This way, users would need to make a conscious decision,
 and there would be no surprises and no negative performance impact.
Not a bad idea. If we do it right, we could (mostly) avoid user
outrage. E.g., start with a "soft deprecation" (a compile-time
message, but not an actual warning, to the effect that byCodeUnit /
byCodePoint should be used with strings from now on), then a warning,
then an actual deprecation, then remove autodecoding code from Phobos
algorithms (leaving only byCodePoint for those who still want
autodecoding).


T

-- 
Two wrongs don't make a right; but three rights do make a left...
Sep 25 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/25/2014 2:47 PM, H. S. Teoh via Digitalmars-d wrote:
 Not a bad idea. If we do it right, we could (mostly) avoid user outrage.
 E.g., start with a "soft deprecation" (a compile-time message, but not
 an actual warning, to the effect that byCodeUnit / byCodePoint should be
 used with strings from now on), then a warning, then an actual
 deprecation, then remove autodecoding code from Phobos algorithms
 (leaving only byCodePoint for those who still want autodecoding).
Consider this PR: https://github.com/D-Programming-Language/phobos/pull/2423 which is blocked because several people do not agree with using byCodeUnit.
Sep 25 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/25/2014 8:11 PM, Walter Bright wrote:
 Consider this PR:

 https://github.com/D-Programming-Language/phobos/pull/2423

 which is blocked because several people do not agree with using byCodeUnit.
I should add that this impasse has COMPLETELY stalled changes to Phobos to remove dependency on the GC.
Sep 25 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/25/14, 8:17 PM, Walter Bright wrote:
 On 9/25/2014 8:11 PM, Walter Bright wrote:
 Consider this PR:

 https://github.com/D-Programming-Language/phobos/pull/2423

 which is blocked because several people do not agree with using
 byCodeUnit.
I should add that this impasse has COMPLETELY stalled changes to Phobos to remove dependency on the GC.
I think the way to break that stalemate is to add RC strings and reference counted exceptions. -- Andrei
Sep 25 2014
next sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Sep 25, 2014 at 08:44:23PM -0700, Andrei Alexandrescu via Digitalmars-d
wrote:
 On 9/25/14, 8:17 PM, Walter Bright wrote:
On 9/25/2014 8:11 PM, Walter Bright wrote:
Consider this PR:

https://github.com/D-Programming-Language/phobos/pull/2423

which is blocked because several people do not agree with using
byCodeUnit.
I should add that this impasse has COMPLETELY stalled changes to Phobos to remove dependency on the GC.
I think the way to break that stalemate is to add RC strings and reference counted exceptions. -- Andrei
But it does nothing to bring us closer to a decision about the autodecoding issue. T -- Heads I win, tails you lose.
Sep 25 2014
prev sibling parent "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Friday, 26 September 2014 at 03:44:23 UTC, Andrei Alexandrescu 
wrote:
 I think the way to break that stalemate is to add RC strings 
 and reference counted exceptions. -- Andrei
I don't want gc, exceptions or rc strings. You really need to make sure rc is optional throughout. RC inc/dec abort read-transactions. That's a bad long-term strategy.
Sep 25 2014
prev sibling parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Walter Bright"  wrote in message news:m02lt5$2hg4$1 digitalmars.com...

 I should add that this impasse has COMPLETELY stalled changes to Phobos to 
 remove dependency on the GC.
Maybe it would be more successful if it didn't try to do both at once.
Sep 25 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/25/2014 9:12 PM, Daniel Murphy wrote:
 "Walter Bright"  wrote in message news:m02lt5$2hg4$1 digitalmars.com...

 I should add that this impasse has COMPLETELY stalled changes to Phobos to
 remove dependency on the GC.
Maybe it would be more successful if it didn't try to do both at once.
What would the 3rd version of setExtension be named, then?
Sep 25 2014
next sibling parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Walter Bright"  wrote in message news:m02qcm$2mmn$1 digitalmars.com...

 On 9/25/2014 9:12 PM, Daniel Murphy wrote:
 "Walter Bright"  wrote in message news:m02lt5$2hg4$1 digitalmars.com...

 I should add that this impasse has COMPLETELY stalled changes to Phobos
to
 remove dependency on the GC.
Maybe it would be more successful if it didn't try to do both at once.
What would the 3rd version of setExtension be named, then?
setExtension. Making up new clever names for functions that do the same thing with different types is a burden for the users.
Sep 25 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/25/2014 9:38 PM, Daniel Murphy wrote:
 "Walter Bright"  wrote in message news:m02qcm$2mmn$1 digitalmars.com...
 What would the 3rd version of setExtension be named, then?
setExtension. Making up new clever names for functions that do the same thing with different types is a burden for the users.
The behavior would be different, although the types are the same.
Sep 26 2014
prev sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Sep 25, 2014 at 09:34:30PM -0700, Walter Bright via Digitalmars-d wrote:
 On 9/25/2014 9:12 PM, Daniel Murphy wrote:
"Walter Bright"  wrote in message news:m02lt5$2hg4$1 digitalmars.com...

I should add that this impasse has COMPLETELY stalled changes to
Phobos to remove dependency on the GC.
Maybe it would be more successful if it didn't try to do both at once.
What would the 3rd version of setExtension be named, then?
I think that PR, and the others slated to follow it, should just merge with autodecoding in conformance to the rest of Phobos right now, independently of however the decision on the autodecoding issue will turn out. If we do decide eventually to get rid of autodecoding, we're gonna have to rewrite much of Phobos anyway, so it's not as though merging PRs now is going to make it significantly worse. I don't see why the entire burden of deciding upon autodecoding should rest upon a PR that merely introduces a new string function. T -- Unix was not designed to stop people from doing stupid things, because that would also stop them from doing clever things. -- Doug Gwyn
Sep 25 2014
prev sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Sep 25, 2014 at 08:11:02PM -0700, Walter Bright via Digitalmars-d wrote:
 On 9/25/2014 2:47 PM, H. S. Teoh via Digitalmars-d wrote:
Not a bad idea. If we do it right, we could (mostly) avoid user
outrage.  E.g., start with a "soft deprecation" (a compile-time
message, but not an actual warning, to the effect that byCodeUnit /
byCodePoint should be used with strings from now on), then a warning,
then an actual deprecation, then remove autodecoding code from Phobos
algorithms (leaving only byCodePoint for those who still want
autodecoding).
Consider this PR: https://github.com/D-Programming-Language/phobos/pull/2423 which is blocked because several people do not agree with using byCodeUnit.
Actually, several committers have already agreed that this particular PR shouldn't be blocked pending the decision whether to autodecode or not. What's actually blocking it right now, is that it calls stripExtension, which only works with arrays, not general ranges. (Speaking of which, thanks for reminding me that I need to work on that. :-P) T -- The fact that anyone still uses AOL shows that even the presence of options doesn't stop some people from picking the pessimal one. - Mike Ellis
Sep 25 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/25/2014 9:04 PM, H. S. Teoh via Digitalmars-d wrote:
 On Thu, Sep 25, 2014 at 08:11:02PM -0700, Walter Bright via Digitalmars-d
wrote:
 Consider this PR:

 https://github.com/D-Programming-Language/phobos/pull/2423

 which is blocked because several people do not agree with using
 byCodeUnit.
Actually, several committers have already agreed that this particular PR shouldn't be blocked pending the decision whether to autodecode or not. What's actually blocking it right now, is that it calls stripExtension, which only works with arrays, not general ranges. (Speaking of which, thanks for reminding me that I need to work on that. :-P)
It's blocked because of the autodecode issue. Don't want to need a THIRD setExtension algorithm because this one isn't done correctly.
Sep 25 2014
prev sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Thursday, 25 September 2014 at 21:49:43 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 It's not just about performance.
Something I recently realized: because of auto-decoding, std.algorithm.find("foo", 'o') cannot be implemented using memchr. I think this points to a huge design fail, performance-wise. There are also subtle correctness problems: haystack[0..haystack.countUntil(needle)] is wrong, even if it works right on ASCII input. For once I agree with Walter Bright - regarding the chair throwing :)
Sep 25 2014
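The haystack[0..haystack.countUntil(needle)] pitfall is the mismatch between code-point counts (what an autodecoding countUntil returns) and code-unit indices (what slicing a char[] uses). A rough Python sketch, using bytes to stand in for char[] code units (illustrative only, not D):

```python
# In D, an autodecoding countUntil counts *code points*, while slicing
# a char[] indexes *code units*; mixing the two cuts characters in half.
# Modelled here with a Python str (code points) and bytes (code units).
haystack = "\u00e1b"              # "áb": 'á' occupies 2 UTF-8 code units
units = haystack.encode("utf-8")  # the char[]-like view: b"\xc3\xa1b"

cp_index = [c for c in haystack].index("b")  # what countUntil returns
assert cp_index == 1

# using the code-point count to slice the code-unit array loses half of 'á'
assert units[:cp_index] == b"\xc3"

# a memchr-style byte search returns the correct code-unit index instead
unit_index = units.index(ord("b"))
assert unit_index == 2
assert units[:unit_index] == "\u00e1".encode("utf-8")
```

The code works "right" on ASCII only because there the two indexing schemes happen to coincide.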
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Sep 26, 2014 at 04:05:18AM +0000, Vladimir Panteleev via Digitalmars-d
wrote:
 On Thursday, 25 September 2014 at 21:49:43 UTC, H. S. Teoh via Digitalmars-d
 wrote:
It's not just about performance.
Something I recently realized: because of auto-decoding, std.algorithm.find("foo", 'o') cannot be implemented using memchr. I think this points to a huge design fail, performance-wise.
Well, if you really want to talk performance, we've already failed. Any string operation that starts from a narrow string and ends with a narrow string (of the same width) will incur the overhead of decoding / reencoding *every single character*, even if it's mostly redundant. What bugs me even more is the fact that every single Phobos algorithm that might conceivably deal with characters has to be special-cased for narrow string in order to be performant. That's a mighty high price to pay for what's a relatively small benefit -- note that autodecoding does *not* guarantee Unicode correctness, even if, according to the argument of some, it helps. So we're paying a high price in terms of performance and code maintainability in Phobos, for the dubious benefit of only partial Unicode conformance.
 There are also subtle correctness problems:
 haystack[0..haystack.countUntil(needle)] is wrong, even if it works
 right on ASCII input.
 
 For once I agree with Walter Bright - regarding the chair throwing :)
Not to mention that autodecoding *still* doesn't fix the following problem: assert("á".canFind("á")); // fails (Note: you may need to save this message verbatim and edit it into a D source file to see this effect; cut-n-paste on some systems may erase the effect.) And the only way to fix this would be so prohibitively expensive, I don't think even Andrei would agree to it. :-P So basically, we're paying (1) lower performance, (2) non-random access for strings, (3) subtle distinction between index and count and other such gotchas, and (4) tons of special-cased Phobos code with the associated maintenance costs, all for incomplete Unicode correctness. Doesn't seem like the benefit measures up to the cost. :-( T -- We've all heard that a million monkeys banging on a million typewriters will eventually reproduce the entire works of Shakespeare. Now, thanks to the Internet, we know this is not true. -- Robert Wilensk
Sep 25 2014
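The "á".canFind("á") failure above is a normalization mismatch: the two strings spell the same character with different code-point sequences, so a code-point-level comparison (which is all autodecoding provides) still fails. A small Python sketch, since Python's str is likewise a code-point sequence:

```python
import unicodedata

# Two valid spellings of "á": precomposed U+00E1 (NFC), and
# 'a' plus combining acute U+0301 (NFD). A code-point-level search --
# which is all autodecoding buys -- still fails to match them.
nfc = "\u00e1"   # á as one code point
nfd = "a\u0301"  # á as two code points

assert nfc != nfd  # a naive find/canFind sees no match
assert unicodedata.normalize("NFC", nfd) == nfc  # normalization reconciles them
```

Full Unicode correctness would require normalization (or grapheme-level comparison), which is the "prohibitively expensive" fix alluded to above.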
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Friday, 26 September 2014 at 04:37:06 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 So basically, we're paying (1) lower performance, (2)  
 non-random access
 for strings, (3) subtle distinction between index and count and 
 other
 such gotchas, and (4) tons of special-cased Phobos code with the
 associated maintenance costs, all for incomplete Unicode 
 correctness.
 Doesn't seem like the benefit measures up to the cost. :-(
Yep. When I use algorithms on strings in D, I always cast them to ubyte[]. Which is a poor solution.
Sep 26 2014
next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Sean Kelly:

 When I use algorithms on strings in D, I always cast them to 
 ubyte[].  Which is a poor solution.
In Phobos we have "representation" and "assumeUTF", which are better than naked casts. I use them only sparingly (and I avoid cast), even though I use strings often. Bye, bearophile
Sep 26 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/26/2014 6:55 AM, Sean Kelly wrote:
 On Friday, 26 September 2014 at 04:37:06 UTC, H. S. Teoh via Digitalmars-d
wrote:
 So basically, we're paying (1) lower performance, (2) non-random access
 for strings, (3) subtle distinction between index and count and other
 such gotchas, and (4) tons of special-cased Phobos code with the
 associated maintenance costs, all for incomplete Unicode correctness.
 Doesn't seem like the benefit measures up to the cost. :-(
Yep. When I use algorithms on strings in D, I always cast them to ubyte[]. Which is a poor solution.
Now you can use the adapters .byCodeUnit or .byChar instead.
Sep 26 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/25/14, 9:05 PM, Vladimir Panteleev wrote:
 On Thursday, 25 September 2014 at 21:49:43 UTC, H. S. Teoh via
 Digitalmars-d wrote:
 It's not just about performance.
Something I recently realized: because of auto-decoding, std.algorithm.find("foo", 'o') cannot be implemented using memchr.
Why not? -- Andrei
Sep 25 2014
parent "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Friday, 26 September 2014 at 05:02:08 UTC, Andrei Alexandrescu 
wrote:
 On 9/25/14, 9:05 PM, Vladimir Panteleev wrote:
 On Thursday, 25 September 2014 at 21:49:43 UTC, H. S. Teoh via
 Digitalmars-d wrote:
 It's not just about performance.
Something I recently realized: because of auto-decoding, std.algorithm.find("foo", 'o') cannot be implemented using memchr.
Why not? -- Andrei
Sorry, got mixed up. countUntil can't, find can (but because one's counting code points, they can't be implemented on top of each other, even for arrays).
Sep 25 2014
prev sibling parent Marco Leise <Marco.Leise gmx.de> writes:
On Thu, 25 Sep 2014 21:19:29 +0000, "Marc Schütz" <schuetzm gmx.net> wrote:

 I think it should just refuse to work on char[], wchar[] and
 dchar[]. Instead, byCodeUnit, byCodePoint (which already exist)
 would be required. This way, users would need to make a conscious
 decision, and there would be no surprises and no negative
 performance impact.
Hey, that was what I proposed for Rust over a year ago: https://github.com/rust-lang/rust/issues/7043#issuecomment-19187984 -- Marco
Oct 04 2014
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 25/09/14 13:08, Don wrote:

 C-style declarations. Builtin sort and reverse. NCEG operators. Built-in
complex types. float.min. @property.
Let me add: base class protection. It's deprecated but not completely removed. I have never seen base class protection being used in practice. -- /Jacob Carlborg
Sep 25 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/25/2014 4:08 AM, Don wrote:
 I'd also like to see us getting rid of those warts like assert(float.nan) being
 true.
https://issues.dlang.org/show_bug.cgi?id=13489 It has some serious issues with it - I suspect it'll cause uglier problems than it fixes.
 And adding a @ in front of pure, nothrow.
https://issues.dlang.org/show_bug.cgi?id=13388 It has generated considerable discussion.
Oct 08 2014
parent "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Thursday, 9 October 2014 at 00:30:53 UTC, Walter Bright wrote:
 On 9/25/2014 4:08 AM, Don wrote:

 And adding a @ in front of pure, nothrow.
https://issues.dlang.org/show_bug.cgi?id=13388 It has generated considerable discussion.
Please break the language, now. --- /Paolo
Oct 08 2014
prev sibling parent "ixid" <nuaccount gmail.com> writes:
On Thursday, 25 September 2014 at 00:52:25 UTC, Walter Bright 
wrote:
 On 9/24/2014 7:56 AM, Don wrote:
 For example: We agreed *years* ago to remove the NCEG 
 operators. Why haven't
 they been removed yet?
They do generate a warning if compiled with -w.
 What change in particular?
I've got a nasty feeling that you misread what he wrote. Every time we say, "breaking changes are good", you seem to hear "breaking changes are bad"!
It would be helpful having a list of what breaking changes you had in mind.
Perhaps it would be a good idea to have a themed update? Currently you and Andrei are busy with the C++ changes; when that is settling down, maybe the community could focus on a house-cleaning update. With clear update themes (which does not preclude the usual mixture of things also going on), people can air issues and get them yea'd or nay'd clearly. Combining the previously discussed auto-update code tool with a set of breaking changes would make sense.
Oct 06 2014
prev sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 24 September 2014 17:43, Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 9/23/2014 11:28 PM, Manu via Digitalmars-d wrote:
 1. Constant rejection of improvements because "OMG breaking change!".
 Meanwhile, D has been breaking my code on practically every release
 for years. I don't get this, reject changes that are deliberately
 breaking changes which would make significant improvements, but allow
 breaking changes anyway because they are bug fixes? If the release
 breaks code, then accept that fact and make some real proper breaking
 changes that make D substantially better! It is my opinion that D
 adopters don't adopt D because it's perfect just how it is and they
 don't want it to improve with time, they adopt D *because they want it
 to improve with time*! That implies an acceptance (even a welcoming)
 of breaking changes.
What change in particular?
The instances this has been used as a defence are innumerable. Perhaps the one that's closest to me was that time when I spent months arguing final-by-default, won unanimous support of the community, you approved it, patch written, LGTM, merged, then Andrei appeared with "When did this happen? I never would have agreed to this! Revert it immediately!" (which apparently he has the authority to do, but that's a separate matter). He later said that "if it were that way from the start, I agree it should have been that way". It was reverted because it was a breaking change, despite the established fact that with virtual-by-default, any effort to optimise a library is also a breaking change and a major version increment. Most D code is yet to be written. We reject a single breaking change now, and by doing so, commit to endless breaking changes in the future. Of course, there are many more. Things like C-style arrays should be removed, @property looks like it will never be finished/fixed. I expect changes to 'ref' will probably receive this same defence. I'm sure everybody here can add extensively to this list. Phobos is full of things that should be tidied up.
 2. Tooling is still insufficient. I use Visual Studio, and while
 VisualD is good, it's not great. Like almost all tooling projects,
 there is only one contributor, and I think this trend presents huge
 friction to adoption. Tooling is always factored outside of the D
 community and their perceived realm of responsibility. I'd like to see
 tooling taken into the core community and issues/bugs treated just as
 seriously as issues in the compiler/language itself.

 3. Debugging is barely ever considered important. I'd love to see a
 concerted focus on making the debug experience excellent. Iain had a
 go at GDB, I understand there is great improvement there. Sadly, we
 recently lost the developer of Mago (a Windows debugger). There's lots
 of work we could do here, and I think it's of gigantic impact.
There are 23 issues tagged with 'symdeb': https://issues.dlang.org/buglist.cgi?keywords=symdeb&list_id=109316&resolution=--- If there are more untagged ones, please tag them.
The fact there's only 23 doesn't really mean anything; they're all major usability problems. I feel like I'm back in the early 90's when trying to iterate on my D code. These issues have proven to be the most likely to send my professional friends/colleagues running upon initial contact with D. Here's some more: https://issues.dlang.org/show_bug.cgi?id=12899 https://issues.dlang.org/show_bug.cgi?id=13198 https://issues.dlang.org/show_bug.cgi?id=13213 https://issues.dlang.org/show_bug.cgi?id=13227 https://issues.dlang.org/show_bug.cgi?id=13243 https://issues.dlang.org/show_bug.cgi?id=11541 https://issues.dlang.org/show_bug.cgi?id=11549 https://issues.dlang.org/show_bug.cgi?id=11902 **** MASSIVE NUISANCE https://issues.dlang.org/show_bug.cgi?id=12163 **** MASSIVE NUISANCE https://issues.dlang.org/show_bug.cgi?id=12244 **** MASSIVE NUISANCE The last 3 make debugging of anything but the simplest D code practically impossible/pointless. Aside from that though, this somewhat leads back to my second point, which is that symdeb issues in the compiler aren't enough. It needs to be taken holistically. Cooperation between the compiler and tooling devs needs to be actively engaged to fix many of these issues.
 4. 'ref' drives me absolutely insane. It seems so trivial, but 6 years
 later, I still can't pass an rvalue->ref (been discussed endlessly),
 create a ref local, and the separation from the type system makes it a
 nightmare in generic code. This was a nuisance for me on day-1, and
 has been grinding me down endlessly for years. It has now far eclipsed
 my grudges with the GC/RC, or literally anything else about the
 language on account of frequency of occurrence; almost daily.
I have to ask why all your code revolves about this one thing?
I suspect it's because I rely on far more C++ interop than the average D user. I have half a decade of solid experience with D<->C++ interop, perhaps more than anyone else here? It's not 'all my code', but a sufficient quantity that it pops up and bites me almost every day, particularly when I try and write any meta.
Sep 24 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/24/2014 2:44 PM, Manu via Digitalmars-d wrote:
 The fact there's only 23 doesn't really mean anything, they're all
 major usability problems.
 I feel like I'm back in the early 90's when trying to iterate on my D code.
 These issues have proven to be the most likely to send my professional
 friends/colleagues running upon initial contact with D.

 Here's some more:

 https://issues.dlang.org/show_bug.cgi?id=12899
 https://issues.dlang.org/show_bug.cgi?id=13198
 https://issues.dlang.org/show_bug.cgi?id=13213
 https://issues.dlang.org/show_bug.cgi?id=13227
 https://issues.dlang.org/show_bug.cgi?id=13243
 https://issues.dlang.org/show_bug.cgi?id=11541
 https://issues.dlang.org/show_bug.cgi?id=11549
 https://issues.dlang.org/show_bug.cgi?id=11902 **** MASSIVE NUISANCE
 https://issues.dlang.org/show_bug.cgi?id=12163 **** MASSIVE NUISANCE
 https://issues.dlang.org/show_bug.cgi?id=12244 **** MASSIVE NUISANCE
Thanks for tagging them.
 The last 3 make debugging of anything but the simplest D code
 practically impossible/pointless.


 Aside from that though, this somewhat leads back to my second point,
 which is that symdeb issues in the compiler aren't enough. It needs to
 be taken holistically.
 Cooperation between the compiler and tooling devs needs to be actively
 engaged to fix many of these issues.
I'm sorry, but this is awfully vague and contains nothing actionable.
 I suspect it's because I rely on far more C++ interop than the average
 D user. I have half a decade of solid experience with D<->C++ interop,
 perhaps more than anyone else here?
 It's not 'all my code', but a sufficient quantity that it pops up and
 bites me almost every day, particularly when I try and write any meta.
I still don't understand what use case is it that pops up every day. What are you trying to do? Why doesn't auto ref work?
Sep 24 2014
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 25 September 2014 11:01, Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 9/24/2014 2:44 PM, Manu via Digitalmars-d wrote:
 The fact there's only 23 doesn't really mean anything, they're all
 major usability problems.
 I feel like I'm back in the early 90's when trying to iterate on my D
 code.
 These issues have proven to be the most likely to send my professional
 friends/colleagues running upon initial contact with D.

 Here's some more:

 https://issues.dlang.org/show_bug.cgi?id=12899
 https://issues.dlang.org/show_bug.cgi?id=13198
 https://issues.dlang.org/show_bug.cgi?id=13213
 https://issues.dlang.org/show_bug.cgi?id=13227
 https://issues.dlang.org/show_bug.cgi?id=13243
 https://issues.dlang.org/show_bug.cgi?id=11541
 https://issues.dlang.org/show_bug.cgi?id=11549
 https://issues.dlang.org/show_bug.cgi?id=11902 **** MASSIVE NUISANCE
 https://issues.dlang.org/show_bug.cgi?id=12163 **** MASSIVE NUISANCE
 https://issues.dlang.org/show_bug.cgi?id=12244 **** MASSIVE NUISANCE
Thanks for tagging them.
 The last 3 make debugging of anything but the simplest D code
 practically impossible/pointless.


 Aside from that though, this somewhat leads back to my second point,
 which is that symdeb issues in the compiler aren't enough. It needs to
 be taken holistically.
 Cooperation between the compiler and tooling devs needs to be actively
 engaged to fix many of these issues.
I'm sorry, but this is awfully vague and contains nothing actionable.
The action I'd love to see would be "Yes, debugging is important, we should add it at a high priority on the roadmap and encourage the language community to work with the tooling community to make sure the experience is polished" ;) I recognise that is probably unrealistic, because it seems so few people even use symbolic debuggers at all that we have very few people interested in working on it. I don't really have a practical solution, but the post topic is 'the worst parts of D', and for me, this definitely rates very high :) An excellent action would be to implement proper scoping in the debug info; that would fix cursor movement while stepping, and differently scoped locals with the same names causing havoc. And also classes not working. Those are some big tickets from the language side responsible for the most trouble.
 I suspect it's because I rely on far more C++ interop than the average
 D user. I have half a decade of solid experience with D<->C++ interop,
 perhaps more than anyone else here?
 It's not 'all my code', but a sufficient quantity that it pops up and
 bites me almost every day, particularly when I try and write any meta.
I still don't understand what use case is it that pops up every day. What are you trying to do? Why doesn't auto ref work?
auto ref makes a reference out of int and float. There are all manner of primitive types and very small structs that shouldn't be ref, and auto ref can't possibly know. auto ref has never to date produced the semantics that I have wanted. If ref is part of the type, it flows through templates neatly; it is also possible to use app-specific logic to decide if something should be ref or not, and then give that argument without static if and duplication of entire functions, or text mixin.
Sep 24 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/24/2014 7:50 PM, Manu via Digitalmars-d wrote:
 I'm sorry, but this is awfully vague and contains nothing actionable.
The action I'd love to see would be "Yes, debugging is important, we should add it at a high priority on the roadmap and encourage the language community to work with the tooling community to make sure the experience is polished" ;)
I make similar statements all the time. It doesn't result in action on anyone's part. I don't tell people what to do - they work on aspects of D that interest them. Even people who ask me what to work on never follow my suggestions. They work on whatever floats their boat. It's my biggest challenge working on free software :-)
 I recognise that is probably unrealistic, because it seems so few
 people even use symbolic debuggers at all, that we have very few
 people interested in working on it.
You kinda put your finger on what the real issue is. Note that I find gdb well nigh unusable even for C++ code, so to me an unusable debugger is pretty normal and I don't think much about it. :-) It doesn't impair my debugging sessions much. I've also found that the more high level abstractions are used, the less useful a symbolic debugger is. Symbolic debuggers are only good for pedestrian, low level code that ironically is also what other methods are very good at, too.
 I don't really have a practical solution, but the post topic is 'the
 worst parts of D', and for me, this definitely rates very high :)
No prob. The initiating post was an invitation to a wine festival, and that's what we have :-)
 I still don't understand what use case is it that pops up every day. What
 are you trying to do? Why doesn't auto ref work?
auto ref makes a reference out of int and float. There are all manner of primitive types and very small structs that shouldn't be ref,
Why not? What does it harm? And with inlining, pointless refs should be optimized away.
 and auto ref can't possibly know.
Can't know what?
 auto ref has never to date produced the semantics that I have wanted.
Please be more specific about what and why.
 If ref is part of the type, it flows through templates neatly, it is
 also possible to use app-specific logic to decide if something should
 be ref or not, and then give that argument without static if and
 duplication of entire functions, or text mixin.
It seems you are very focused on very low level details. Not knowing specifically what and why you're focused on ref/value, I suggest the possibility of taking a larger view, focus more on algorithms, and rely on the inliner more. Take a look at the asm code generated now and then and see if your worries are justified.
Sep 24 2014
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Sep 24, 2014 at 08:55:23PM -0700, Walter Bright via Digitalmars-d wrote:
 On 9/24/2014 7:50 PM, Manu via Digitalmars-d wrote:
I'm sorry, but this is awfully vague and contains nothing
actionable.
The action I'd love to see would be "Yes, debugging is important, we should add it at a high priority on the roadmap and encourage the language community to work with the tooling community to make sure the experience is polished" ;)
I make similar statements all the time. It doesn't result in action on anyone's part. I don't tell people what to do - they work on aspects of D that interest them. Even people who ask me what to work on never follow my suggestions. They work on whatever floats their boat. It's my biggest challenge working on free software :-)
Yeah, this is characteristic of free software. If this were proprietary software like what I write at work, the PTBs would just set down items X, Y, Z as their mandate, and everyone would have to work on it, like it or not. With free software, however, if something isn't getting done, you just gotta get your hands dirty and do it yourself. Surprisingly, many times what comes out can be superior to the cruft churned out by "enterprise" programmers who were forced to write something they didn't really want to. [...]
 Note that I find gdb well nigh unusable even for C++ code, so to me an
 unusable debugger is pretty normal and I don't think much about it.
 :-) It doesn't impair my debugging sessions much.
printf debugging FTW! :-P
 I've also found that the more high level abstractions are used, the
 less useful a symbolic debugger is. Symbolic debuggers are only good
 for pedestrian, low level code that ironically is also what other
 methods are very good at, too.
[...] I don't agree with that. I think symbolic debuggers should be improved so that they *can* become useful with high level abstractions. For example, if debuggers could be made to understand templates and compile-time constants, they could become much more useful than they are today in debugging high-level code. For example, the idea of stepping through lines of code (i.e. individual statements) is a convenient simplification, but really, in modern programming languages there are multiple levels of semantics that could have a meaningful concept of "stepping forward/backward". You could step through individual expressions or subexpressions, step over function calls whose return values are passed to an enclosing function call, or step through individual arithmetic operations in a subexpression. Each of these levels of stepping could be useful in certain contexts, depending on what kind of bug you're trying to track down. Sometimes having statements as the stepping unit is too coarse-grained for certain debugging operations. Sometimes they are too fine-grained for high levels of abstractions. 
Ideally, there should be a way for the debugger to dissect your code into its constituent parts, at various levels of expression, for example:

statement: [main.d:123] auto x = f(x,y/2,z) + z*2;
==> variable allocation: [hoisted to beginning of function]
==> evaluate expression: f(x,y/2,z) + z*2
    ==> evaluate expression: f(x,y/2,z)
        ==> evaluate expression: x
            ==> load x
        ==> evaluate expression: y/2
            ==> load y: [already in register eax]
            ==> load 2: [part of operation: /]
            ==> arithmetic operation: /
        ==> evaluate expression: z
        ==> function call: f
    ==> evaluate expression: z*2
        ==> load z: [already in register ebx]
        ==> load 2: [optimized away]
        ==> arithmetic operation: * [optimized to z<<1]
    ==> evaluate sum
    ==> expression result: [in register edx]
==> assign expression to x
    ==> store x

The user can choose which level of detail to zoom into, and the debugger would allow stepping through each operation at the selected level of detail (provided it hasn't been optimized away -- if it has, ideally the debugger would tell you what the optimized equivalent is). T -- Public parking: euphemism for paid parking. -- Flora
Sep 24 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/24/2014 9:43 PM, H. S. Teoh via Digitalmars-d wrote:
 printf debugging FTW! :-P
There's more than that, but yeah. Most of my types I'll write a "pretty printer" for, and use that. No conceivable debugger can guess how I want to view my data. For example, I can pretty-print an Expression as either a tree or in infix notation.
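The pretty-printer idea is easy to sketch in D; the `Expr` class below is an invented stand-in for dmd's actual Expression class, kept only to show the shape of the technique:

```d
import std.stdio : write, writeln;

// Hypothetical expression node -- a stand-in for dmd's Expression class.
class Expr
{
    string op;        // operator, or the literal itself for leaf nodes
    Expr left, right; // null for leaves

    this(string op, Expr left = null, Expr right = null)
    {
        this.op = op;
        this.left = left;
        this.right = right;
    }

    // Infix rendering: (a + (b * c))
    string toInfix()
    {
        if (left is null)
            return op;
        return "(" ~ left.toInfix() ~ " " ~ op ~ " " ~ right.toInfix() ~ ")";
    }

    // Tree rendering: one node per line, indented by depth.
    string toTree(int depth = 0)
    {
        string pad;
        foreach (i; 0 .. depth * 2)
            pad ~= ' ';
        string s = pad ~ op ~ "\n";
        if (left !is null)
            s ~= left.toTree(depth + 1);
        if (right !is null)
            s ~= right.toTree(depth + 1);
        return s;
    }
}

void main()
{
    auto e = new Expr("+", new Expr("a"),
                      new Expr("*", new Expr("b"), new Expr("c")));
    writeln(e.toInfix()); // (a + (b * c))
    write(e.toTree());    // same expression as an indented tree
}
```

The same object can then be dumped in whichever view fits the bug at hand -- something no generic debugger view can guess.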
 I don't agree with that. I think symbolic debuggers should be improved
 so that they *can* become useful with high level abstractions. For
 example, if debuggers could be made to understand templates and
 compile-time constants, they could become much more useful than they are
 today in debugging high-level code.
The fact that they aren't should be telling. Like maybe it's an intractable problem :-) sort of like debugging optimized code.
Sep 24 2014
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Sep 24, 2014 at 10:30:49PM -0700, Walter Bright via Digitalmars-d wrote:
 On 9/24/2014 9:43 PM, H. S. Teoh via Digitalmars-d wrote:
printf debugging FTW! :-P
There's more than that, but yeah. Most of my types I'll write a "pretty printer" for, and use that. No conceivable debugger can guess how I want to view my data. For example, I can pretty-print an Expression as either a tree or in infix notation.
gdb does allow calling your program's functions out-of-band in 'print'. I've used that before when debugging C++, which is a pain when lots of templates are involved (almost every source line is demangled into an unreadable glob of <> gibberish 15 lines long). Wrote a pretty-printing free function in my program, and used `print pretty_print(my_obj)` from gdb. Worked wonders! Having said that, though, I'm still very much a printf/writeln-debugging person. It also has the benefit of working in adverse environments like embedded devices where the runtime environment doesn't let you run gdb.
I don't agree with that. I think symbolic debuggers should be
improved so that they *can* become useful with high level
abstractions. For example, if debuggers could be made to understand
templates and compile-time constants, they could become much more
useful than they are today in debugging high-level code.
The fact that they aren't should be telling. Like maybe it's an intractable problem :-) sort of like debugging optimized code.
When all else fails, I just disassemble the code and trace it side-by-side with the source code. Not only is it a good exercise in keeping my assembly skills sharp, you also get to see all kinds of tricks that optimizers nowadays can do in action. Code hoisting, rearranging, register assignments to eliminate subsequent loads, vectorizing, etc. Fun stuff. Not to mention the thrill when you finally identify the cause of the segfault by successfully mapping that specific instruction to a specific construct in the source code -- not a small achievement in this day and age of optimizing compilers and pipelined, microcoded CPUs! Nevertheless, I think there is still room for debuggers to improve. Recently, for example, I learned that gdb has acquired the ability to step through a program backwards. Just missed the point in your program where the problem first happened? No problem, just step backwards until you get back to that point! Neat stuff. (How this is implemented is left as an exercise for the reader. :-P) T -- Stop staring at me like that! It's offens... no, you'll hurt your eyes!
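For reference, the gdb feature in question is "process record and replay"; a session looks roughly like this (the commands are real gdb commands, the breakpoint location is a placeholder):

```
(gdb) break main
(gdb) run
(gdb) record              # start recording execution
(gdb) continue            # run forward until the crash or breakpoint
(gdb) reverse-step        # step one source line backwards
(gdb) reverse-continue    # run backwards to the previous breakpoint
```

Recording has a noticeable runtime cost and, as noted later in this thread, is only implemented for some target architectures.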
Sep 24 2014
parent Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 25/09/2014 07:22, H. S. Teoh via Digitalmars-d wrote:
 Nevertheless, I think there is still room for debuggers to improve.
 Recently, for example, I learned that gdb has acquired the ability to
 step through a program backwards. Just missed the point in your program
 where the problem first happened? No problem, just step backwards until
 you get back to that point! Neat stuff. (How this is implemented is left
 as an exercise for the reader. :-P)
IIRC, that was only implemented for a few architectures, and none of them were PC architectures. (Rather, it was used a lot for embedded target architectures, for which GDB is widely used as well) -- Bruno Medeiros https://twitter.com/brunodomedeiros
Oct 02 2014
prev sibling next sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 24 Sep 2014 23:22:56 -0700
"H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> wrote:

 gdb does allow calling your program's functions out-of-band in
 'print'.
it's handy. what else is handy is a "quake-like" embedded console. i'm using that in my c software a lot (i can connect to some port using telnet, inspect and change vars, call functions, etc). and when i decided to go with D, the first thing i tried to write was my "command console" subsystem. and it was a great pleasure with all that compile-time introspection, i must say. some annoyances here and there, but it's nothing compared to a huge pile of error-prone C macro magic.
Sep 25 2014
prev sibling next sibling parent Johannes Pfau <nospam example.com> writes:
Am Wed, 24 Sep 2014 22:30:49 -0700
schrieb Walter Bright <newshound2 digitalmars.com>:

 There's more than that, but yeah. Most of my types I'll write a
 "pretty printer" for, and use that. No conceivable debugger can guess
 how I want to view my data.
You can call functions with GDB, but I guess you haven't used GDB much.

Breakpoint 1, D main () at main.d:5
(gdb) print a
$1 = (ABCD &) 0x7ffff7ebfff0: {<Object> = {__vptr = 0x4b8c60 <main.ABCD.vtbl$>, __monitor = 0x0}, <No data fields>}
(gdb) print a.toString()
$2 = "Hello World!"
Sep 25 2014
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 25/09/14 07:30, Walter Bright wrote:

 There's more than that, but yeah. Most of my types I'll write a "pretty
 printer" for, and use that. No conceivable debugger can guess how I want
 to view my data.
With LLDB you can implement your own custom formatters [1]. For example, in Xcode, Apple has added a formatter for Objective-C's NSImage. Within Xcode you can hover over a variable of type NSImage and it will show a small window with the rendered image. Perhaps it's time to look at some modern alternatives and not be stuck with GDB ;) [1] http://lldb.llvm.org/varformats.html -- /Jacob Carlborg
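As a taste of [1], a summary formatter can be attached straight from the LLDB command line; the type `MyApp.Vec3` and its fields are invented for the example, and the output line is approximate:

```
(lldb) type summary add --summary-string "(${var.x}, ${var.y}, ${var.z})" MyApp.Vec3
(lldb) frame variable v
(MyApp.Vec3) v = (1, 2, 3)
```

More elaborate formatters can be written as Python functions and registered with `type summary add -F`, which is how the fancier Xcode visualizations are built.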
Sep 25 2014
parent reply "Wyatt" <wyatt.epp gmail.com> writes:
On Thursday, 25 September 2014 at 13:31:24 UTC, Jacob Carlborg 
wrote:
 Perhaps it's time to look at some modern alternatives and not 
 be stuck with GDB ;)
I might look at the "modern alternative" once it supports debugging 64-bit executables. :/ -Wyatt
Sep 25 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 2014-09-25 16:01, Wyatt wrote:

 I might look at the "modern alternative" once it supports debugging
 64-bit executables. :/
LLDB supports OS X, Linux and FreeBSD. 32 and 64bit on all of these platforms [1]. Are you looking for Windows support? [1] http://lldb.llvm.org/ -- /Jacob Carlborg
Sep 25 2014
parent reply "Wyatt" <wyatt.epp gmail.com> writes:
On Thursday, 25 September 2014 at 17:44:49 UTC, Jacob Carlborg 
wrote:
 LLDB supports OS X, Linux and FreeBSD. 32 and 64bit on all of 
 these platforms [1].
It mentioned only 32-bit ELF on the "About" page. Since that matches with what was previously the case in terms of debugging support, I didn't bother to read into it further. How strange that this incredibly important detail isn't top, front, and centre.
 Are you looking for Windows support?
If I had the extreme misfortune of working on Windows, I'd just use the Visual Studio debugger. As much as I hate to admit it, it's the best-in-class. -Wyatt
Sep 26 2014
parent Jacob Carlborg <doob me.com> writes:
On 2014-09-26 14:14, Wyatt wrote:

 It mentioned only 32-bit ELF on the "About" page.
I don't know which "About" page you're reading. The one I'm reading [1] doesn't mention ELF at all. [1] http://lldb.llvm.org/index.html -- /Jacob Carlborg
Sep 26 2014
prev sibling next sibling parent Paulo Pinto <pjmlp progtools.org> writes:
Am 25.09.2014 07:30, schrieb Walter Bright:
 On 9/24/2014 9:43 PM, H. S. Teoh via Digitalmars-d wrote:
 printf debugging FTW! :-P
There's more than that, but yeah. Most of my types I'll write a "pretty printer" for, and use that. No conceivable debugger can guess how I want to view my data. For example, I can pretty-print an Expression as either a tree or in infix notation.
In Visual Studio I can define formatters for data structures. Most of the STL containers are already configured out of the box. Additionally, I don't know another IDE that matches Visual Studio's debugging capabilities for multi-core and graphics programming. And the C++ debugger is still catching up with what the .NET debugger can do for multi-core programming. -- Paulo
Sep 25 2014
prev sibling parent "Bigsandwich" <bigsandwich gmail.com> writes:
Reading this thread makes me a little sad, because all of the 
wish list stuff seems to be about features that VS already has, 
and that I use every day :(

 For example, the idea of stepping through lines of code (i.e. 
 individual
 statements) is a convenient simplification, but really, in 
 modern
 programming languages there are multiple levels of semantics 
 that could
 have a meaningful concept of "stepping forward/backward".
http://msdn.microsoft.com/en-us/library/h5e30exc%28v=vs.100%29.aspx On Thursday, 25 September 2014 at 05:30:56 UTC, Walter Bright wrote:
 On 9/24/2014 9:43 PM, H. S. Teoh via Digitalmars-d wrote:
 printf debugging FTW! :-P
There's more than that, but yeah. Most of my types I'll write a "pretty printer" for, and use that. No conceivable debugger can guess how I want to view my data. For example, I can pretty-print an Expression as either a tree or in infix notation.
autoexp.dat can do this in the debugger: http://msdn.microsoft.com/en-us/library/zf0e8s14.aspx
 I don't agree with that. I think symbolic debuggers should be 
 improved
 so that they *can* become useful with high level abstractions. 
 For
 example, if debuggers could be made to understand templates and
 compile-time constants, they could become much more useful 
 than they are
 today in debugging high-level code.
The fact that they aren't should be telling. Like maybe it's an intractable problem :-) sort of like debugging optimized code.
http://randomascii.wordpress.com/2013/09/11/debugging-optimized-codenew-in-visual-studio-2012/
Sep 26 2014
prev sibling next sibling parent "Tofu Ninja" <emmons0 purdue.edu> writes:
On Thursday, 25 September 2014 at 03:55:22 UTC, Walter Bright 
wrote:
 No prob. The initiating post was an invitation to a wine 
 festival, and that's what we have :-)
:D
Sep 25 2014
prev sibling next sibling parent reply "Joakim" <dlang joakim.fea.st> writes:
On Thursday, 25 September 2014 at 03:55:22 UTC, Walter Bright
wrote:
 I make similar statements all the time. It doesn't result in 
 action on anyone's part. I don't tell people what to do - they 
 work on aspects of D that interest them.

 Even people who ask me what to work on never follow my 
 suggestions. They work on whatever floats their boat. It's my 
 biggest challenge working on free software :-)
One recurring theme appears to be people waiting on you to decide on whether an idea is agreeable in principle before implementing it. It would help if you put up a list on the wiki of such features, either that you want or that you agree with others would be good to have, ie some sort of feature wish list that's pre-approved by you and Andrei. D enthusiasts shouldn't have to wade through a bunch of forum postings and reddit comments to find out that C++ and gc are the current priorities. When this was pointed out to Andrei, he asked that someone else put it on the wiki, and I see that someone just added it to the agenda for 2.067. I'm sorry but it's ridiculous for you two co-BDFLs not to put these new priorities or pre-approved features (perhaps even a list of features you'd automatically reject) in a list on the wiki and maintain it yourselves. It's the least you can do considering the veto power you have. Of course, such lists don't imply a hierarchy where you tell everyone what to do, as the D community is always free to ignore your agenda. ;) But it would enable clear and concise communication, which is lacking now, and I suspect many would take your approval as a signal for what to work on next. For a concrete example, some in this thread have mentioned cleaning up the language, presumably enabled by a dfix tool so users can have their source automatically updated. But even Jacob seemed to think you two were against dfix, in the quote below: On Wednesday, 24 September 2014 at 06:16:07 UTC, Jacob Carlborg wrote:
 On 23/09/14 20:32, David Nadlinger wrote:

 Seriously, once somebody comes up with an automatic fixup 
 tool, there is
 hardly any generic argument left against language changes.
 Brian has already said that such a tool is fairly easy to create in many cases. Also that he is willing to do so if it will be used. But so far neither Andrei nor Walter has shown any sign of willingness to break code that can be fixed with a tool like this. I can understand that Brian doesn't want to create such a tool if it's not going to be used.
Andrei has voiced some support for a tool like this, though in exactly what context is unclear. I believe you once said it'd be unworkable, then when Brian showed you an example using libdparse, you took that back but did not say you wanted to start cleaning up D using such a dfix tool. Go has been doing this for years using gofix and many of us have expressed a desire for such language cleanup using automated source conversion: http://blog.golang.org/introducing-gofix I can understand why nobody builds a dfix tool if it's not going to be used officially to clean up the language. If you said you'd go for it, I imagine Brian would build such a tool and find stuff to fix. Of course, sometimes people look to you two too much to decide. I don't see why Vladimir can't build the autotester setup he wants for third-party D projects and continuously run any third-party projects he wants (all of code.dlang.org?) against dmd/druntime/phobos from git for himself. If others like it, they will start using it for their own projects. The only reason to integrate it with the dmd autotester is to notify authors of their pull requests that broke third-party code, but that can wait till such a third-party autotester has been proven for the third-party authors first. To sum up, you can't tell people what to do in the D community, but you can provide more clear direction on what the priorities are and which future paths have the green light, especially since you and Andrei are the gatekeepers for what gets into dmd.
Sep 25 2014
next sibling parent reply "Wyatt" <wyatt.epp gmail.com> writes:
On Thursday, 25 September 2014 at 11:30:52 UTC, Joakim wrote:
 On Thursday, 25 September 2014 at 03:55:22 UTC, Walter Bright
 wrote:
 I make similar statements all the time. It doesn't result in 
 action on anyone's part. I don't tell people what to do - they 
 work on aspects of D that interest them.

 Even people who ask me what to work on never follow my 
 suggestions. They work on whatever floats their boat. It's my 
 biggest challenge working on free software :-)
One recurring theme appears to be people waiting on you to decide on whether an idea is agreeable in principle before implementing it. It would help if you put up a list on the wiki of such features, either that you want or that you agree with others would be good to have, ie some sort of feature wish list that's pre-approved by you and Andrei.
There is at least this bugzilla tag: https://issues.dlang.org/buglist.cgi?keywords=preapproved&list_id=110116 ...But it's an awfully short list! Clear direction has been a common request for a while. In the vein of what you're talking about, though, this seems like a good place to leverage our existing infrastructure with Bugzilla. Open tracker bugs that cover these areas and add any issues related to them as dependencies. Looks something like this: https://bugs.gentoo.org/show_bug.cgi?id=484436 -Wyatt
Sep 25 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/25/14, 5:41 AM, Wyatt wrote:
 On Thursday, 25 September 2014 at 11:30:52 UTC, Joakim wrote:
 On Thursday, 25 September 2014 at 03:55:22 UTC, Walter Bright
 wrote:
 I make similar statements all the time. It doesn't result in action
 on anyone's part. I don't tell people what to do - they work on
 aspects of D that interest them.

 Even people who ask me what to work on never follow my suggestions.
 They work on whatever floats their boat. It's my biggest challenge
 working on free software :-)
One recurring theme appears to be people waiting on you to decide on whether an idea is agreeable in principle before implementing it. It would help if you put up a list on the wiki of such features, either that you want or that you agree with others would be good to have, ie some sort of feature wish list that's pre-approved by you and Andrei.
There is at least this bugzilla tag: https://issues.dlang.org/buglist.cgi?keywords=preapproved&list_id=110116 ....But its an awful short list! Clear direction has been a common request for a while.
I'll make a pass, also please post here specific issues you need to be looked at. Andrei
Sep 25 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/25/14, 4:30 AM, Joakim wrote:
 I'm sorry but it's ridiculous for you two co-BDFLs not to put
 these new priorities or pre-approved features (perhaps even a
 list of features you'd automatically reject) in a list on the
 wiki and maintain it yourselves.  It's the least you can do
 considering the veto power you have.
That's sensible. We have the "preapproved" tag at http://issues.dlang.org exactly for that kind of stuff. (I should note, however, that sometimes it backfires - I've added https://issues.dlang.org/show_bug.cgi?id=13517 with preapproved knowing it's sensible and entirely noncontroversial and got unexpected pushback for it.) Andrei
Sep 25 2014
parent reply "Joakim" <dlang joakim.fea.st> writes:
On Thursday, 25 September 2014 at 13:56:20 UTC, Andrei 
Alexandrescu wrote:
 On 9/25/14, 4:30 AM, Joakim wrote:
 I'm sorry but it's ridiculous for you two co-BDFLs not to put
 these new priorities or pre-approved features (perhaps even a
 list of features you'd automatically reject) in a list on the
 wiki and maintain it yourselves.  It's the least you can do
 considering the veto power you have.
That's sensible. We have the "preapproved" tag at http://issues.dlang.org exactly for that kind of stuff. (I should note, however, that sometimes it backfires - I've added https://issues.dlang.org/show_bug.cgi?id=13517 with preapproved knowing it's sensible and entirely noncontroversial and got unexpected pushback for it.)
That's not enough. While it's nice that a "preapproved" tag is being used on bugzilla, most of those issues are too low-level and an obscure bugzilla tag hardly fits the bill, particularly when most D users have never seen the D bugzilla let alone use it. It needs to be a page on the wiki or the main site, which you or any user can link to anytime people want to know the plan. I gave a specific example with dfix, and have yet to get an answer on that. Brian may have marked his DIP 65 as rejected a couple months back, but that still doesn't answer the broader question of using a dfix tool for other cleanup. You have talked about making D development more professional. It's not very professional not to have some sort of public plan of where you want the language to go. All I'm asking for is a public list of preapproved and maybe rejected features that the two of you maintain. Dfix might be on the preapproved list, ARC might be on the rejected. ;) You could also outline broad priorities like C++ support or GC improvement on such a webpage.
Sep 26 2014
parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Sep 26, 2014 at 10:22:49AM +0000, Joakim via Digitalmars-d wrote:
 On Thursday, 25 September 2014 at 13:56:20 UTC, Andrei Alexandrescu wrote:
On 9/25/14, 4:30 AM, Joakim wrote:
I'm sorry but it's ridiculous for you two co-BDFLs not to put
these new priorities or pre-approved features (perhaps even a
list of features you'd automatically reject) in a list on the
wiki and maintain it yourselves.  It's the least you can do
considering the veto power you have.
That's sensible. We have the "preapproved" tag at http://issues.dlang.org exactly for that kind of stuff. (I should note, however, that sometimes it backfires - I've added https://issues.dlang.org/show_bug.cgi?id=13517 with preapproved knowing it's sensible and entirely noncontroversial and got unexpected pushback for it.)
That's not enough. While it's nice that a "preapproved" tag is being used on bugzilla, most of those issues are too low-level and an obscure bugzilla tag hardly fits the bill, particularly when most D users have never seen the D bugzilla let alone use it. It needs to be a page on the wiki or the main site, which you or any user can link to anytime people want to know the plan.
I'm thinking either a wiki page, or a dedicated webpage on dlang.org, that contains a link to a prebaked bugzilla query that returns all preapproved issues. Or perhaps just add that to http://dlang.org/bugstats.php .
 I gave a specific example with dfix, yet to get an answer on that.
 Brian may have marked his DIP 65 as rejected a couple months back, but
 that still doesn't answer the broader question of using a dfix tool
 for other cleanup.
[...] I'm thinking we, the community, should just go ahead with writing a dfix utility, promote it, and use it. Once it becomes a de facto standard, it will be much easier to convince the BDFLs to "officially" adopt it. ;-) T -- Give a man a fish, and he eats once. Teach a man to fish, and he will sit forever.
Sep 26 2014
prev sibling parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 25/09/2014 04:55, Walter Bright wrote:
 I've also found that the more high level abstractions are used, the less
 useful a symbolic debugger is. Symbolic debuggers are only good for
 pedestrian, low level code that ironically is also what other methods
 are very good at, too.
Err... are you talking in the context of D, or programming in general? -- Bruno Medeiros https://twitter.com/brunodomedeiros
Oct 02 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 02 Oct 2014 13:18:43 +0100
Bruno Medeiros via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Err... are you talking in the context of D, or programming in general?
i haven't been using interactive debuggers for 10+ years. the only use of GDB for me is doing post-mortem inspection. i found that logging and an integrated control console let me debug my code faster than any interactive debug session can. besides, i found that i spent a lot of time doing "next, next, next, run-to-bp, next, next" in an interactive debugger instead of sitting down and thinking a little. ;-)
Oct 02 2014
parent reply "Ola Fosheim Grøstad" writes:
On Thursday, 2 October 2014 at 12:59:37 UTC, ketmar via 
Digitalmars-d wrote:
 i'm not using interactive debuggers for 10+ years. the only use 
 of GDB
 for me is doing post-mortem inspection. i found that logging and
 integrated control console lets me debug my code faster than any
 interactive debug session can.
CLI based debugging in gdb is painful. Whether logging or using a debugger is faster really depends IMO. I think regular tracing/logging is easier when you debug recursive stuff like parser or regexp engines, but a debugger is easier when you have NULL dereferencing issues, when you are getting wrong values in complex calculations or when loops don't terminate when they are supposed to.
Oct 02 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 02 Oct 2014 15:11:37 +0000
via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 CLI based debugging in gdb is painful.
i used alot of frontends too. martian ddd, then kde's frontend, then tried cgdb. and some other i can't even remember. it's not about bad interface. ;-)
 but a debugger is easier when you have NULL dereferencing issues
this is a segfault and coredump. and then... yes, i'm loading that coredump in gdb and doing bt and various prints if necessary.
 when you are getting wrong values in complex calculations
logging can help here too. if we have valid input and garbage output... well, we know where the bug is. plus validation of all data, asserts and so on. that's why i love D with its static asserts, in, out, and invariants. and integrated unittests. sure, i'm not a guru and just talking about personal experience here. i found myself much more productive after throwing interactive debuggers out of the window. ;-)
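For readers who haven't met these D features, a toy example (the function itself is invented for illustration):

```d
// in/out contracts document and check assumptions in non-release builds;
// unittests live next to the code and run with `dmd -unittest`.
int isqrt(int n)
in
{
    assert(n >= 0, "isqrt: negative input");
}
out (r)
{
    // result is the largest r with r*r <= n
    assert(r * r <= n && (r + 1) * (r + 1) > n);
}
body
{
    int r = 0;
    while ((r + 1) * (r + 1) <= n)
        ++r;
    return r;
}

unittest
{
    assert(isqrt(0) == 0);
    assert(isqrt(15) == 3);
    assert(isqrt(16) == 4);
}
```

When the contracts and unittests are this cheap to write, a lot of bugs get caught before a debugger would ever be opened.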
Oct 02 2014
parent reply "Ola Fosheim Grøstad" writes:
On Thursday, 2 October 2014 at 15:29:17 UTC, ketmar via 
Digitalmars-d wrote:
 sure, i'm not a guru and just talking about personal experience 
 here. i
 found myself much more productive after throwing interactive 
 debuggers
 out of the window. ;-)
Debugging preferences are very personal, I would think so. :-) The first time I used an interactive source level debugger was in the late 80s using Turbo Pascal, I thought it was magically great, like a revelation! My reference was writing assembly and using a monitor/disassembler to debug… Source level debugging was quite a step up! But I agree that debuggers can be annoying, reconfiguring them is often more troublesome than just adding some printf() hacks. I find them indispensable when I am really stuck though: "duh, I have spent 15 minutes on this, time to fire up the debugger". In environments like javascript/python I often can find the problem just as fast by just using the interactive console if the code is written in a functional style. The more global state you have and the less functional your style is, the more useful a debugger becomes IMO.
Oct 02 2014
next sibling parent reply "po" <yes no.com> writes:
 But I agree that debuggers can be annoying, reconfiguring them 
 is often more troublesome than just adding some printf() hacks. 
 I find them indispensable when I am really stuck though: "duh, 
 I have spent 15 minutes on this, time to fire up the debugger".
Hehhe reconfigure a debugger. Lolz. Poor linux people. Living it up like its still 1980.
Oct 02 2014
next sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 02 Oct 2014 16:22:38 +0000
po via Digitalmars-d <digitalmars-d puremagic.com> wrote:

   Hehhe reconfigure a debugger. Lolz. Poor linux people. Living it
  up like its still 1980.
did you update your antivirus today?
Oct 02 2014
prev sibling parent "Ola Fosheim Grøstad" writes:
On Thursday, 2 October 2014 at 16:22:39 UTC, po wrote:
  Hehhe reconfigure a debugger. Lolz. Poor linux people. Living 
 it up like its still 1980.
:-) Well, gdb is not my favourite, but it works well on C code. I think this holds for all kinds of debuggers (including VS and XCode)… In all debuggers you have to configure which expressions to watch, how to format them if they are complex and where the breakpoints go. Printf logging with conditionals, asserts and recursive test-functions can often be just as easy as a debugger, if not easier, when working on complex algorithms/messy state machines/deep data structures/data with custom indirection (using ints, rather than pointers).
Oct 02 2014
prev sibling parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 02 Oct 2014 16:13:31 +0000
via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 In environments like javascript/python I often can find the
 problem just as fast by just using the interactive console if the
 code is written in a functional style.
actually, i'm cheating here, 'cause most of my reasonably complex software has well-hidden interactive console inside, so i can connect to it to inspect the internal state and execute some (sometimes alot ;-) of internal commands. and i must say that integrating such console in C projects was tiresome. with D i can do it almost automatically, skipping annoying "variable registration" and wrappers for functions. the first thing i wrote in D was "console module" -- just to test if all that "CTFE, and traits, and metaprogramming" promises are real. and then i became immediately hooked to D. ;-)
Oct 02 2014
parent reply "Ola Fosheim Grøstad" writes:
On Thursday, 2 October 2014 at 16:26:15 UTC, ketmar via 
Digitalmars-d wrote:
 and i must say that integrating such
 console in C projects was tiresome. with D i can do it almost
 automatically, skipping annoying "variable registration" and 
 wrappers for functions.

 the first thing i wrote in D was "console module" -- just to 
 test if
 all that "CTFE, and traits, and metaprogramming" promises are 
 real. and
 then i became immediately hooked to D. ;-)
That's pretty cool, so you basically use the reflection capabilities of D to "generate" your own custom CLI to the application? I had not thought of that before. Interesting idea!
Oct 02 2014
next sibling parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 02 Oct 2014 16:45:21 +0000
via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 That's pretty cool, so you basically use the reflection
 capabilities of D to "generate" your own custom CLI to the
 application?
yes. some naming conventions and one mixin -- and all interesting variables and functions from the given module are automatically registered in command console. so i can inspect and change variables and fields, call free functions and class/struct member functions, even write simple scripts. it's really handy. and i connect to this console using telnet (if telnet support is activated). this also allows some forms of unified "automated testing" even for GUI apps, all without building special "debug versions", i.e. on production code.
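A heavily simplified sketch of the registration trick ketmar describes (all names here are invented; his real console additionally handles variables, argument parsing and help text):

```d
import std.stdio : writeln;

// Commands are plain static functions; the template below discovers
// them by name at compile time -- no hand-written registration tables.
struct Commands
{
    static void hello() { writeln("hello from the console"); }
    static void stats() { writeln("uptime: 42s"); }
}

void delegate()[string] registerCommands(T)()
{
    void delegate()[string] cmds;
    foreach (name; __traits(allMembers, T))
    {
        // only pick up members that are callable with no arguments
        static if (__traits(compiles, __traits(getMember, T, name)()))
        {
            cmds[name] = () { __traits(getMember, T, name)(); };
        }
    }
    return cmds;
}

void main()
{
    auto cmds = registerCommands!Commands();
    cmds["hello"]();  // dispatched by name, e.g. from a telnet line
}
```

In C the equivalent needs a macro per command plus a manual table; here adding a command is just adding a function, which is the "one mixin and some naming conventions" experience described above.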
Oct 02 2014
parent reply "Ola Fosheim Grøstad" writes:
On Thursday, 2 October 2014 at 17:02:35 UTC, ketmar via 
Digitalmars-d wrote:
 yes. some naming conventions and one mixin -- and all 
 interesting
 variables and functions from the given module are automatically
 registered in command console. so i can inspect and change 
 variables
 and fields, call free functions and class/struct member 
 functions, even
 write simple scripts. it's really handy. and i connect to this 
 console
 using telnet (if telnet support is activated).
Sounds like a great topic for a blog post! If this is easy to do then this is very useful when writing game servers or servers that retain state in general. Please share when you are ready, I am gonna leach off yer code… ;-)
Oct 03 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Fri, 03 Oct 2014 08:30:10 +0000
via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Please share when you are ready, I am gonna leach off yer code…
 ;-)
sure, i'm planning to open it as PD/WTFPL as soon as i settle some issues. there is nothing really spectacular here, i must say, just tedious wrapper generators and parsers. yet it can save some time for other people. ;-) for now you can take the ancient version and use it as a base for your own: http://repo.or.cz/w/iv.d.git/blob_plain/HEAD:/cmdcon.d (don't mind the license, consider it PD/WTFPL) it supports UDA annotations for getting help ('cmd ?'), variables and free functions. and please don't blame me, that was the first 'serious' code i did in D, and i was learning D as i was writing the module. ;-)
Oct 03 2014
prev sibling next sibling parent Martin Drašar via Digitalmars-d writes:
On 2.10.2014 19:02, ketmar via Digitalmars-d wrote:
 On Thu, 02 Oct 2014 16:45:21 +0000
 via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 That's pretty cool, so you basically use the reflection
 capabilities of D to "generate" your own custom CLI to the
 application?
 yes. some naming conventions and one mixin -- and all interesting
 variables and functions from the given module are automatically
 registered in command console. so i can inspect and change variables
 and fields, call free functions and class/struct member functions, even
 write simple scripts. it's really handy. and i connect to this console
 using telnet (if telnet support is activated).

 this also allows some forms of unified "automated testing" even for GUI
 apps, all without building special "debug versions", i.e. on production
 code.

That is mighty interesting. Would you be willing to share some code? Right now, I am teaching my students about debugging in C/C++ and I want to give them some perspective from other languages and environments. Thanks. Martin
Oct 03 2014
prev sibling next sibling parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Fri, 03 Oct 2014 10:00:07 +0200
Martin Drašar via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 That is mighty interesting. Would you be willing to share some code?
alas, only very old and rudimentary module is available. basically, it's the core of the full-featured console, but... only the core, and not very well written. i'm planning to opensource fully working thingy with bells and whistles eventually, but can't do it right now. :-( anyway, here it is: http://repo.or.cz/w/iv.d.git/blob_plain/HEAD:/cmdcon.d please note that this is not very well tested. i'm keeping it just for nostalgic reasons. ah, and you can ignore the license. consider that code as public domain/WTFPL. there is no struct/class support there, only variables and free functions. yet free functions support various types of arguments and understand default argument values.
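[As an aside, for anyone curious how a console can honour default argument values: Phobos exposes them at compile time via `std.traits.ParameterDefaults`. A tiny illustrative sketch, with a made-up `greet` function standing in for a registered command:]

```d
import std.stdio;
import std.traits : ParameterDefaults;

void greet(string name, string greeting = "hi")
{
    writeln(greeting, ", ", name);
}

void main()
{
    alias Defs = ParameterDefaults!greet;
    static assert(is(Defs[0] == void));  // `name` has no default value
    static assert(Defs[1] == "hi");      // `greeting` defaults to "hi"

    // a console that only got one argument can fill in the rest:
    greet("world", Defs[1]);             // prints: hi, world
}
```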
Oct 03 2014
parent Laeeth Isharc <spamnolaeeth nospamlaeeth.com> writes:
On Friday, 3 October 2014 at 14:42:23 UTC, ketmar wrote:
 anyway, here it is: 
 http://repo.or.cz/w/iv.d.git/blob_plain/HEAD:/cmdcon.d

 please note that this is not very well tested. i'm keeping it 
 just for nostalgic reasons.

 ah, and you can ignore the license. consider that code as 
 public domain/WTFPL.

 there is no struct/class support there, only variables and free 
 functions. yet free functions supports various types of 
 arguments and understands default agrument values.
Hi Ketmar. I hope you're well. Would you be able to share this again, as it's not in the repo now? I think I moved computer and lost the previous download. No problem if you prefer not to, but I very much liked your idea and would like to take a quick look again to see what the main things to think about are (will write from scratch though). Thanks. Laeeth.
Mar 17 2016
prev sibling next sibling parent Martin Drašar via Digitalmars-d writes:
On 3.10.2014 16:42, ketmar via Digitalmars-d wrote:
 alas, only very old and rudimentary module is available. basically,
 it's the core of the full-featured console, but... only the core, and
 not very well written. i'm planning to opensource fully working thingy
 with bells and whistles eventually, but can't do it right now. :-(

 anyway, here it is:
 http://repo.or.cz/w/iv.d.git/blob_plain/HEAD:/cmdcon.d

 please note that this is not very well tested. i'm keeping it just for
 nostalgic reasons.

 ah, and you can ignore the license. consider that code as public
 domain/WTFPL.

 there is no struct/class support there, only variables and free
 functions. yet free functions support various types of arguments and
 understand default argument values.
Thanks a lot. I just briefly went through the code and I have a question about the console use: the code is able to register functions and variables with some annotations and execute them by string commands. How do you generally use it for debugging purposes? That is, how do you use this console interactively? In your previous mail you wrote that you use telnet. Do you have another mixin that at some specific point inserts code that pauses the execution of the surrounding code and starts listening for telnet commands? Also, how do you make your console work in multithreaded programs? Thanks, Martin
Oct 06 2014
prev sibling next sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 06 Oct 2014 10:22:01 +0200
Martin Drašar via Digitalmars-d <digitalmars-d puremagic.com> wrote:

as i said this is the very first version of cmdcon, not the one i'm
using now. i'm not able to publish the current version yet.

 That is, how do you use this console interactively?
 In your previous mail you wrote that you use a telnet.
 Do you have another mixin that at some specific point inserts a code
 that pauses the execution of surrounding code and starts listening
 for telnet commands?
it depends on the event loop, actually. i have my own event loop code, and there is a hook for the telnet channel. that hook collects the line to execute, calls the cmdcon executor and outputs the cmdcon output. maybe i'll publish some std.socket-based sample later.
 Also, how do you make your console work in multithreaded programs?
you should register only shared and global vars (and i'm really missing __traits(isGShared) here!), there is no sense to register thread locals in a multithreaded program. but you can register free functions and execute 'em. those free functions should take care of threading. really, i have to revisit that code and write some samples. i'll try to do that soon. and maybe i'll update the public version of cmdcon a little too. ;-)
Oct 06 2014
prev sibling next sibling parent Martin Drašar via Digitalmars-d writes:
On 6.10.2014 12:15, ketmar via Digitalmars-d wrote:
 On Mon, 06 Oct 2014 10:22:01 +0200
 Martin Drašar via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 as i said this is the very first version of cmdcon, not the one i'm
 using now. i'm not able to publish the current version yet.

 That is, how do you use this console interactively?
 In your previous mail you wrote that you use a telnet.
 Do you have another mixin that at some specific point inserts a code
 that pauses the execution of surrounding code and starts listening
 for telnet commands?
 it depends on the event loop, actually. i have my own event loop code,
 and there is a hook for the telnet channel. that hook collects the line
 to execute, calls the cmdcon executor and outputs the cmdcon output.
 maybe i'll publish some std.socket-based sample later.

 Also, how do you make your console work in multithreaded programs?
 you should register only shared and global vars (and i'm really missing
 __traits(isGShared) here!), there is no sense to register thread locals
 in a multithreaded program.

 but you can register free functions and execute 'em. those free
 functions should take care of threading.

 really, i have to revisit that code and write some samples. i'll try to
 do that soon. and maybe i'll update the public version of cmdcon a
 little too. ;-)
Ok, thanks for your answers. If you get your code to publishable state, I am sure a lot of people will be interested. Cheers, Martin
Oct 06 2014
prev sibling next sibling parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 06 Oct 2014 13:34:23 +0200
Martin Drašar via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Ok, thanks for your answers. If you get your code to publishable
 state, I am sure a lot of people will be interested.
as i can't publish my current version of cmdcon, i decided to write
another one from scratch. it contains a lot less mixin() codegen, can
do everything cmdcon.d can do and has structs/classes already working.
this is the excerpt from testing code:

  import cong; // our console module

  @ConName("intVar00") @ConHelp("simple int variable")
  @ConDescription("this is just an int variable.\nordinary one, nothing special.")
  __gshared int v0 = 42;
  __gshared string v1 = "hi!";

  class A {
    int i = 42;
    string s = "hello";
    this () {}
    this (string v) { s = v; }
    int bar () { writeln("BAR: s=", s); return 666; }
    void foo (float[] ff) {}
  }
  __gshared A a; // yes, this can be null

  void foo (int n, string s="hi", double d=0.666) {
    writefln("n=%s; s=[%s]; d=%s", n, s, d);
  }

  struct S {
    int i = 666;
    string s = "bye";
    int sbar () { writeln("SBAR: s=", s); return 666; }
    void sfoo (float[] ff) {}
  }
  __gshared S s;
  S s1;

  void main (string[] args) {
    writeln("known commands:");
    foreach (n; conGetKnownNames()) writeln("  ", n);
    writeln("---");
    a = new A();
    conExecute("a bar"); // virtual method call for object instance
    a.s = "first change";
    conExecute("a bar");
    a = new A("second change");
    conExecute("a bar"); // virtual method call for ANOTHER object instance
    foreach (arg; args[1..$]) {
      try {
        conExecute(arg);
      } catch (Exception e) {
        writeln("***ERROR: ", e.msg);
      }
    }
  }

  mixin(ConRegisterAll); // register 'em all!

==========
known commands:
  intVar00
  a
  foo
  v1
  s
---
(autoregistration rocks!)

some sample commands and results:
  intVar00
  42 (Int)
  n=42; s=[hi]; d=0.666
  a.i
  42 (Int)
  BAR: s=second change
  SBAR: s=bye

by default `mixin(ConRegisterAll);` tries to register all suitable
public symbols. one can use the @ConIgnore UDA to ignore a symbol and
some other UDAs to change the name, add help and so on. alas, there is
no way to get Ddoc info at compile time (what a pity!).

you can't register pointer objects (i.e. `__gshared int *a` will not
pass). it's doable (class fields are such objects internally), but i
just don't need this.

i also found some bugs in compilers (both dmd and gdc), but am not
ready to dustmite and report 'em yet. GDC just segfaults now, but DMD
works. there is also a bug in CDGC, which took me a whole day of
useless code motion (i forgot that i'm using DMD built with CDGC ;-).

so the base mechanics are in place, i need to debug some things and
write a simple sample with an event loop and telnet. the new code
includes only WTFPL'ed parts, so it will probably be WTFPLed too. hope
to finish this by the weekend.
Oct 13 2014
parent "Ola Fosheim Grøstad" writes:
On Monday, 13 October 2014 at 21:09:29 UTC, ketmar via 
Digitalmars-d wrote:
 as i can't publish my current version of cmdcon, i decided to 
 write another one from scratch.
Cool, looks like a fun module to play with! :-)
Oct 13 2014
prev sibling next sibling parent Martin Drašar via Digitalmars-d writes:
On 13.10.2014 23:09, ketmar via Digitalmars-d wrote:
 as i can't publish my current version of cmdcon, i decided to write
 another one from scratch. it contains a lot less mixin() codegen,
 can do everything cmdcon.d can do and has structs/classes already
 working. this is the excerpt from testing code:
Nice! That's what I call a can-do attitude :) I just happen to have a nice place in the code where it can get used. I'm looking forward to playing with it. Martin
Oct 13 2014
prev sibling next sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
as i see that some people are eager to play with cmdcon-ng, i set up a
git repo with it: http://repo.or.cz/w/cong.d.git

lay your hands on it while it's hot!

the code is little messy (there are some napoleonian plans which aren't
made into it), but it should work. at least with unittests and sample.

enjoy, and happy hacking!
Oct 13 2014
prev sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Tue, 14 Oct 2014 00:09:17 +0300
ketmar via Digitalmars-d <digitalmars-d puremagic.com> wrote:

let's stop hijacking this thread. here is The Official Thread for
cmdcon-ng:
http://forum.dlang.org/thread/mailman.772.1413240502.9932.digitalmars-d puremagic.com
Oct 13 2014
prev sibling parent reply "user" <user xxx.com> writes:
On Wednesday, 24 September 2014 at 06:28:21 UTC, Manu via
Digitalmars-d wrote:
 On 20 September 2014 22:39, Tofu Ninja via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 There was a recent video[1] by Jonathan Blow about what he 
 would want in a
 programming language designed specifically for game 
 development. Go, Rust,
 and D were mentioned and his reason for not wanting to use D 
 is is that it
 is "too much like C++" although he does not really go into it 
 much and it
 was a very small part of the video it still brings up some 
 questions.

 What I am curious is what are the worst parts of D? What sort 
 of things
 would be done differently if we could start over or if we were 
 designing a
 D3? I am not asking to try and bash D but because it is 
 helpful to know
 what's bad as well as good.

 I will start off...
 GC by default is a big sore point that everyone brings up
 "is" expressions are pretty wonky
 Libraries could definitely be split up better

 What do you think are the worst parts of D?

 [1] https://www.youtube.com/watch?v=TH9VCN6UkyQ
Personally, after years of use, my focus on things that really annoy
me has shifted away from problems with the language, and firmly
towards basic practicality and productivity concerns. I'm for
addressing things that bother the hell out of me every single day.
I should by all reason be more productive in D, but after 6 years of
experience, I find I definitely remain less productive, thanks mostly
to tooling and infrastructure.

1. Constant rejection of improvements because "OMG breaking change!".
Meanwhile, D has been breaking my code on practically every release
for years. I don't get this: changes that are deliberately breaking
and would make significant improvements get rejected, but breaking
changes are allowed anyway because they are bug fixes?
If the release breaks code, then accept that fact and make some real,
proper breaking changes that make D substantially better!
It is my opinion that D adopters don't adopt D because it's perfect
just how it is and they don't want it to improve with time, they
adopt D *because they want it to improve with time*! That implies an
acceptance (even a welcoming) of breaking changes.

2. Tooling is still insufficient. I use Visual Studio, and while
VisualD is good, it's not great. Like almost all tooling projects,
there is only one contributor, and I think this trend presents huge
friction to adoption. Tooling is always factored outside of the D
community and their perceived realm of responsibility. I'd like to
see tooling taken into the core community and issues/bugs treated
just as seriously as issues in the compiler/language itself.

3. Debugging is barely ever considered important. I'd love to see a
concerted focus on making the debug experience excellent. Iain had a
go at GDB, and I understand there is great improvement there. Sadly,
we recently lost the developer of Mago (a Windows debugger). There's
lots of work we could do here, and I think it's of gigantic impact.

4. 'ref' drives me absolutely insane. It seems so trivial, but 6
years later, I still can't pass an rvalue->ref (been discussed
endlessly) or create a ref local, and the separation from the type
system makes it a nightmare in generic code. This was a nuisance for
me on day-1, and has been grinding me down endlessly for years. It
has now far eclipsed my grudges with the GC/RC, or literally anything
else about the language, on account of frequency of occurrence:
almost daily.
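[For readers unfamiliar with the complaint: `ref` in D is a parameter storage class rather than part of the type, and rvalues cannot bind to it. A minimal illustration of the behaviour being described:]

```d
void bump(ref int x) { x += 1; }

void main()
{
    int a = 10;
    bump(a);         // fine: `a` is an lvalue
    // bump(42);     // error: the rvalue literal cannot bind to `ref`
    // bump(a + 1);  // error: the temporary is an rvalue too

    // and since `ref` is not a type, you cannot declare a ref local:
    // ref int r = a;  // error outside foreach/function signatures
}
```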
i couldn't agree more. i would like to add that, coming from D1's clean and nice syntax, D2 has become a syntactic monster that is highly irritating and hard to get used to. it's littered with @ like a scripting language. that really sucks!
Sep 24 2014
next sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 24 Sep 2014 08:53:50 +0000
user via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 it's littered with @
 like a scripting language. that really sucks!
do you like the fact that you can't have variable named "body"? do you want to have more such forbidden names?
Sep 24 2014
prev sibling parent "eles" <eles215 gzk.dot> writes:
On Wednesday, 24 September 2014 at 08:53:51 UTC, user wrote:
 On Wednesday, 24 September 2014 at 06:28:21 UTC, Manu via
 Digitalmars-d wrote:
 On 20 September 2014 22:39, Tofu Ninja via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 i couldn't agree more. i would like to add, that coming from 
 D1's
 clean and nice syntax - D2 becomes a syntactic monster
You are a bit right, it is about on the edge, and dragging all those indecisions along only makes things worse. People usually say that complexity is not that bad, because you only use what you know (but what about maintenance?). Still, people criticize C++ for being too complex. http://abstrusegoose.com/strips/ars_longa_vita_brevis.PNG 80 percent of C++ programmers know only 20% of the language. However, not *the same* 20%...
Sep 24 2014
prev sibling next sibling parent reply "Iain Buclaw" <ibuclaw gdcproject.org> writes:
On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote:
 What do you think are the worst parts of D?
Anything in the spec that depends on you having an x86 CPU, or being
tied to a specific platform.

1) D Inline Assembler.
- First, because suddenly, to have a conformant compiler, you need to
implement an entire assembler, rather than doing the sensible thing
and just offloading it to GAS or some other tool that can assemble
the code for you.
- Second, because it creates so many holes: it invalidates nothrow
checks, allows jumps to skip over initialisations, etc.

2) __simd
- The gentleman's equivalent of asm { di 0xF30F58, v1, v2; }

3) va_argsave_t implementation
- The brainchild of "let's get it working, then pretty" when it came
to porting DMD to 64bit. Everyone else managed to support 64bit
va_list just fine for years without it.

4) Array operations only work if evaluation order is done from right
to left.
- Contradictory to the rest of the language, which promotes
left-to-right evaluation.

5) CTFE'd Intrinsics
- First, there's a disparity between what is a compiler intrinsic and
what is a CTFE intrinsic. Eg: tan()
- Second, there is no use case for being able to run at compile time
what is essentially a specialist x87 opcode.

6) Shared library support
- Still no documentation after 2 or more years of requesting it.

7) Interfacing with C++
- A new set of features that is in danger of falling into the same
"let's get it working" pit. First warning sign was C++ template
mangling, then DMD gave up on mangling D 'long' in any predictable
way. It's all been downhill from there.
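[For context on point 4: D's array operations are its element-wise slice expressions. A minimal example of the feature under discussion:]

```d
import std.stdio;

void main()
{
    int[3] a;
    int[3] b = [1, 2, 3];
    int[3] c = [10, 20, 30];

    a[] = b[] + c[];      // element-wise addition into a
    writeln(a);           // [11, 22, 33]

    a[] = b[] * 2 + c[];  // combined vector expressions work too
    writeln(a);           // [12, 24, 36]
}
```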
Sep 25 2014
next sibling parent "Iain Buclaw" <ibuclaw gdcproject.org> writes:
On Thursday, 25 September 2014 at 07:34:11 UTC, Iain Buclaw wrote:
 On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja 
 wrote:
 What do you think are the worst parts of D?
Anything in the spec that depends on you having an x86 CPU, or being tied to a specific platform.
Special extra 8) Real - What a pain.
Sep 25 2014
prev sibling parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Iain Buclaw"  wrote in message news:dqgkcmdmxekzqpvfbcim forum.dlang.org...

On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote:
 What do you think are the worst parts of D?
 1) D Inline Assembler.
Relying on the system assembler sucks too.
 7) Interfacing with C++
 - A new set of features that is danger of falling into the same "let's get 
 it working" pit.  First warning sign was C++ template mangling, then DMD 
 gave up on mangling D 'long' in any predictable way.  It's all been 
 downhill from there.
C++ template mangling is fine. 'long' mangling is messy, but it's better than what was there before.
 8) Real
 - What a pain.
Oh yeah. D's one variable-sized type.
Sep 25 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/25/2014 9:26 PM, Daniel Murphy wrote:
 Oh yeah.  D's one variable-sized type.
Pointers too!
Sep 26 2014
prev sibling next sibling parent Charles Hixson via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 09/20/2014 05:39 AM, Tofu Ninja via Digitalmars-d wrote:
 There was a recent video[1] by Jonathan Blow about what he would want 
 in a programming language designed specifically for game development. 
 Go, Rust, and D were mentioned and his reason for not wanting to use D 
 is is that it is "too much like C++" although he does not really go 
 into it much and it was a very small part of the video it still brings 
 up some questions.

 What I am curious is what are the worst parts of D? What sort of 
 things would be done differently if we could start over or if we were 
 designing a D3? I am not asking to try and bash D but because it is 
 helpful to know what's bad as well as good.

 I will start off...
 GC by default is a big sore point that everyone brings up
 "is" expressions are pretty wonky
 Libraries could definitely be split up better

 What do you think are the worst parts of D?

 [1] https://www.youtube.com/watch?v=TH9VCN6UkyQ
The worst part of D is the limited libraries. This often causes me to choose Python instead, and I'm sure it often causes others to choose Java or C++ or ... Mind you, many of the libraries "sort of" exist, but they don't work well. This is a pity, because if there were, say, a decent wrapper for SQLite, then there would be many more uses. (Yes, I know that the C interface code is included... that's why I picked that particular example.) OTOH, it's not clear how to solve this, outside of convincing more people to spend time wrapping libraries. But I'm not the right person, because my prior attempts have ended up as half-hearted failures... also, I don't really like the D template syntax. (For that matter, I'm dubious about the entire "template" approach, though many people clearly find it reasonable.)
Oct 04 2014
prev sibling next sibling parent reply "Dicebot" <public dicebot.lv> writes:
I am in the mood to complain today so this feels like a good 
moment to post a bit more extended reply here.

There are three big issues that harm D development most in my 
opinion:

1) lack of "vision"

TDPL was an absolutely awesome book because it explained "why?" as
opposed to "how?". Such insight into the language authors' rationale
is incredibly helpful for long-term contribution. Unfortunately,
it didn't cover all parts of the language, and many new things have
been added since it came out.

Right now I have no idea where the development is headed and what 
to expect from next few releases. I am not speaking about 
wiki.dlang.org/Agenda but about bigger picture. Unexpected focus 
on C++ support, thread about killing auto-decoding, recent ref 
counting proposal - all this stuff comes from language authors 
but does not feel like a strategic additions. It feels like yet 
another random contribution, no different from contribution/idea 
of any other D user.

Anarchy-driven development is pretty cool thing in general but 
only if there is a base, a long-term vision all other 
contributions are built upon. And I think it is primary 
responsibility of language authors to define such as clear as 
possible. It is very difficult task but it simply can't be 
delegated.

2) reliable release base

I think this is the most important part of the open-source
infrastructure needed to attract more contributions, and something
that also belongs to the "core team". I understand why Walter was so
eager to delegate it, but right now the truth is that once Andrew had
to temporarily leave, the whole release process immediately stalled.
And finding a replacement is not easy - this task is inherently
thankless, as it implies spending time and resources on stuff you
personally don't need at all.

Same applies to versioning - it got considerably better with the
introduction of minor versions, but it is still far from reasonable
SemVer, and the cherry-picking approach still feels like madness.

And the current situation, where the disappearance of one person has
completely blocked releases, simply tells everyone "D is still
terribly immature".
3) lack of field testing

Too many new features get added simply because they look
theoretically sound. I think it is quite telling that the most robust
parts of D are the ones designed based on the mistakes of other
languages (primarily C++), while most innovations tend to become a
collection of hacks stockpiled together (something that same C++ is
infamous for).

I am disturbed when Andrei comes with a proposal that possibly
affects the whole damn Phobos (memory management flags) and asks us
to trust his experience and authority on the topic while rejecting
patterns that are confirmed to be working well in real production
projects. Don't get me wrong, I don't doubt Andrei's authority on the
memory management topic (it is miles ahead of mine at the very
least), but I simply don't believe any living person in this world
can design such a big change from scratch without some extended
feedback from real deployed projects.

This is closely related to the SemVer topic. I'd love to see D3. And
D4 soon after. And probably a new major version increase every year
or two. This allows tinkering with really big changes without being
concerned about how they will affect your code in the next release.

Don has been mentioning that Sociomantic is all for breaking the code
for the greater good, and I fully agree with him. But introducing
such surprise solutions creates a huge risk of either sticking with
an imperfect design and patching it (what we have now) or changing
the same stuff back and forth every release (and _that_ is bad).
Oct 05 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/5/14, 7:55 AM, Dicebot wrote:
 1) lack of "vision"
The vision is to expand user base and make a compelling case for using D alongside existing code bases. There are two important aspects to that: interoperability with C++, and using D without a garbage collector.
 Right now I have no idea where the development is headed and what to
 expect from next few releases. I am not speaking about
 wiki.dlang.org/Agenda but about bigger picture. Unexpected focus on C++
 support, thread about killing auto-decoding, recent ref counting
 proposal - all this stuff comes from language authors but does not feel
 like a strategic additions.
1. C++ support is good for attracting companies featuring large C++
codebases to get into D for new code without disruptions.

2. Auto-decoding is blown out of proportion and a distraction at this
time.

3. Ref counting is necessary again for encouraging adoption. We've
framed GC as a user education matter for years. We might have even
been right for the most part, but it doesn't matter. Fact is that a
large potential user base will simply not consider a GC language.
 It feels like yet another random
 contribution, no different from contribution/idea of any other D user.

 Anarchy-driven development is pretty cool thing in general but only if
 there is a base, a long-term vision all other contributions are built
 upon. And I think it is primary responsibility of language authors to
 define such as clear as possible. It is very difficult task but it
 simply can't be delegated.
I'm all about vision. I do agree we've been less so in the past.
 2) reliable release base

 I think this is most important part of open-source infrastructure needed
 to attract more contributions and something that also belongs to the
 "core team". I understand why Walter was so eager to delegate is but
 right now the truth is that once Andrew has to temporarily leave all
 release process has immediately stalled. And finding replacement is not
 easy - this task is inherently ungrateful as it implies spending time
 and resources on stuff you personally don't need at all.
We now have Martin Nowak as the point of contact.
 3) lack of field testing

 Too many new features get added simply because they look theoretically
 sound.
What would those be?
 I think it is quite telling that most robust parts of D are the
 ones that got designed based on mistake experience of other languages
 (primarily C++) and most innovations tend to fall into collection of
 hacks stockpiled together (something same C++ is infamous for).

 I am disturbed when Andrei comes with proposal that possibly affects
 whole damn Phobos (memeory management flags) and asks to trust his
 experience and authority on topic while rejecting patterns that are
 confirmed to be working well in real production projects.
Policy-based design is more than one decade old, and older under
other guises. Reference counting is many decades old. Both have been
humongous success stories for C++. No need to trust me or anyone, but
at some point decisions will be made. Most decisions don't make
everybody happy. To influence them, it suffices to argue your case
properly. I hope you don't have the feeling that appeal to authority
is used to counter real arguments. I _do_ trust my authority over
someone else's, especially when I'm on the hook for the decision
made. I won't ever say "this is a disaster, but we did it because a
guy on the forum said it'll work".
 Don't get me
 wrong, I don't doubt Andrei authority on memory management topic (it is
 miles ahead of mine at the very least) but I simply don't believe any
 living person in this world can design such big change from scratch
 without some extended feedback from real deployed projects.
Feedback is great, thanks. But we can't test everything before actually doing anything. I know how PBD works and I know how RC works, both from having hacked with them for years. I know where this will go, and it's somewhere good.
 This is closely related to SemVer topic. I'd love to see D3. And D4 soon
 after. And probably new prime version increase every year or two. This
 allows to tinker with really big changes without being concerned about
 how it will affect your code in next release.
Sorry, I'm not on board with this. I believe it does nothing than balkanize the community, and there's plenty of evidence from other because they have lock-in with their user base, monopoly on tooling, and a simple transition story ("give us more money").
 Don has been mentioning that Sociomantic is all for breaking the code
 for the greater good and I fully agree with him. But introducing such
 surprise solutions creates a huge risk of either sticking with imperfect
 design and patching it (what we have now) or changing same stuff back
 and forth every basic release (and _that_ is bad).
I don't see what is surprising about my vision. It's simple and clear. C++ and GC. C++ and GC. Andrei
Oct 05 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Sunday, 5 October 2014 at 15:38:58 UTC, Andrei Alexandrescu 
wrote:
 On 10/5/14, 7:55 AM, Dicebot wrote:
 1) lack of "vision"
The vision is to expand user base and make a compelling case for using D alongside existing code bases. There are two important aspects to that: interoperability with C++, and using D without a garbage collector.
 Right now I have no idea where the development is headed and 
 what to
 expect from next few releases. I am not speaking about
 wiki.dlang.org/Agenda but about bigger picture. Unexpected 
 focus on C++
 support, thread about killing auto-decoding, recent ref 
 counting
 proposal - all this stuff comes from language authors but does 
 not feel
 like a strategic additions.
1. C++ support is good for attracting companies featuring large C++ codebases to get into D for new code without disruptions.

2. Auto-decoding is blown out of proportion and a distraction at this time.

3. Ref counting is necessary, again for encouraging adoption. We've framed GC as a user education matter for years. We might have even been right for the most part, but it doesn't matter. Fact is that a large potential user base will simply not consider a GC language.
No need to explain it here. When I speak about vision I mean something that anyone coming to the dlang.org page or GitHub repo sees. Something that is explained in a bit more detail, possibly with code examples. I know I am asking for much, but seeing a quick reference for "imagine this stuff is implemented, this is how your program code will be affected and this is why it is a good thing" could be a huge deal. Right now your rationales get lost in forum discussion threads and it is hard to understand what really is the Next Big Thing and what is just a forum argument blown out of proportion. There was a go at properties, at eliminating destructors, at rvalue references and whatever else I have forgotten by now. It all pretty much ended with a "do nothing" outcome for one reason or the other. The fact that you don't seem to have a consensus with Walter on some topics (auto-decoding, yeah) doesn't help either. Language marketing is not about posting links on reddit, it is the very hard work of communicating your vision so that it is clear even to a random passer-by.
 2) reliable release base

 I think this is the most important part of the open-source 
 infrastructure needed to attract more contributions and something 
 that also belongs to the "core team". I understand why Walter was 
 so eager to delegate it, but right now the truth is that once 
 Andrew had to temporarily leave, the whole release process 
 immediately stalled. And finding a replacement is not easy - this 
 task is inherently thankless, as it implies spending time and 
 resources on stuff you personally don't need at all.
We now have Martin Nowak as the point of contact.
And what if he gets busy too? :)
 3) lack of field testing

 Too many new features get added simply because they look 
 theoretically
 sound.
What would those be?
Consider something like `inout`. It is a feature addressing an issue specific to D, and it looked perfectly reasonable when it was introduced. And right now there are some fishy hacks around it even in Phobos (like forced inout delegates in traits) that came from originally unexpected use cases. It is quite likely that re-designing it from scratch based on existing field experience would have yielded better results.
 Policy-based design is more than one decade old, and older 
 under other guises. Reference counting is many decades old. 
 Both have been humongous success stories for C++.
Probably I have missed the point where the new proposal was added, but the original one was not using true policy-based design but a set of enum flags instead (no way to use a user-defined policy). The reference counting experience I am aware of shows that it is both successful in some cases and inapplicable in others. But I don't know of any field experience showing that choosing between RC and GC as a policy is a good/sufficient tool to minimize garbage creation in libraries - the real issue we need to solve, which the original proposal does not mention at all.
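For readers unfamiliar with the term, the distinction Dicebot draws can be sketched in C++, where policy-based design originated. In a hypothetical `Buffer` (all names invented for illustration), the policy is a user-supplied *type* plugged in as a template parameter, not a value picked from a closed enum of library-provided flags:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Illustrative sketch of policy-based design: the allocation policy is a
// user-supplied type, not an enum flag baked into the library.
struct MallocPolicy {
    static void* allocate(std::size_t n) { return std::malloc(n); }
    static void deallocate(void* p) { std::free(p); }
};

// A user-defined policy the library authors never anticipated: it counts
// live allocations, something no fixed set of enum flags could express.
struct CountingPolicy {
    static inline std::size_t live = 0;
    static void* allocate(std::size_t n) { ++live; return std::malloc(n); }
    static void deallocate(void* p) { --live; std::free(p); }
};

template <typename T, typename Alloc = MallocPolicy>
class Buffer {
    T* data_;
    std::size_t len_;
public:
    explicit Buffer(std::size_t n)
        : data_(static_cast<T*>(Alloc::allocate(n * sizeof(T)))), len_(n) {}
    ~Buffer() { Alloc::deallocate(data_); }
    Buffer(const Buffer&) = delete;
    Buffer& operator=(const Buffer&) = delete;
    T& operator[](std::size_t i) { return data_[i]; }
    std::size_t size() const { return len_; }
};
```

The contrast in one line: enum flags close the set of behaviours at library-design time, while a policy type leaves it open to any user.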
 No need to trust me or anyone, but at some point decisions will 
 be made. Most decisions don't make everybody happy. To 
 influence them it suffices to argue your case properly. I hope 
 you don't have the feeling appeal to authority is used to 
 counter real arguments. I _do_ trust my authority over someone 
 else's, especially when I'm on hook for the decision made. I 
 won't ever say "this is a disaster, but we did it because a guy 
 on the forum said it'll work".
I don't want to waste your time arguing about irrelevant things simply because I have misinterpreted how the proposed solution fits the big picture. It is still unclear why the proposed scheme is incompatible with tweaking Phobos utilities into input/output ranges. I am stupid and I am asking for detailed explanations before any arguments can be made :) And not just explanations for me but stuff anyone interested can find quickly.
 This is closely related to SemVer topic. I'd love to see D3. 
 And D4 soon
 after. And probably new prime version increase every year or 
 two. This
 allows to tinker with really big changes without being 
 concerned about
 how it will affect your code in next release.
Sorry, I'm not on board with this. I believe it does nothing but balkanize the community, and there's plenty of evidence from other languages (Perl, Python). Microsoft could afford to do that because they have lock-in with their user base, a monopoly on tooling, and a simple transition story ("give us more money").
You risk balkanization by keeping things as they are. We do have talks at work sometimes about whether simply forking the language may be a more practical approach than pushing the necessary breaking changes upstream by the time the D2 port is complete. Those are just talks of course, and until the porting is done it is all just speculation, but it does indicate a certain level of unhappiness.
 Don has been mentioning that Sociomantic is all for breaking 
 the code
 for the greater good and I fully agree with him. But 
 introducing such
 surprise solutions creates a huge risk of either sticking with 
 imperfect
 design and patching it (what we have now) or changing same 
 stuff back
 and forth every basic release (and _that_ is bad).
I don't see what is surprising about my vision. It's simple and clear. C++ and GC. C++ and GC.
It came as a surprise. It is unclear how long it will stay. It is unclear what exactly the goal is. Have you ever considered starting a blog about your vision of D development to communicate it better to a wider audience? :)
Oct 05 2014
next sibling parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 5 October 2014 at 16:14:18 UTC, Dicebot wrote:
 No need to explain it here. When I speak about vision I mean 
 something that anyone coming to dlang.org page or GitHub repo 
 sees. Something that is explained in a bit more details, 
 possibly with code examples. I know I am asking much but seeing 
 quick reference for "imagine this stuff is implemented, this is 
 how your program code will be affected and this is why it is a 
 good thing" could have been huge deal.
Something like this would be nice: http://golang.org/s/go14gc
Oct 05 2014
parent reply "eles" <eles215 gzk.dot> writes:
On Sunday, 5 October 2014 at 21:59:21 UTC, Ola Fosheim Grøstad 
wrote:
 On Sunday, 5 October 2014 at 16:14:18 UTC, Dicebot wrote:
 No need to explain it here. When I speak about vision I mean 
 something that anyone coming to dlang.org page or GitHub repo 
 sees. Something that is explained in a bit more details, 
 possibly with code examples. I know I am asking much but 
 seeing quick reference for "imagine this stuff is implemented, 
 this is how your program code will be affected and this is why 
 it is a good thing" could have been huge deal.
Something like this would be nice: http://golang.org/s/go14gc
(sorry, replying to this answer because it is shorter) Would a strategy work where pointers are unique by default, and only become shared, weak or naked if explicitly declared as such?
Oct 05 2014
parent "eles" <eles215 gzk.dot> writes:
On Sunday, 5 October 2014 at 22:11:38 UTC, eles wrote:
 On Sunday, 5 October 2014 at 21:59:21 UTC, Ola Fosheim Grøstad 
 wrote:
 On Sunday, 5 October 2014 at 16:14:18 UTC, Dicebot wrote:
 No need to explain it here. When I speak about vision I mean 
 something that anyone coming to dlang.org page or GitHub repo 
 sees. Something that is explained in a bit more details, 
 possibly with code examples. I know I am asking much but 
 seeing quick reference for "imagine this stuff is 
 implemented, this is how your program code will be affected 
 and this is why it is a good thing" could have been huge deal.
Something like this would be nice: http://golang.org/s/go14gc
(sorry, replying to this answer because it is shorter) Would a strategy work where pointers are unique by default, and only become shared, weak or naked if explicitly declared as such?
of course, would it be viable?
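For what it's worth, eles' suggested default maps closely onto the ownership ladder modern C++ already spells out with its standard smart pointers; a sketch (illustrative only, not a proposal for D semantics):

```cpp
#include <cassert>
#include <memory>

// Unique ownership by default, with sharing and weak observation only
// as explicit opt-ins - the convention eles' suggestion would make a
// language default.
std::unique_ptr<int> make_unique_owner() {
    return std::make_unique<int>(1);            // sole owner, no ref count
}

void ownership_demo() {
    auto u = make_unique_owner();               // unique by default
    std::shared_ptr<int> s = std::move(u);      // explicit opt-in to sharing
    std::weak_ptr<int> w = s;                   // explicit weak observer
    assert(u == nullptr);                       // ownership was transferred
    assert(!w.expired());                       // payload still alive via s
}
```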
Oct 05 2014
prev sibling next sibling parent reply "Wyatt" <wyatt.epp gmail.com> writes:
On Sunday, 5 October 2014 at 16:14:18 UTC, Dicebot wrote:
 No need to explain it here. When I speak about vision I mean 
 something that anyone coming to dlang.org page or GitHub repo 
 sees. Something that is explained in a bit more details, 
 possibly with code examples. I know I am asking much but seeing 
 quick reference for "imagine this stuff is implemented, this is 
 how your program code will be affected and this is why it is a 
 good thing" could have been huge deal.
 Right now your rationales get lost in forum discussion threads
Jerry Christmas, this right here! Andrei, I know you keep chanting "C++ and GC", and that's cool and all, but it's also kind of meaningless because we can't read minds. Judging by the recent thread where someone (Ola?) asked what "C++ support" actually means and received precisely zero useful guidance, no one else does either. (This isn't to say I don't think it's important, but the scope of what you're doing right now and how it materially helps end users isn't really clear.)
 There was a go at properties
SALT. -Wyatt
Oct 06 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/6/14, 5:42 AM, Wyatt wrote:
 On Sunday, 5 October 2014 at 16:14:18 UTC, Dicebot wrote:
 No need to explain it here. When I speak about vision I mean something
 that anyone coming to dlang.org page or GitHub repo sees. Something
 that is explained in a bit more details, possibly with code examples.
 I know I am asking much but seeing quick reference for "imagine this
 stuff is implemented, this is how your program code will be affected
 and this is why it is a good thing" could have been huge deal.
 Right now your rationales get lost in forum discussion threads
Jerry Christmas, this right here! Andrei, I know you keep chanting "C++ and GC", and that's cool and all, but its also kind of meaningless because we can't read minds.
I understand. What would be a good venue for discussing such topics? I thought the D language forum would be most appropriate. -- Andrei
Oct 06 2014
next sibling parent reply "Wyatt" <wyatt.epp gmail.com> writes:
On Monday, 6 October 2014 at 13:54:05 UTC, Andrei Alexandrescu 
wrote:
 On 10/6/14, 5:42 AM, Wyatt wrote:
 On Sunday, 5 October 2014 at 16:14:18 UTC, Dicebot wrote:
 No need to explain it here. When I speak about vision I mean 
 something
 that anyone coming to dlang.org page or GitHub repo sees. 
 Something
 that is explained in a bit more details, possibly with code 
 examples.
 I know I am asking much but seeing quick reference for 
 "imagine this
 stuff is implemented, this is how your program code will be 
 affected
 and this is why it is a good thing" could have been huge deal.
 Right now your rationales get lost in forum discussion threads
Jerry Christmas, this right here! Andrei, I know you keep chanting "C++ and GC", and that's cool and all, but its also kind of meaningless because we can't read minds.
I understand. What would be a good venue for discussing such topics? I thought the D language forum would be most appropriate. -- Andrei
Sure, the newsgroup is a great place to discuss the minutiae of specific features and figure out how they might be implemented and what design tradeoffs need to be made. I think we've shown we can disagree about the colour of any bikeshed of any shape and construction at this point! But in what venue do you feel comfortable holding the easily-accessible public record of your intent for C++ support, so anyone wondering about this new mantra can get the summary of what it means for them _as an end user_ without scouring the NG and partially-piecing-it-together-but-not-really from a dozen disparate posts? To be succinct: how about an article? We're not asking for a discussion in this case so much as some frank sharing. D is going to have C++ support. That's cool and compelling as a bare statement, but in what manner? What kinds of things will this allow that were impossible before? How, specifically, do you envision that to look? Can you give example code that you would expect to work when it's "done"? What are the drawbacks you believe forward engineers will have to watch out for? It's okay to not have all the answers and explain that there are parts that may not make it for various reasons. I somewhat feel that you're approaching this situation as if it were all quite obvious. Maybe it is to you? I don't know. But I do know I'm not alone in the dark here. Please bring a lamp. -Wyatt
Oct 06 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/6/14, 7:36 AM, Wyatt wrote:
 To be succinct: how about an article?
An article would be great once we have done something we can show.
 We're not asking for a discussion in this case so much as some frank
 sharing.
Is there anything dishonest about sharing intent about the D programming language in the public forum of the D programming language?
 D is going to have C++ support.  That's cool and compelling as
 a bare statement, but in what manner?
We don't know yet, we're designing it - to wit, we're looking for help regarding exception interoperability.
 What kinds of things will this
 allow that were impossible before?
The exact list is up in the air. We're looking e.g. for the best policy on exceptions. Possible vs. impossible is, btw, a matter of scale: for example, wrapping everything you need from C++ in C functions is possible in the small but impossible at scale.
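A hedged illustration of the "wrapping in C functions" approach mentioned above, and why it doesn't scale: every C++ type and member used across the boundary needs its own hand-written extern "C" shim (function names here are invented):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hand-written C shims exposing one std::vector<int> to a foreign
// language through an opaque pointer. Workable for one type; clearly
// unworkable for a whole C++ codebase - which is the point being made.
extern "C" {
    void* vec_int_create() { return new std::vector<int>(); }
    void vec_int_destroy(void* v) { delete static_cast<std::vector<int>*>(v); }
    void vec_int_push(void* v, int x) {
        static_cast<std::vector<int>*>(v)->push_back(x);
    }
    int vec_int_get(void* v, std::size_t i) {
        return (*static_cast<std::vector<int>*>(v))[i];
    }
    std::size_t vec_int_size(void* v) {
        return static_cast<std::vector<int>*>(v)->size();
    }
}
```

Direct interop, of the kind discussed below for `std::vector<int>`, would make shims like these unnecessary.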
 How, specifically, do you envision
 that to look?
What is "that"?
 Can you give example code that you would expect to work
 when it's "done"?
This was discussed already: we should be able to pass an std::vector<int> by reference/pointer from C++ into D and use it within D directly, with no intervening marshaling.
 What are the drawbacks you believe forward engineers
 will have to watch out for?
What's a forward engineer?
 It's okay to not have all the answers and
 explain that there are parts that may not make it because of various
 reasons.

 I somewhat feel that you're approaching this situation as if it that
 were all quite obvious.  Maybe it is to you?  I don't know. But I do
 know I'm not alone in the dark here.  Please bring a lamp.
It seems this is a simple misunderstanding. You're looking for a virtually finished product (article documenting how it works; analysis; design; projected impact) whereas we're very much at the beginning of all that. What's clear to me is we need better interop with C++, so I've put that up for discussion as soon as reasonable. You're asking for things that are months into the future. Andrei
Oct 06 2014
parent "Wyatt" <wyatt.epp gmail.com> writes:
On Monday, 6 October 2014 at 15:05:31 UTC, Andrei Alexandrescu 
wrote:
 On 10/6/14, 7:36 AM, Wyatt wrote:
 D is going to have C++ support.  That's cool and compelling as
 a bare statement, but in what manner?
We don't know yet, we're designing it
 The exact list is in the air. We're looking e.g. for the best 
 policy on exceptions. Possible vs. impossible is btw a matter 
 of scale, for example wrapping everything you need from C++ in 
 C functions is possible in the small but impossible at scale.
Ah, I see what happened now! The way you've been pushing it, I was given to believe you had something resembling a "grand vision" of how you wanted "C++ interoperability" to work with some proposed syntax and semantics. If not something so grandiose, at least a pool of ideas written down? Or even just a mental list of things you think are important to cover? Regardless, these things ARE important to communicate clearly.
 This was discussed already: we should be able to pass an 
 std::vector<int> by reference/pointer from C++ into D and use 
 it within D directly, with no intervening marshaling.
Awesome, this is a start.
 It seems this is a simple misunderstanding. You're looking for 
 a virtually finished product
It really is a misunderstanding. Heck, I think it still is one because all we're really looking for is some inkling of what's on your agenda at a granularity finer than "C++ and GC". If nothing else, a list like that gets people thinking about a feature ahead of the dedicated thread to discuss it. -Wyatt PS: Come to think of it, I may have been expecting a DIP?
Oct 06 2014
prev sibling parent reply "Joakim" <dlang joakim.fea.st> writes:
On Monday, 6 October 2014 at 13:54:05 UTC, Andrei Alexandrescu 
wrote:
 On 10/6/14, 5:42 AM, Wyatt wrote:
 On Sunday, 5 October 2014 at 16:14:18 UTC, Dicebot wrote:
 No need to explain it here. When I speak about vision I mean 
 something
 that anyone coming to dlang.org page or GitHub repo sees. 
 Something
 that is explained in a bit more details, possibly with code 
 examples.
 I know I am asking much but seeing quick reference for 
 "imagine this
 stuff is implemented, this is how your program code will be 
 affected
 and this is why it is a good thing" could have been huge deal.
 Right now your rationales get lost in forum discussion threads
Jerry Christmas, this right here! Andrei, I know you keep chanting "C++ and GC", and that's cool and all, but its also kind of meaningless because we can't read minds.
I understand. What would be a good venue for discussing such topics? I thought the D language forum would be most appropriate. -- Andrei
Answer: On Friday, 26 September 2014 at 10:22:50 UTC, Joakim wrote:
 It needs to be a page on the wiki or the main site, which you 
 or any user can link to anytime people want to know the plan.

 some sort of public plan of where you want the language to go.  
 All I'm asking for is a public list of preapproved and maybe 
 rejected features that the two of you maintain.  Dfix might be 
 on the preapproved list, ARC might be on the rejected. ;) You 
 could also outline broad priorities like C++ support or GC 
 improvement on such a webpage.
You and Walter do a good job of answering questions on Reddit and there's certainly a lot of discussion on the forum where the two of you chip in, but what's missing is a high-level overview of the co-BDFLs' priorities for where the language is going, including a list of features you'd like to see added, ie that are pre-approved (that doesn't mean _any_ implementation would be approved, only that feature in principle). Most users or wannabe contributors aren't going to go deep-diving through the forums for such direction. Manu and others recently wrote that the traffic on the forum has gone up, making it tougher for them to keep up. It would help if the two co-BDFLs did a better job articulating and communicating their vision in a public place, like a page on the wiki or the dlang.org website, rather than buried in the haystack of the forums or reddit.
Oct 06 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/6/14, 8:08 AM, Joakim wrote:
 You and Walter do a good job of answering questions on Reddit and
 there's certainly a lot of discussion on the forum where the two of you
 chip in, but what's missing is a high-level overview of the co-BDFLs'
 priorities for where the language is going, including a list of features
 you'd like to see added, ie that are pre-approved (that doesn't mean
 _any_ implementation would be approved, only that feature in principle).
Aye, something like that might be helpful. I'll keep it in mind.
 Most users or wannabe contributors aren't going to go deep-diving
 through the forums for such direction.
I'm not so sure that's a problem. It takes some connection to the language milieu before making a major contribution of the kind to be on a "vision" list. And once that connection is present, it's rather clear what the issues are. That's the case for any language.
 Manu and others recently wrote
 that the traffic on the forum has gone up, making it tougher for them to
 keep up.
Yah, there's been some growing pains. Traffic is on the rise and unfortunately the signal to noise ratio isn't. Converting the existing passion and sentiment into workable items is a present challenge I'm thinking of ways to approach.
 It would help if the two co-BDFLs did a better job
 articulating and communicating their vision in a public place, like a
 page on the wiki or dlang.org website, rather than buried in the
 haystack of the forums or reddit.
That's sensible. Andrei
Oct 06 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/5/14, 9:14 AM, Dicebot wrote:
 On Sunday, 5 October 2014 at 15:38:58 UTC, Andrei Alexandrescu wrote:
 1. C++ support is good for attracting companies featuring large C++
 codebases to get into D for new code without disruptions.

 2. Auto-decoding is blown out of proportion and a distraction at this
 time.

 3. Ref counting is necessary again for encouraging adoption. We've
 framed GC as an user education matter for years. We might have even
 been right for the most part, but it doesn't matter. Fact is that a
 large potential user base will simply not consider a GC language.
No need to explain it here. When I speak about vision I mean something that anyone coming to dlang.org page or GitHub repo sees. Something that is explained in a bit more details, possibly with code examples. I know I am asking much but seeing quick reference for "imagine this stuff is implemented, this is how your program code will be affected and this is why it is a good thing" could have been huge deal.
I'm confused. Why would anyone who just comes to dlang.org see unformed ideas and incomplete designs? Wouldn't newcomers be more attracted by e.g. stuff coming in the next release?
 Right now your rationales get lost in forum discussion threads and it is
 hard to understand what really is Next Big Thing and what is just forum
 argument blown out of proportion. There was a go at properties, at
 eliminating destructors, at rvalue references and whatever else I have
 forgotten by now. It all pretty much ended with "do nothing" outcome for
 one reason or the other.
Let's see. We have properties, which indeed need some work done but don't seem to prevent people from getting work done. The discussion on eliminating destructors concluded with "we don't want to do that for good reasons". For binding rvalues Walter has a tentative design that's due for an RFC soon.
 The fact that you don't seem to have a consensus with Walter on some
 topic (auto-decoding, yeah) doesn't help either. Language marketing is
 not about posting links on reddit, it is a very hard work of
 communicating your vision so that it is clear even to random by-passer.
I think one good thing we can do is approach things in private before discussing them publicly.
 2) reliable release base

 I think this is the most important part of the open-source infrastructure
 needed to attract more contributions and something that also belongs to
 the "core team". I understand why Walter was so eager to delegate it, but
 right now the truth is that once Andrew had to temporarily leave, the
 whole release process immediately stalled. And finding a replacement is
 not easy - this task is inherently thankless, as it implies spending time
 and resources on stuff you personally don't need at all.
We now have Martin Nowak as the point of contact.
And what if he gets busy too? :)
Maybe you'll volunteer.
 3) lack of field testing

 Too many new features get added simply because they look theoretically
 sound.
What would those be?
Consider something like `inout`. It is a feature addressing an issue specific to D, and it looked perfectly reasonable when it was introduced. And right now there are some fishy hacks around it even in Phobos (like forced inout delegates in traits) that came from originally unexpected use cases. It is quite likely that re-designing it from scratch based on existing field experience would have yielded better results.
No doubt its design could be done better. But inout was not motivated theoretically. It came from the practical need of not duplicating code over qualifiers.
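For context, the duplication `inout` was designed to remove is the same one C++ code runs into when an accessor must be repeated once per qualifier; a minimal illustration:

```cpp
#include <cassert>

// The qualifier-duplication problem that motivated D's `inout`, in C++
// terms: without language support, an accessor with an identical body
// must still be written once per qualifier.
struct IntBox {
    int value;
    int& front() { return value; }              // mutable overload
    const int& front() const { return value; }  // const overload, same body
};
```

In D, a single `inout` declaration stands in for the mutable, const, and immutable variants at once.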
 Policy-based design is more than one decade old, and older under other
 guises. Reference counting is many decades old. Both have been
 humongous success stories for C++.
Probably I have missed the point where the new proposal was added, but the original one was not using true policy-based design but a set of enum flags instead (no way to use a user-defined policy).
Sean proposed that. In fact that's a very good success story of sharing stuff for discussion sooner rather than later: he answered a Request For Comments with a great comment.
 Reference counting
 experience I am aware of shows that it is both successful in some cases
 and inapplicable for the others. But I don't know of any field
 experience that shows that choosing between RC and GC as a policy is a
 good/sufficient tool to minimize garbage creation in libraries - real
 issue we need to solve that original proposal does not mention at all.
That's totally fine. A good design can always add a few innovations on top of known territory. In fact we've done some experimenting at Facebook with fully collected PHP (currently it's reference counted). RC is well understood as an alternative/complement to tracing. Anyhow, discussion is what the Request For Comments is good for. But please let's focus on actionable stuff. I can't act on vague doubts.
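As a hedged illustration of the reference-counting model under discussion (a generic sketch, not the proposed D design), a minimal intrusive RC handle in C++ shows the determinism GC-averse users are after:

```cpp
#include <cassert>
#include <utility>

// Minimal intrusive reference-counting handle. The property that matters
// in the RC-vs-tracing debate: the payload is destroyed at the exact
// moment the last reference goes away, not at some later collection.
template <typename T>
class RcPtr {
    struct Box { T value; long refs; };
    Box* box_ = nullptr;
public:
    explicit RcPtr(T v) : box_(new Box{std::move(v), 1}) {}
    RcPtr(const RcPtr& o) : box_(o.box_) { if (box_) ++box_->refs; }
    RcPtr& operator=(RcPtr o) { std::swap(box_, o.box_); return *this; }
    ~RcPtr() { if (box_ && --box_->refs == 0) delete box_; }
    T& operator*() const { return box_->value; }
    long use_count() const { return box_ ? box_->refs : 0; }
};
```

Copying bumps the count; destruction is deterministic, unlike a tracing collector's delayed sweep.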
 No need to trust me or anyone, but at some point decisions will be
 made. Most decisions don't make everybody happy. To influence them it
 suffices to argue your case properly. I hope you don't have the
 feeling appeal to authority is used to counter real arguments. I _do_
 trust my authority over someone else's, especially when I'm on hook
 for the decision made. I won't ever say "this is a disaster, but we
 did it because a guy on the forum said it'll work".
I don't want to waste your time arguing about irrelevant things simply because I have misinterpreted how the proposed solution fits the big picture. It is still unclear why the proposed scheme is incompatible with tweaking Phobos utilities into input/output ranges. I am stupid and I am asking for detailed explanations before any arguments can be made :) And not just explanations for me but stuff anyone interested can find quickly.
Again: I don't have a complete design, that's why I'm asking for comments in the Request For Comments threads. Would you rather have me come up alone with a complete design and then show it to the community as a fait accompli? What part of "let's do this together" do I need to clarify?
 This is closely related to SemVer topic. I'd love to see D3. And D4 soon
 after. And probably new prime version increase every year or two. This
 allows to tinker with really big changes without being concerned about
 how it will affect your code in next release.
Sorry, I'm not on board with this. I believe it does nothing but balkanize the community, and there's plenty of evidence from other languages (Perl, Python). Microsoft could afford to do that because they have lock-in with their user base, a monopoly on tooling, and a simple transition story ("give us more money").
You risk balkanization by keeping things as they are. We do have talks at work sometimes about whether simply forking the language may be a more practical approach than pushing the necessary breaking changes upstream by the time the D2 port is complete. Those are just talks of course, and until the porting is done it is all just speculation, but it does indicate a certain level of unhappiness.
It would be terrific if Sociomantic would improve its communication with the community about their experience with D and their needs going forward.
 Don has been mentioning that Sociomantic is all for breaking the code
 for the greater good and I fully agree with him. But introducing such
 surprise solutions creates a huge risk of either sticking with imperfect
 design and patching it (what we have now) or changing same stuff back
 and forth every basic release (and _that_ is bad).
I don't see what is surprising about my vision. It's simple and clear. C++ and GC. C++ and GC.
It came as a surprise. It is unclear how long it will stay. It is unclear what exactly the goal is. Have you ever considered starting a blog about your vision of D development to communicate it better to a wider audience? :)
Yah, apparently there's no shortage of ideas of things I should work on. Perhaps I should do the same. Dicebot, I think you should work on making exceptions refcounted :o). Andrei
Oct 06 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Monday, 6 October 2014 at 16:06:04 UTC, Andrei Alexandrescu 
wrote:
 I'm confused. Why would anyone who just comes to dlang.org see 
 unformed ideas and incomplete designs? Wouldn't newcomers be 
 more attracted by e.g. stuff coming in the next release?
Because he is interested in the language's development direction but does not want to actively participate? It can be someone with a bad early D experience wondering if anything has changed in the last year. Or it can be a developer from some company using D who wants a quick overview of what to expect from the language for the next year or so. For example, I don't have time to follow the Rust mailing lists or GitHub commits, but I do read blog posts by its developers regularly (including speculative ones) to see where it is heading. It is both interesting and educating, and it helps to spread the image among a wider audience as well.
 The fact that you don't seem to have a consensus with Walter 
 on some
 topic (auto-decoding, yeah) doesn't help either. Language 
 marketing is
 not about posting links on reddit, it is a very hard work of
 communicating your vision so that it is clear even to random 
 by-passer.
I think one good thing we can do is approach things in private before discussing them publicly.
Agreed. I don't propose to stop paying attention to the forums or drop all discussions, but to put a bit more effort into popularizing the resulting decisions and plans. So that someone can safely ignore some of the discussions without fearing that they will surprisingly appear in the next release, catching one off guard.
 We now have Martin Nowak as the point of contact.
And what if he gets busy too? :)
Maybe you'll volunteer.
I have already considered that and can be pretty sure this won't ever happen (at least not while it implies paying Apple a single cent). Let's get it straight - I don't care much about D's success in general. It is a nice language to use here and there, and I got an awesome job because of it, but that is pretty much the whole extent of it. There is no way I will ever work on something that is not needed by me only because it is important for the language's success in general. This is pretty much the difference between a language author / core developer and a random contributor, and why handling releases is safer in the hands of the former.
 No doubt its design could be done better. But inout was not 
 motivated theoretically. It came from the practical need of not 
 duplicating code over qualifiers.
I don't mean the feature itself was "theoretical". I mean that it was implemented and released before it got at least some practical usage in live projects with relevant feedback, and thus missed some corner cases.
 Sean proposed that. In fact that's a very good success story of 
 sharing stuff for discussion sooner rather than later: he 
 answered a Request For Comments with a great comment.
Well, when I initially asked the same question (why not user-controllable policies?) you rejected it outright. I must be very bad at wording questions :(
 Again: I don't have a complete design, that's why I'm asking 
 for comments in the Request For Comments threads. Would you 
 rather have me come up alone with a complete design and then 
 show it to the community as a fait accompli? What part of 
 "let's do this together" I need to clarify?
"let's do this together" implies agreeing on some base to further work on. When I come and see that proposed solution does not address a problem I have at all I can't do anything but ask "how is this supposed to address my problem?" because that is _your_ proposal and I am not gifted with telepathy. Especially because you have stated that previous proposal (range-fication) which did fix the issue _for me_ is not on the table anymore.
 You risk balkanization by keeping the things as they are. We 
 do have
 talks at work sometimes that simply forking the language may 
 be a more
 practical approach than pushing necessary breaking changes 
 upstream by
 the time D2 port is complete. Those are just talks of course 
 and until
 porting is done it is all just speculations but it does 
 indicate certain
 level of unhappinness.
It would be terrific if Sociomantic would improve its communication with the community about their experience with D and their needs going forward.
How about someone starts paying attention to what Don posts? That would be an incredible start. I spend a great deal of time both reading this NG (to be aware of what comes next) and writing (to express both personal and Sociomantic concerns), and I have literally no idea what can be done to make the communication clearer.
 Have you ever considered starting a blog about your vision of D
 development to communicate it better to wider audience? :)
Yah, apparently there's no shortage of ideas of things I should work on. Perhaps I should do the same. Dicebot, I think you should work on making exceptions refcounted :o).
As soon as it becomes a priority issue for me or Sociomantic (likely the latter, as I don't do much private D stuff anymore). However, your attempt at sarcasm here does indicate that you have totally missed the point I was stressing in the original comment. Writing a blog post once every few months is hardly an effort comparable to reimplementing exception management, but it is much more important long term because no one but you can do it. In this sense, yes, it is much more pragmatic to wait for someone like me to work on reference-counted exceptions and for you to focus on communication instead. The worst thing that can happen is that nothing gets done, which is still better than something unexpected and disruptive getting done.
Oct 06 2014
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Mon, Oct 06, 2014 at 06:13:41PM +0000, Dicebot via Digitalmars-d wrote:
 On Monday, 6 October 2014 at 16:06:04 UTC, Andrei Alexandrescu wrote:
[...]
It would be terrific if Sociomantic would improve its communication
with the community about their experience with D and their needs
going forward.
How about someone starts paying attention to what Don posts? That could be an incredible start. I spend great deal of time both reading this NG (to be aware of what comes next) and writing (to express both personal and Sociomantic concerns) and have literally no idea what can be done to make communication more clear.
I don't remember who it was, but I'm pretty sure *somebody* at Sociomantic has clearly stated their request recently: please break our code *now*, if it helps to fix language design issues, rather than later. Or, if you'll allow me to paraphrase: pay the one-time cost of broken code now, rather than incur the ongoing cost of needing to continually work around language issues. T -- Too many people have open minds but closed eyes.
Oct 06 2014
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Monday, 6 October 2014 at 18:57:04 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 Or, if you'll allow me to paraphrase it, pay the one-time cost 
 of broken
 code now, rather than incur the ongoing cost of needing to 
 continually
 workaround language issues.
Don in this very thread. Multiple times.
Oct 06 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/6/14, 12:00 PM, Dicebot wrote:
 On Monday, 6 October 2014 at 18:57:04 UTC, H. S. Teoh via Digitalmars-d
 wrote:
 Or, if you'll allow me to paraphrase it, pay the one-time cost of broken
 code now, rather than incur the ongoing cost of needing to continually
 workaround language issues.
Don in this very thread. Multiple times.
He made a few good and very specific points that subsequently saw action. This is the kind of feedback we need more of. -- Andrei
Oct 06 2014
parent "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Monday, 6 October 2014 at 19:08:24 UTC, Andrei Alexandrescu 
wrote:
 On 10/6/14, 12:00 PM, Dicebot wrote:
 On Monday, 6 October 2014 at 18:57:04 UTC, H. S. Teoh via 
 Digitalmars-d
 wrote:
 Or, if you'll allow me to paraphrase it, pay the one-time 
 cost of broken
 code now, rather than incur the ongoing cost of needing to 
 continually
 workaround language issues.
Don in this very thread. Multiple times.
He made a few good and very specific points that subsequently saw action. This is the kind of feedback we need more of. -- Andrei
And here we go again for the multiple alias this: I'm pleased to have seen that it will be merged sooner rather than later. Just to clarify, taking our company as an example: - TDPL is a very good training book for C++/Java minions, and turns them into, well, not-so-good-but-not-so-terrible D programmers. It solves the "boss" perplexity about "there's basically no market for D language programmers: how can we hire them in the future?". For the chronicle, the next lecture is the EXCELLENT "D Templates: a tutorial" by Philippe Sigaud, an invaluable resource (thanks Philippe for that!). - TDPL is exactly what Dicebot wrote: a plan! Having to bet on something, a CTO like me *likes* to bet on a good plan (like the A-Team!) - Being a good plan, and an ambitious one, as a company we scrutinize the efforts devoted to completing it, and that sets the bar for future evaluation of the reliability of _future_ plans and proposals. As an example, the *non-resolution* of the shared qualifier mess has a cost in terms of how reliable we judge other proposed improvements (I know, that may not be fair, but that's it). I'm not saying that the language must be crystallised, and I also understand that as time goes by, other priorities and good ideas may come up. As a company, we don't mind whether we are discussing ARC, GC, or C++ interop, but we care about the effort and time placed on the _taken_ decisions, especially for the _past_ plans, and we judge that care as strictly correlated to language maturity for business adoption. Just my 2c... again, no pun intended! ;-P --- /Paolo
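For readers who haven't followed the "multiple alias this" saga: today D accepts only a single `alias this` declaration per type, which provides implicit subtyping into one other type; the pending work would allow several. A minimal sketch of the single-alias form that exists today (illustrative names, not from the proposal):

```d
struct Celsius
{
    double degrees;
    // A single alias this is allowed today: Celsius implicitly
    // converts to double wherever a double is expected. The
    // "multiple alias this" work would permit more than one
    // such declaration per type.
    alias degrees this;
}

void main()
{
    auto freezing = Celsius(0.0);
    double d = freezing;              // implicit conversion via alias this
    assert(d == 0.0);
    double warmer = freezing + 25.0;  // arithmetic goes through the alias too
    assert(warmer == 25.0);
}
```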
Oct 06 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/6/14, 11:55 AM, H. S. Teoh via Digitalmars-d wrote:
 On Mon, Oct 06, 2014 at 06:13:41PM +0000, Dicebot via Digitalmars-d wrote:
 On Monday, 6 October 2014 at 16:06:04 UTC, Andrei Alexandrescu wrote:
[...]
 It would be terrific if Sociomantic would improve its communication
 with the community about their experience with D and their needs
 going forward.
How about someone starts paying attention to what Don posts? That could be an incredible start. I spend great deal of time both reading this NG (to be aware of what comes next) and writing (to express both personal and Sociomantic concerns) and have literally no idea what can be done to make communication more clear.
I don't remember who it was, but I'm pretty sure *somebody* at Sociomantic has stated clearly their request recently: Please break our code *now*, if it helps to fix language design issues, rather than later.
More particulars would definitely be welcome. I should add that Sociomantic has an interesting position: it's a 100% D shop, so interoperability is not a concern for them, and they did their own GC, so GC-related improvements are unlikely to make a large difference for them. So "C++ and GC" is likely not a high priority for them. -- Andrei
Oct 06 2014
next sibling parent reply "Don" <x nospam.com> writes:
On Monday, 6 October 2014 at 19:07:40 UTC, Andrei Alexandrescu 
wrote:
 On 10/6/14, 11:55 AM, H. S. Teoh via Digitalmars-d wrote:
 On Mon, Oct 06, 2014 at 06:13:41PM +0000, Dicebot via 
 Digitalmars-d wrote:
 On Monday, 6 October 2014 at 16:06:04 UTC, Andrei 
 Alexandrescu wrote:
[...]
 It would be terrific if Sociomantic would improve its 
 communication
 with the community about their experience with D and their 
 needs
 going forward.
How about someone starts paying attention to what Don posts? That could be an incredible start. I spend great deal of time both reading this NG (to be aware of what comes next) and writing (to express both personal and Sociomantic concerns) and have literally no idea what can be done to make communication more clear.
I don't remember who it was, but I'm pretty sure *somebody* at Sociomantic has stated clearly their request recently: Please break our code *now*, if it helps to fix language design issues, rather than later.
More particulars would be definitely welcome. I should add that Sociomantic has an interesting position: it's a 100% D shop so interoperability is not a concern for them, and they did their own GC so GC-related improvements are unlikely to make a large difference for them. So "C++ and GC" is likely not to be high priority for them. -- Andrei
Exactly. C++ support is of no interest at all, and GC is something we contribute to, rather than something we expect from the community. Interestingly, we don't even care much about libraries; we've done everything ourselves. So what do we care about? Mainly, we care about improving the core product. In general I think that in D we have always suffered from spreading ourselves too thin. We've always had a bunch of cool new features that don't actually work properly. Always, the focus shifts to something else before the previous feature is finished. At Sociomantic, we've been successful in our industry using only the features of D1. We're restricted to using D's features from 2007!! Feature-wise, practically nothing from the last seven years has helped us! With something like C++ support, it's only going to win companies over when it is essentially complete. That means that working on it is a huge investment that doesn't start to pay for itself for a very long time. So although it's a great goal, with a huge potential payoff, I don't think that it should be consuming a whole lot of energy right now. And personally, I doubt that many companies would use D, even with perfect C++ interop, if the toolchain stayed at the current level. As I said in my Dconf 2013 talk -- I advocate a focus on Return On Investment. I'd love to see us chasing the easy wins.
Oct 08 2014
next sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 08/10/2014 9:20 pm, "Don via Digitalmars-d" <digitalmars-d puremagic.com>
wrote:
 On Monday, 6 October 2014 at 19:07:40 UTC, Andrei Alexandrescu wrote:
 On 10/6/14, 11:55 AM, H. S. Teoh via Digitalmars-d wrote:
 On Mon, Oct 06, 2014 at 06:13:41PM +0000, Dicebot via Digitalmars-d
wrote:
 On Monday, 6 October 2014 at 16:06:04 UTC, Andrei Alexandrescu wrote:
[...]
 It would be terrific if Sociomantic would improve its communication
 with the community about their experience with D and their needs
 going forward.
How about someone starts paying attention to what Don posts? That could be an incredible start. I spend great deal of time both reading this NG (to be aware of what comes next) and writing (to express both personal and Sociomantic concerns) and have literally no idea what can be done to make communication more clear.
I don't remember who it was, but I'm pretty sure *somebody* at Sociomantic has stated clearly their request recently: Please break our code *now*, if it helps to fix language design issues, rather than later.
More particulars would be definitely welcome. I should add that
Sociomantic has an interesting position: it's a 100% D shop so interoperability is not a concern for them, and they did their own GC so GC-related improvements are unlikely to make a large difference for them. So "C++ and GC" is likely not to be high priority for them. -- Andrei
 Exactly. C++ support is of no interest at all, and GC is something we
contribute to, rather than something we expect from the community.
 Interestingly we don't even care much about libraries, we've done
everything ourselves.
 So what do we care about? Mainly, we care about improving the core
product.
 In general I think that in D we have always suffered from spreading
ourselves too thin. We've always had a bunch of cool new features that don't actually work properly. Always, the focus shifts to something else, before the previous feature was finished.
 At Sociomantic, we've been successful in our industry using only the
features of D1. We're restricted to using D's features from 2007!! Feature-wise, practically nothing from the last seven years has helped us!
 With something like C++ support, it's only going to win companies over
when it is essentially complete. That means that working on it is a huge investment that doesn't start to pay for itself for a very long time. So although it's a great goal, with a huge potential payoff, I don't think that it should be consuming a whole lot of energy right now.
 And personally, I doubt that many companies would use D, even if with
perfect C++ interop, if the toolchain stayed at the current level.
 As I said in my Dconf 2013 talk -- I advocate a focus on Return On
Investment.
 I'd love to see us chasing the easy wins.
As someone who previously represented a business interest, I couldn't agree more. Aside from my random frustrated outbursts on a very small set of language issues, the main thing I've been banging on from day 1 is the tooling. Much has improved, but it's still a long way from 'good'. Debugging, ldc (for windows), and editor integrations (auto complete, navigation, refactoring tools) are my impersonal (and hopefully non-controversial) short list. They trump everything else I've ever complained about. The debugging experience is the worst of any language I've used since the 90's, and I would make that top priority. C++ might have helped us years ago, but I already solved those issues creatively. Debugging can't be solved without tooling and compiler support.
Oct 08 2014
next sibling parent reply "Joakim" <dlang joakim.fea.st> writes:
On Wednesday, 8 October 2014 at 13:55:11 UTC, Manu via 
Digitalmars-d wrote:
 On 08/10/2014 9:20 pm, "Don via Digitalmars-d"
 So what do we care about? Mainly, we care about improving the 
 core
product.
 In general I think that in D we have always suffered from 
 spreading
ourselves too thin. We've always had a bunch of cool new features that don't actually work properly. Always, the focus shifts to something else, before the previous feature was finished.
 And personally, I doubt that many companies would use D, even 
 if with
perfect C++ interop, if the toolchain stayed at the current level. As someone who previously represented a business interest, I couldn't agree more. Aside from my random frustrated outbursts on a very small set of language issues, the main thing I've been banging on from day 1 is the tooling. Much has improved, but it's still a long way from 'good'. Debugging, ldc (for windows), and editor integrations (auto complete, navigation, refactoring tools) are my impersonal (and hopefully non-controversial) short list. They trump everything else I've ever complained about. The debugging experience is the worst of any language I've used since the 90's, and I would make that top priority.
While it would be great if there were a company devoted to such D tooling, it doesn't exist right now. It is completely unrealistic to expect a D community of unpaid volunteers to work on these features for your paid projects. If anybody in the community cared as much about these features as you, they'd have done it already. I suggest you two open bugzilla issues for all these specific bugs or enhancements and put up bounties for their development. If you're not willing to do that, expecting the community to do work for you for free is just whining that is easily ignored.
Oct 08 2014
next sibling parent reply "Jonathan" <jadit2 gmail.com> writes:
My small list of D critiques/wishes from a pragmatic stance:

1) Replace the stop-the-world GC
2) It would be great if dmd could provide a code-hinting 
facility, instead of relying on DCD, which continually breaks for 
me. It would open more doors for editors to support better code 
completion.
3) Taking a hint from the early success of Flash, add Derelict3 
(or some basic OpenGL library) directly into Phobos. Despite some 
of the negatives (slower update cycle versus an external lib), it 
would greatly add to D's attractiveness for new developers. I 
nearly left D after having a host of issues putting Derelict3 into 
my project. If I had this issue, we may be missing out on 
attracting newbies looking to do gfx-related work.

I'm sure this has been talked about, but I'll bring it up anyway:
To focus our efforts, consider switching to ldc. Is it worth 
people's time to continue to optimize DMD when we can accelerate 
our own efforts by relying on an existing compiler? As some have 
pointed out, our community is spread thin over so many efforts... 
perhaps there are ways to consolidate that.

Just my 2cents from a D hobbyist..
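On the GC point: short of replacing the collector, its pauses can at least be corralled today with the controls in `core.memory`. A hedged sketch (the workload here is made up for illustration):

```d
import core.memory : GC;

void main()
{
    GC.disable();            // suppress automatic collections in a hot section
    scope (exit) GC.enable();

    // Allocations still succeed while disabled; they just won't
    // trigger a stop-the-world pause in the middle of the loop.
    int[][] frames;
    foreach (i; 0 .. 100)
        frames ~= new int[](64);

    GC.collect();            // collect explicitly at a convenient boundary
    assert(frames.length == 100);
}
```

This is damage control rather than a fix, which is presumably why a replacement collector tops wish lists like the one above.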
Oct 08 2014
next sibling parent Jeremy Powers via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, Oct 8, 2014 at 12:00 PM, Jonathan via Digitalmars-d <
digitalmars-d puremagic.com> wrote:
...

 3) Taking a hint from the early success of Flash, add Derelict3 (or some
 basic OpenGL library) directly into Phobos. Despite some of the negatives
 (slower update cycle versus external lib), it would greatly add to D's
 attractiveness for new developers. I nearly left D after having a host
 issues putting Derelict3 into my project. If I had this issue, we may be
 missing out from attracting newbies looking to do gfx related work.
Personally I take the opposite view - I'd much prefer a strong and easily consumed third-party library ecosystem than to shove everything into Phobos. Dub is a wonderful thing for D, and needs to be so good that people use it by default.
Oct 08 2014
prev sibling parent "K.K." <trampzy yahoo.com> writes:
On Wednesday, 8 October 2014 at 19:00:44 UTC, Jonathan wrote:
 3) Taking a hint from the early success of Flash, add Derelict3 
 (or some basic OpenGL library) directly into Phobos. Despite 
 some of the negatives (slower update cycle versus external 
 lib), it would greatly add to D's attractiveness for new 
 developers. I nearly left D after having a host issues putting 
 Derelict3 into my project. If I had this issue, we may be 
 missing out from attracting newbies looking to do gfx related 
 work.
This reminds me of an idea I've been pondering for a while now. What if there was a language that came with a standard toolkit for the more fun stuff such as OpenGL? (There could be one already and I just don't know of it.) If we take that idea and try to apply it to D, we sort of get Deimos. The problem is that Deimos is pretty disjointed and only updated every now and then, so as an alternative I suppose there is Derelict. However, Derelict seems to be maintained primarily by one person (he does a great job though!), but Derelict isn't a standard feature (I know Deimos isn't either) and I *personally* don't care much for its heavy leaning on dub. Plus, Derelict isn't always a walk in the park to get running. The alternative I'm suggesting, not by any means a top priority, is to give Deimos a makeover (Derelict could possibly be a big part of this) and turn it into a semi-standard feature. So you can import Phobos modules to do the stuff Phobos normally does, but if you feel like making a quick tool or two, you can import Deimos to get Tcl/Tk like you would in Python, or call OpenGL, or whatever other tool you need (it doesn't have to be a graphics thing). Then at compile time the compiler could just copy or build the required dll's/so's & object files into the specified directory, or whatever works best. On Wednesday, 8 October 2014 at 19:47:05 UTC, Jeremy Powers via Digitalmars-d wrote:
 Personally I take the opposite view - I'd much prefer a strong 
 and easily
 consumed third-party library ecosystem than to shove everything 
 into
 Phobos.  Dub is a wonderful thing for D, and needs to be so 
 good that
 people use it by default.
Not to initiate my biweekly "not a fan of dub" conversation, but just wanna say real quick: Not everyone really likes to use dub. The only thing I like about it, is using it as the build script for a library to get the .lib files and whatnot. Though I don't feel like it really adds a drastic improvement over a .d build script. However, I don't work on any open source libraries, but maybe if I did, I'd like it better then..? Not something I would have an answer to right now, soo... yea:P ---- Aside from what I mentioned above, I'm not sure where I'd like D to be at next to be perfectly honest. Stuff like no GC or C++ integration sound cool, but for me personally they just seem like 'neat' feature, not something that I feel like would change my current workflow for better or worse. Refinement of what's already in place sounds the best if anything. So those are just some passing thoughts of a stranger.. carry on, this thread has been very interesting so far ;P
Oct 11 2014
prev sibling next sibling parent Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 09/10/2014 2:40 am, "Joakim via Digitalmars-d" <
digitalmars-d puremagic.com> wrote:
 On Wednesday, 8 October 2014 at 13:55:11 UTC, Manu via Digitalmars-d
wrote:
 On 08/10/2014 9:20 pm, "Don via Digitalmars-d"
 So what do we care about? Mainly, we care about improving the core
product.
 In general I think that in D we have always suffered from spreading
ourselves too thin. We've always had a bunch of cool new features that don't actually work properly. Always, the focus shifts to something else, before the previous feature was finished.
 And personally, I doubt that many companies would use D, even if with
perfect C++ interop, if the toolchain stayed at the current level. As someone who previously represented a business interest, I couldn't
agree
 more.
 Aside from my random frustrated outbursts on a very small set of language
 issues, the main thing I've been banging on from day 1 is the tooling.
Much
 has improved, but it's still a long way from 'good'.

 Debugging, ldc (for windows), and editor integrations (auto complete,
 navigation, refactoring tools) are my impersonal (and hopefully
 non-controversial) short list. They trump everything else I've ever
 complained about.
 The debugging experience is the worst of any language I've used since the
 90's, and I would make that top priority.
While it would be great if there were a company devoted to such D
tooling, it doesn't exist right now. It is completely unrealistic to expect a D community of unpaid volunteers to work on these features for your paid projects. If anybody in the community cared as much about these features as you, they'd have done it already.
 I suggest you two open bugzilla issues for all these specific bugs or
enhancements and put up bounties for their development. If you're not willing to do that, expecting the community to do work for you for free is just whining that is easily ignored. We're just talking about what we think would best drive adoption. Businesses aren't likely to adopt a language with the understanding that they need to write its tooling. Debugging, code completion and refactoring are all expert tasks that probably require compiler involvement. I know it's easy to say that businesses with budget should contribute more. But it's a tough proposition. Businesses will look to change language if it saves them time and money. If it's going to cost them money, and the state of the tooling is likely to cost them time, then it's not a strong proposition. It's a chicken and egg problem. I'm sure businesses will be happy to contribute financially when it's a risk-free investment; i.e., when it's demonstrated that the stuff works for them.
Oct 08 2014
prev sibling parent reply Danni Coy via Digitalmars-d <digitalmars-d puremagic.com> writes:
 While it would be great if there were a company devoted to such D tooling,
 it doesn't exist right now.  It is completely unrealistic to expect a D
 community of unpaid volunteers to work on these features for your paid
 projects.  If anybody in the community cared as much about these features as
 you, they'd have done it already.
It might be unfair, but it is still a massive problem. The tooling, compared to what I have with, say, C++ and Qt, is not a fun experience. The language is nicer, but the gap in tooling makes that difference seem a lot smaller than it should be.
Oct 09 2014
parent "Dicebot" <public dicebot.lv> writes:
On Thursday, 9 October 2014 at 09:37:29 UTC, Danni Coy via 
Digitalmars-d wrote:
 It might be unfair but it is still a massive problem. The 
 tooling
 compared to what I have with say C++ and Qt is not a fun 
 experience.
 The language is nicer but the difference in tooling is making 
 the
 difference seem a lot smaller than it should be.
This tooling does not exist in C++ magically. It exists there because many people were willing to work on it and many more were willing to pay for it. So unless you personally are ready to do one of those things, those expectations will never come true.
Oct 09 2014
prev sibling parent reply "Atila Neves" <atila.neves gmail.com> writes:
 Debugging, ldc (for windows), and editor integrations (auto 
 complete,
 navigation, refactoring tools) are my impersonal (and hopefully
 non-controversial) short list. They trump everything else I've
I don't know how well DCD works with other editors, but in Emacs at least (when DCD doesn't throw an exception, I need to chase those down), autocomplete and code navigation just work. _Including_ dub dependencies.
 The debugging experience is the worst of any language I've used 
 since the
 90's, and I would make that top priority.
Debugging can definitely be improved on. Even with Iain's fork of gdb I end up using writeln instead because it's frequently easier. Atila
Oct 09 2014
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 09/10/2014 10:15 pm, "Atila Neves via Digitalmars-d" <
digitalmars-d puremagic.com> wrote:
 Debugging, ldc (for windows), and editor integrations (auto complete,
 navigation, refactoring tools) are my impersonal (and hopefully
 non-controversial) short list. They trump everything else I've
I don't know how well DCD works with other editors, but in Emacs at least
(when DCD doesn't throw an exception, I need to chase those down), autocomplete and code navigation just work. _Including_ dub dependencies. I haven't found a way to make use of dub yet; all my projects involve other languages too. Also, I'm a primary windows user, so I haven't tried DCD to any great length. Alexander's auto complete is getting better, but it still gets easily confused, and the refactor and navigation tools are still basically missing. I feel like it would all be much easier if dmd was a lib that tooling could make use of. It seems like a lot of redundant effort to rewrite the language parser over and over when dmd already does it perfectly... Dan Murphy seemed to think ddmd would have some focus on usage as a lib?
 The debugging experience is the worst of any language I've used since the
 90's, and I would make that top priority.
Debugging can definitely be improved on. Even with Iain's fork of gdb I
end up using writeln instead because it's frequently easier. Iain's work doesn't help on windows sadly. But I think the biggest problem is the local scope doesn't seem to be properly expressed in the debug info. It's a compiler problem more than a back end problem. The step cursor is all over the shop, and local variables alias each other and confuse the debugger lots. Also I can't inspect the contents of classes.
Oct 09 2014
parent "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Manu via Digitalmars-d" <digitalmars-d puremagic.com> wrote in message 
news:mailman.559.1412859804.9932.digitalmars-d puremagic.com...

 Dan Murphy seemed to think ddmd would have some focus on usage as a lib?
Yes, but it's a long way off.
Oct 09 2014
prev sibling next sibling parent Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 08/10/2014 11:55 pm, "Manu" <turkeyman gmail.com> wrote:
 On 08/10/2014 9:20 pm, "Don via Digitalmars-d" <
digitalmars-d puremagic.com> wrote:
 On Monday, 6 October 2014 at 19:07:40 UTC, Andrei Alexandrescu wrote:
 On 10/6/14, 11:55 AM, H. S. Teoh via Digitalmars-d wrote:
 On Mon, Oct 06, 2014 at 06:13:41PM +0000, Dicebot via Digitalmars-d
wrote:
 On Monday, 6 October 2014 at 16:06:04 UTC, Andrei Alexandrescu wrote:
[...]
 It would be terrific if Sociomantic would improve its communication
 with the community about their experience with D and their needs
 going forward.
How about someone starts paying attention to what Don posts? That could be an incredible start. I spend great deal of time both reading this NG (to be aware of what comes next) and writing (to express both personal and Sociomantic concerns) and have literally no idea what
can
 be done to make communication more clear.
I don't remember who it was, but I'm pretty sure *somebody* at Sociomantic has stated clearly their request recently: Please break
our
 code *now*, if it helps to fix language design issues, rather than
 later.
More particulars would be definitely welcome. I should add that
Sociomantic has an interesting position: it's a 100% D shop so interoperability is not a concern for them, and they did their own GC so GC-related improvements are unlikely to make a large difference for them. So "C++ and GC" is likely not to be high priority for them. -- Andrei
 Exactly. C++ support is of no interest at all, and GC is something we
contribute to, rather than something we expect from the community.
 Interestingly we don't even care much about libraries, we've done
everything ourselves.
 So what do we care about? Mainly, we care about improving the core
product.
 In general I think that in D we have always suffered from spreading
ourselves too thin. We've always had a bunch of cool new features that don't actually work properly. Always, the focus shifts to something else, before the previous feature was finished.
 At Sociomantic, we've been successful in our industry using only the
features of D1. We're restricted to using D's features from 2007!! Feature-wise, practically nothing from the last seven years has helped us!
 With something like C++ support, it's only going to win companies over
when it is essentially complete. That means that working on it is a huge investment that doesn't start to pay for itself for a very long time. So although it's a great goal, with a huge potential payoff, I don't think that it should be consuming a whole lot of energy right now.
 And personally, I doubt that many companies would use D, even if with
perfect C++ interop, if the toolchain stayed at the current level.
 As I said in my Dconf 2013 talk -- I advocate a focus on Return On
Investment.
 I'd love to see us chasing the easy wins.
As someone who previously represented a business interest, I couldn't
agree more.
 Aside from my random frustrated outbursts on a very small set of language
issues, the main thing I've been banging on from day 1 is the tooling. Much has improved, but it's still a long way from 'good'.
 Debugging, ldc (for windows), and editor integrations (auto complete,
navigation, refactoring tools) are my impersonal (and hopefully non-controversial) short list. They trump everything else I've ever complained about.
 The debugging experience is the worst of any language I've used since the
90's, and I would make that top priority.
C++ might have helped us years ago, but I already solved those issues
creatively. Debugging can't be solved without tooling and compiler support.
 Just to clarify, I'm all for  nogc work; that is very important to us and I appreciate the work, but I suppose I wouldn't rate it top priority. C++ is of no significant value to me personally, or professionally. Game studios don't use much C++, and like I said, we already worked around those edges. I can't speak for Remedy now, but I'm confident that they will *need* ldc working before the game ships. DMD codegen is just not good enough, particularly relating to float; it uses the x87! O_O
Oct 08 2014
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/8/2014 4:17 AM, Don wrote:
 As I said in my Dconf 2013 talk -- I advocate a focus on Return On Investment.
 I'd love to see us chasing the easy wins.
I love the easy wins, too. It'd be great if you'd start a thread about "Top 10 Easy Wins" from yours and Sociomantic's perspective. Note that I've done some work on the deprecations you've mentioned before.
Oct 08 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 8 October 2014 at 20:07:09 UTC, Walter Bright wrote:
 On 10/8/2014 4:17 AM, Don wrote:
 As I said in my Dconf 2013 talk -- I advocate a focus on 
 Return On Investment.
 I'd love to see us chasing the easy wins.
I love the easy wins, too. It'd be great if you'd start a thread about "Top 10 Easy Wins" from yours and Sociomantic's perspective. Note that I've done some work on the deprecations you've mentioned before.
That can possibly be done, though it will take some effort to formalize issues from the casual chat rants. The more important question is: what will happen next? I am pretty sure many of those easy wins are not easy at all, in the sense that breaking changes are needed.
Oct 09 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/9/2014 7:15 AM, Dicebot wrote:
 That can possibly be done though it will take some efforts to formalize issues
 from the casual chat rants. More important issue is - what will happen next? I
 am pretty sure many of those easy wins are not easy at all in a sense that
 breaking changes are needed.
That's why I'd like to know what Don has in mind.
Oct 09 2014
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/8/14, 4:17 AM, Don wrote:
 On Monday, 6 October 2014 at 19:07:40 UTC, Andrei Alexandrescu wrote:
 More particulars would be definitely welcome. I should add that
 Sociomantic has an interesting position: it's a 100% D shop so
 interoperability is not a concern for them, and they did their own GC
 so GC-related improvements are unlikely to make a large difference for
 them. So "C++ and GC" is likely not to be high priority for them. --
 Andrei
Exactly. C++ support is of no interest at all, and GC is something we contribute to, rather than something we expect from the community.
That's awesome, thanks!
 Interestingly we don't even care much about libraries, we've done
 everything ourselves.

 So what do we care about? Mainly, we care about improving the core product.

 In general I think that in D we have always suffered from spreading
 ourselves too thin. We've always had a bunch of cool new features that
 don't actually work properly. Always, the focus shifts to something
 else, before the previous feature was finished.

 At Sociomantic, we've been successful in our industry using only the
 features of D1. We're restricted to using D's features from 2007!!
 Feature-wise, practically nothing from the last seven years has helped us!

 With something like C++ support, it's only going to win companies over
 when it is essentially complete. That means that working on it is a huge
 investment that doesn't start to pay for itself for a very long time. So
 although it's a great goal, with a huge potential payoff, I don't think
 that it should be consuming a whole lot of energy right now.

 And personally, I doubt that many companies would use D, even if with
 perfect C++ interop, if the toolchain stayed at the current level.
That speculation turns out to not be true for Facebook. My turn to speculate - many other companies have existing codebases in C++, so Sociomantic is "special".
 As I said in my Dconf 2013 talk -- I advocate a focus on Return On
 Investment.
 I'd love to see us chasing the easy wins.
That's of course good, but the reality is we're in a complicated trade-off space with "important", "urgent", "easy to do", "return on investment", "resource allocation" as axes. An example of the latter - ideally we'd put Walter on the more difficult tasks and others on the easy wins. Walter working on improving documentation might not be the best use of his time, although better documentation is an easy win. Andrei
Oct 08 2014
parent "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Wednesday, 8 October 2014 at 20:35:05 UTC, Andrei Alexandrescu 
wrote:
 On 10/8/14, 4:17 AM, Don wrote:
 On Monday, 6 October 2014 at 19:07:40 UTC, Andrei Alexandrescu 
 wrote:

 And personally, I doubt that many companies would use D, even 
 if with
 perfect C++ interop, if the toolchain stayed at the current 
 level.
That speculation turns out to not be true for Facebook. My turn to speculate - many other companies have existing codebases in C++, so Sociomantic is "special".
Well, IMHO, when discussing 'strategies', pretty much everything is speculation... C++ interop can also be attractive when you need to start a new project and you need C++ libs. But the point is that, again IMHO, you tend to conflate Facebook's needs with D's needs (I know I'll receive pain back for this ;-). Sociomantic is not so special at all in not having a previous C++ codebase: I personally know plenty of cases like that. But if D doesn't stop thinking about "new features" while never finishing the previous plans, well, my speculation is: I don't know about future adopters, but for sure it's scaring off actual adopters; and that, for sure, is based on what we feel here at the SR Labs company.
 That's of course good, but the reality is we're in a 
 complicated trade-off space with "important", "urgent", "easy 
 to do", "return on investment", "resource allocation" as axes. 
 An example of the latter - ideally we'd put Walter on the more 
 difficult tasks and others on the easy wins. Walter working on 
 improving documentation might not be the best use of his time, 
 although better documentation is an easy win.
Well, I've read your and Walter's comments on the multiple alias this PR, so good: but the point is that it was the community that pushed both of you onto that track; it's symptomatic of an attitude. And now, shields up, Mr. Sulu! -- /Paolo
Oct 08 2014
prev sibling parent "yawniek" <dlang srtnwz.com> writes:
 Exactly. C++ support is of no interest at all, and GC is 
 something we contribute to, rather than something we expect 
 from the community.
 Interestingly we don't even care much about libraries, we've 
 done everything ourselves.

 So what do we care about? Mainly, we care about improving the 
 core product.

 In general I think that in D we have always suffered from 
 spreading ourselves too thin. We've always had a bunch of cool 
 new features that don't actually work properly. Always, the 
 focus shifts to something else, before the previous feature was 
 finished.

 At Sociomantic, we've been successful in our industry using 
 only the features of D1. We're restricted to using D's features 
 from 2007!! Feature-wise, practically nothing from the last 
 seven years has helped us!

 With something like C++ support, it's only going to win 
 companies over when it is essentially complete. That means that 
 working on it is a huge investment that doesn't start to pay 
 for itself for a very long time. So although it's a great goal, 
 with a huge potential payoff, I don't think that it should be 
 consuming a whole lot of energy right now.

 And personally, I doubt that many companies would use D, even 
 if with perfect C++ interop, if the toolchain stayed at the 
 current level.

 As I said in my Dconf 2013 talk -- I advocate a focus on Return 
 On Investment.
 I'd love to see us chasing the easy wins.
disclaimer: I am rather new to D and thus have a bit of a distant view.

I think the above touches an important point. One thing Go does right is that they focused on a feature-rich stdlib/library ecosystem even though the language was very young.

I'm coming from Ruby/Python, and the reason I use those languages is that they have two things:
a) they are fun to use (as Andrei said in the FLOSS interview: the creators had "taste").
b) a huge set of libs that help me get stuff done.

Now I think a) is fine, but with b) I am not sure the strategy of pursuing full C/C++ interop won't take too long and scare off those people who are not coming from C/C++.

I think D is a fantastic tool to write expressive, fast and readable code. I don't need many more language features (again, look at Go...) but a solid foundation of libs to gain "competitive advantage" in my daily work.
Oct 11 2014
prev sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Monday, 6 October 2014 at 19:07:40 UTC, Andrei Alexandrescu 
wrote:
 On 10/6/14, 11:55 AM, H. S. Teoh via Digitalmars-d wrote:
 On Mon, Oct 06, 2014 at 06:13:41PM +0000, Dicebot via 
 Digitalmars-d wrote:
 On Monday, 6 October 2014 at 16:06:04 UTC, Andrei 
 Alexandrescu wrote:
[...]
 It would be terrific if Sociomantic would improve its 
 communication
 with the community about their experience with D and their 
 needs
 going forward.
How about someone starts paying attention to what Don posts? That could be an incredible start. I spend great deal of time both reading this NG (to be aware of what comes next) and writing (to express both personal and Sociomantic concerns) and have literally no idea what can be done to make communication more clear.
I don't remember who it was, but I'm pretty sure *somebody* at Sociomantic has stated clearly their request recently: Please break our code *now*, if it helps to fix language design issues, rather than later.
More particulars would be definitely welcome. I should add that Sociomantic has an interesting position: it's a 100% D shop so interoperability is not a concern for them, and they did their own GC so GC-related improvements are unlikely to make a large difference for them. So "C++ and GC" is likely not to be high priority for them. -- Andrei
Yes, and this is exactly why I am so concerned about the recent memory management policy thread. Don has already stated it in his talks but I will repeat the important points:

1) We don't try to avoid GC in any way
2) However, it is critical for performance to avoid creating garbage in the form of new GC roots
3) The worst part of Phobos is not GC allocation as such but specifically the large amount of temporary garbage allocations

This is a very simple issue that will prevent us from using and contributing to the majority of Phobos even when the D2 port is finished. The switch to input/output ranges as API fundamentals was supposed to fix it. Custom management policies as you propose won't fix it at all, because the garbage will still be there, simply managed in a different way. This is especially disappointing because it was the first time a declared big effort seemed to help our needs, but it got abandoned after the very first attempts.
Oct 09 2014
next sibling parent reply "ixid" <adamsibson hotmail.com> writes:
Dicebot wrote:

 Switch to input/output ranges as API fundamentals was supposed 
 to fix it. Custom management policies as you propose won't fix 
 it at all because garbage will still be there, simply managed 
 in a different way.
Would it be impractical to support multiple approaches through templates? There seemed to be clear use cases where supplying memory to a function was a good idea and some where it wasn't.
Oct 09 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 9 October 2014 at 14:38:08 UTC, ixid wrote:
 Dicebot wrote:

 Switch to input/output ranges as API fundamentals was supposed 
 to fix it. Custom management policies as you propose won't fix 
 it at all because garbage will still be there, simply managed 
 in a different way.
Would it be impractical to support multiple approaches through templates? There seemed to be clear use cases where supplying memory to a function was a good idea and some where it wasn't.
Multiple approaches for what? Andrei's proposal is not fundamentally incompatible with range-ification or using out array parameters; it simply moves the focus in a different direction (which is of no use to us). Looking at http://wiki.dlang.org/Stuff_in_Phobos_That_Generates_Garbage I also feel that ranges + reusable exception pools (which need refcounting for exceptions to implement) alone can take care of the majority of the issue, the new proposal being more of a niche thing.
Oct 09 2014
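The "reusable exception pool" idea mentioned above can be sketched roughly as follows. This is a hypothetical illustration with made-up names (`ReusableException`, `reset`), not Sociomantic's actual API: the exception object is allocated once and re-armed in place before each throw, so steady-state error handling creates no new GC roots (the refcounting Dicebot mentions would be needed to know when a pooled exception may safely be reused).

```d
// Hedged sketch; names are hypothetical.
class ReusableException : Exception
{
    this() @safe pure nothrow
    {
        super("");
    }

    /// Overwrite the message and location in place, then return
    /// `this` so the call site can `throw` the same object again.
    ReusableException reset(string msg,
                            string file = __FILE__, size_t line = __LINE__)
    {
        this.msg  = msg;   // Throwable.msg/file/line are plain mutable fields
        this.file = file;
        this.line = line;
        return this;
    }
}
```

A long-lived worker would keep one instance around and do `throw cached.reset("parse error");` on its hot path instead of `throw new Exception(...)`.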
parent reply "ixid" <adamsibson hotmail.com> writes:
On Thursday, 9 October 2014 at 14:47:00 UTC, Dicebot wrote:
 On Thursday, 9 October 2014 at 14:38:08 UTC, ixid wrote:
 Dicebot wrote:

 Switch to input/output ranges as API fundamentals was 
 supposed to fix it. Custom management policies as you propose 
 won't fix it at all because garbage will still be there, 
 simply managed in a different way.
Would it be impractical to support multiple approaches through templates? There seemed to be clear use cases where supplying memory to a function was a good idea and some where it wasn't.
Multiple approaches for what? Adnrei proposal is not fundamentally incompatible with range-fication our using our array parameters, it simply moves the focus in a different direction (which is of no use to us). Looking at http://wiki.dlang.org/Stuff_in_Phobos_That_Generates_Garbage I also feel that ranges + reusable exceptions pools (needs refcounting for exceptions to implement) alone can take care of majority of issue, new proposal being more of a niche thing.
Multiple approaches to how library functions can handle memory.
Oct 09 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 9 October 2014 at 15:00:02 UTC, ixid wrote:
 Multiple approaches to how library functions can handle memory.
As long as it allows us to avoid creating new GC roots while keeping the GC for all allocations at the same time.
Oct 09 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdan.org> writes:
"Dicebot" <public dicebot.lv> wrote:
 On Thursday, 9 October 2014 at 15:00:02 UTC, ixid wrote:
 Multiple approaches to how library functions can handle memory.
As long as it allows us avoid creating new GC roots and keep using GC for all allocations at the same time.
To clarify: calling GC.free does remove the root, correct?
Oct 09 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 9 October 2014 at 15:59:12 UTC, Andrei Alexandrescu 
wrote:
 "Dicebot" <public dicebot.lv> wrote:
 On Thursday, 9 October 2014 at 15:00:02 UTC, ixid wrote:
 Multiple approaches to how library functions can handle 
 memory.
As long as it allows us avoid creating new GC roots and keep using GC for all allocations at the same time.
To clarify: calling GC.free does remove the root, correct?
Not before it creates one. When I say "avoid creating new GC roots" I mean "no GC activity at all other than extending existing chunks".
Oct 09 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/9/14, 9:00 AM, Dicebot wrote:
 On Thursday, 9 October 2014 at 15:59:12 UTC, Andrei Alexandrescu wrote:
 "Dicebot" <public dicebot.lv> wrote:
 On Thursday, 9 October 2014 at 15:00:02 UTC, ixid wrote:
 Multiple approaches to how library functions can handle memory.
As long as it allows us avoid creating new GC roots and keep using GC for all allocations at the same time.
To clarify: calling GC.free does remove the root, correct?
Not before it creates one. When I mean "avoid creating new GC roots" I mean "no GC activity at all other than extending existing chunks"
That's interesting. So GC.malloc followed by GC.free does actually affect things negatively? Also let's note that extending existing chunks may result in new allocations. Andrei
Oct 09 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 9 October 2014 at 16:22:52 UTC, Andrei Alexandrescu 
wrote:
 To clarify: calling GC.free does remove the root, correct?
Not before it creates one. When I mean "avoid creating new GC roots" I mean "no GC activity at all other than extending existing chunks"
That's interesting. So GC.malloc followed by GC.free does actually affect things negatively?
Yes, and quite notably so, as GC.malloc can potentially trigger a collection. With a concurrent GC a collection is not a disaster, but it still affects latency and should be avoided.
 Also let's note that extending existing chunks may result in 
 new allocations.
Yes. But as those chunks never get free'd, it comes to an O(1) allocation count over the process lifetime, with most allocations happening during program startup / warmup. Don mentioned this as one of the important points during his DConf 2014 talk, but it probably didn't catch as much attention as it should have.
Oct 09 2014
parent reply "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Thursday, 9 October 2014 at 17:29:01 UTC, Dicebot wrote:
 On Thursday, 9 October 2014 at 16:22:52 UTC, Andrei 
 Alexandrescu wrote:
 To clarify: calling GC.free does remove the root, correct?
Not before it creates one. When I mean "avoid creating new GC roots" I mean "no GC activity at all other than extending existing chunks"
That's interesting. So GC.malloc followed by GC.free does actually affect things negatively?
Yes and quite notably so as GC.malloc can potentially trigger collection. With concurrent GC collection is not a disaster but it still affects the latency and should be avoided.
Is it just the potentially triggered collection, or is the actual allocation+deallocation too expensive? Because the effects of the former can of course be reduced greatly by tweaking the GC to not collect every time the heap needs to grow, at the cost of slightly more memory consumption. If it's the latter, that would indicate that maybe a different allocator with less overhead needs to be used.
 Also let's note that extending existing chunks may result in 
 new allocations.
Yes. But as those chunks never get free'd it comes to O(1) allocation count over process lifetime with most allocations happening during program startup / warmup.
Hmm... but shouldn't this just as well apply to the temporary allocations? After some warming up phase, the available space on the heap should be large enough that all further temporary allocations can be satisfied without growing the heap.
Oct 10 2014
next sibling parent "eles" <eles eles.com> writes:
On Friday, 10 October 2014 at 08:45:38 UTC, Marc Schütz wrote:
 On Thursday, 9 October 2014 at 17:29:01 UTC, Dicebot wrote:
 On Thursday, 9 October 2014 at 16:22:52 UTC, Andrei 
 Alexandrescu wrote:
I think the worst of D is summarized quite well by the following: http://forum.dlang.org/post/m15i9c$51b$1 digitalmars.com http://forum.dlang.org/post/54374DE0.6040405 digitalmars.com And that is: the focus is no longer to do the right things about D, but to make D do the right thing for some shadowy customers that don't even care enough to come here and grumble. Let's put D2 in maintenance mode and take on D3 with this nice motto, taken from here: http://forum.dlang.org/post/3af85684-7728-4165-acf9-520a240f65e0 me.com "why didn't work like that from the beginning?"
Oct 10 2014
prev sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Friday, 10 October 2014 at 08:45:38 UTC, Marc Schütz wrote:
 Yes and quite notably so as GC.malloc can potentially trigger 
 collection. With concurrent GC collection is not a disaster 
 but it still affects the latency and should be avoided.
Is it just the potentially triggered collection, or is the actual allocation+deallocation too expensive?
collection - for sure. allocation+deallocation - maybe, I have never measured it. It is surely slower than not allocating at all though.
 Because the effects of the former can of course be reduced 
 greatly by tweaking the GC to not collect every time the heap 
 needs to grow, at the cost of slightly more memory consumption.
This is likely to work better but still will be slower than our current approach because of tracking many small objects. Though of course it is just speculation until RC stuff is implemented for experiments.
 Also let's note that extending existing chunks may result in 
 new allocations.
Yes. But as those chunks never get free'd it comes to O(1) allocation count over process lifetime with most allocations happening during program startup / warmup.
Hmm... but shouldn't this just as well apply to the temporary allocations? After some warming up phase, the available space on the heap should be large enough that all further temporary allocations can be satisfied without growing the heap.
I am not speaking about O(1) internal heap increases but about O(1) GC.malloc calls.

The typical pattern is to encapsulate a "temporary" buffer with the algorithm in a single class object and never release it, reusing it for new incoming requests (wiping the buffer data each time). Such a buffer quickly gets to the point where it is large enough to contain all the algorithm's temporaries for a single request and never touches the GC from there.

In a well-written program which follows such a pattern there are close to zero temporaries, and the GC only manages more persistent entities like cache elements.
Oct 10 2014
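The reuse pattern Dicebot describes could be sketched like this (a simplified stand-in with a made-up name, `RequestProcessor`, not their production code): the algorithm object owns a scratch buffer that is wiped and refilled per request, so after a warm-up phase each request runs without creating new GC roots.

```d
// Hedged sketch of the buffer-reuse pattern; names are hypothetical.
class RequestProcessor
{
    private char[] scratch;  // allocated lazily, then reused forever

    const(char)[] process(const(char)[] request)
    {
        scratch.length = 0;          // wipe previous contents...
        scratch.assumeSafeAppend();  // ...but tell the runtime the old
                                     // capacity may be overwritten in place
        scratch ~= "response:";      // these appends reuse the capacity
        scratch ~= request;          // grown on earlier requests
        return scratch;
    }
}
```

Once `scratch` has grown to the high-water mark of request sizes, `process` stops allocating entirely, which matches the O(1)-allocations-over-process-lifetime behaviour described above.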
parent reply "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Saturday, 11 October 2014 at 03:39:10 UTC, Dicebot wrote:
 I am not speaking about O(1) internal heap increases but O(1) 
 GC.malloc calls
 Typical pattern is to encapsulate "temporary" buffer with the 
 algorithm in a single class object and never release it, 
 reusing with new incoming requests (wiping the buffer data each 
 time). Such buffer quickly gets to the point where it is large 
 enough to contain all algorithm temporaries for a single 
 request and never touches GC from there.

 In a well-written program which follows such pattern there are 
 close to zero temporaries and GC only manages more persistent 
 entities like cache elements.
I understand that. My argument is that the same should apply to the entire heap: After you've allocated and released a certain amount of objects via GC.malloc() and GC.free(), the heap will have grown to a size large enough that any subsequent allocations of temporary objects can be satisfied from the existing heap without triggering a collection, so that only the overhead of actual allocation and freeing should be relevant.
Oct 11 2014
parent "Dicebot" <public dicebot.lv> writes:
On Saturday, 11 October 2014 at 09:26:28 UTC, Marc Schütz wrote:
 I understand that. My argument is that the same should apply to 
 the entire heap: After you've allocated and released a certain 
 amount of objects via GC.malloc() and GC.free(), the heap will 
 have grown to a size large enough that any subsequent 
 allocations of temporary objects can be satisfied from the 
 existing heap without triggering a collection, so that only the 
 overhead of actual allocation and freeing should be relevant.
But it still requires the GC to check its pool state upon each request and make the relevant adjustments for each malloc/free combo. For something like a hundred temporary allocations per request, that accumulates into notable time (and milliseconds matter). In the absence of collection a single malloc call is cheap enough on its own, but not cheap enough to ignore the cost of many calls. You have interested me in doing some related benchmarks though.
Oct 11 2014
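Such a benchmark could be sketched along these lines (assumptions: druntime's GC, a single thread, and `std.datetime.stopwatch` being available; the numbers will vary wildly by GC implementation, so treat this as a measurement harness, not a result):

```d
import core.memory : GC;
import std.datetime.stopwatch : StopWatch, AutoStart;
import std.stdio : writeln;

void main()
{
    enum N = 100_000;

    // Case 1: a malloc/free pair per "request" - each malloc must
    // inspect pool state and can potentially trigger a collection.
    auto sw = StopWatch(AutoStart.yes);
    foreach (i; 0 .. N)
    {
        auto p = GC.malloc(64);
        GC.free(p);
    }
    writeln("malloc/free pairs: ", sw.peek);

    // Case 2: one retained buffer, wiped and refilled per "request".
    ubyte[64] chunk;
    ubyte[] buf;
    sw.reset();
    foreach (i; 0 .. N)
    {
        buf.length = 0;
        buf.assumeSafeAppend();
        buf ~= chunk[];  // reuses existing capacity after the first iteration
    }
    writeln("buffer reuse:      ", sw.peek);
}
```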
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/9/14, 7:09 AM, Dicebot wrote:
 Yes and this is exactly why I am that concerned about recent memory
 management policy thread. Don has already stated it in his talks but I
 will repeat important points:

 1) We don't try to avoid GC in any way
 2) However it is critical for performance to avoid creating garbage in a
 form of new GC roots
 3) Worst part of Phobos is not GC allocations but specifically lot of
 temporarily garbage allocations
What would be a few good examples of (3)? Thanks.
 This is a very simple issue that will prevent us from using and
 contributing to majority of Phobos even when D2 port is finished.

 Switch to input/output ranges as API fundamentals was supposed to fix
 it.
Unfortunately it doesn't. RC does. Lazy computation relies on escaping ranges all over the place (i.e. as fields inside structs implementing the lazy computation). If there's no way to track those many tidbits, resources cannot be reclaimed timely. Walter and I have only recently achieved clarity on this.
 Custom management policies as you propose won't fix it at all
 because garbage will still be there, simply managed in a different way.
I'm not sure I understand this.
 This is especially dissapointing because it was a first time when
 declared big effort seemed to help our needs but it got abandoned after
 very first attempts.
It hasn't gotten abandoned; it's on the back burner. Lazification is a good thing to do, but won't get us closer to taking the garbage out. Andrei
Oct 09 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 9 October 2014 at 15:32:06 UTC, Andrei Alexandrescu 
wrote:
 On 10/9/14, 7:09 AM, Dicebot wrote:
 Yes and this is exactly why I am that concerned about recent 
 memory
 management policy thread. Don has already stated it in his 
 talks but I
 will repeat important points:

 1) We don't try to avoid GC in any way
 2) However it is critical for performance to avoid creating 
 garbage in a
 form of new GC roots
 3) Worst part of Phobos is not GC allocations but specifically 
 lot of
 temporarily garbage allocations
What would be a few good examples of (3)? Thanks.
The infamous setExtension (https://github.com/D-Programming-Language/phobos/blob/master/std/path.d#L843) immediately comes to mind. Usage of concatenation operators there allocates a new GC root for the new string.
 This is a very simple issue that will prevent us from using and
 contributing to majority of Phobos even when D2 port is 
 finished.

 Switch to input/output ranges as API fundamentals was supposed 
 to fix
 it.
Unfortunately it doesn't. RC does. Lazy computation relies on escaping ranges all over the place (i.e. as fields inside structs implementing the lazy computation). If there's no way to track those many tidbits, resources cannot be reclaimed timely.
Are you trying to tell me the programs I work with do not exist? :)

Usage of an output range is simply a generalization of the out array parameter used in both Tango and our code. It is _already_ proven to work for our cases. Usage of input ranges is less important, but it fits the existing Phobos style better.

We also don't usually reclaim resources. Our applications usually work by growing a constant set of buffers to the point where they can handle the routine workload, and staying there with almost 0 GC activity.

I don't understand the statement about storing the ranges. The way I have it in mind, ranges are a tool for algorithm composition. Once you want to store one as a struct field, you force range evaluation via an output range and store the resulting allocated buffer. In user code.
 Custom management policies as you propose won't fix it at all
 because garbage will still be there, simply managed in a 
 different way.
I'm not sure I understand this.
Typical pattern from existing D1 code:

// bad
auto foo(char[] arg)
{
    return arg ~ "aaa";
}

vs

// good
auto foo(char[] arg, ref char[] result)
{
    result.length = arg.length + 3; // won't allocate if it already has capacity
    result[0 .. arg.length] = arg[];
    result[arg.length .. arg.length + 3] = "aaa"[];
}

It doesn't matter if the first snippet allocates a GC root or a ref-counted root. We need the version that does not allocate a new root at all (the second snippet).
Oct 09 2014
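The setExtension complaint can be illustrated with a simplified stand-in (the hypothetical name `withExtension` and both signatures are made up; the real std.path API differs): the first form manufactures a fresh string, i.e. a new GC root, on every call, while the sink-based form lets the caller reuse one buffer across calls.

```d
import std.range.primitives : put;

// Allocating style: every call concatenates into a brand-new string.
string withExtension(const(char)[] path, const(char)[] ext)
{
    return cast(string)(path ~ '.' ~ ext);  // new GC root per call
}

// Output-range style: the caller decides where the bytes go, e.g. a
// reused Appender, so steady-state calls need not allocate at all.
void withExtension(Sink)(ref Sink sink, const(char)[] path, const(char)[] ext)
{
    put(sink, path);
    put(sink, '.');
    put(sink, ext);
}
```

With `auto sink = appender!(char[])();` the second overload fills `sink`, and that buffer can be wiped and reused for the next path - exactly the "no new roots" property asked for above.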
next sibling parent reply Johannes Pfau <nospam example.com> writes:
Am Thu, 09 Oct 2014 15:57:15 +0000
schrieb "Dicebot" <public dicebot.lv>:

 Unfortunately it doesn't. RC does. Lazy computation relies on 
 escaping ranges all over the place (i.e. as fields inside 
 structs implementing the lazy computation). If there's no way 
 to track those many tidbits, resources cannot be reclaimed 
 timely.  
Are you trying to tell me programs I work with do not exist? :) Usage of output range is simply a generalization of out array parameter used in both Tango and our code. It is _already_ proved to work for our cases. Usage of input ranges is less important but it fits existing Phobos style better. We also don't usually reclaim resources. Out application usually work by growing constant amount of buffers to the point where they can handle routine workload and staying there will almost 0 GC activity. I don't understand statement about storing the ranges. The way I have it in mind ranges are tool for algorithm composition. Once you want to store it as a struct field you force range evaluation via output range and store resulting allocated buffer. In user code.
I think Andrei means that at some point you have to 'store the range', or create an (often dynamic) array from the range, and then you still need some sort of memory management. RC will be nice for that if you don't want to use the GC; you can also store the result in a stack buffer or use manual memory management. But ranges alone do not solve this problem, and forcing everyone to do manual memory management is not a good replacement for the GC, so we need the discussed RC scheme. At least this is how I understood Andrei's point.
Oct 09 2014
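The point about storing ranges can be made concrete with a small hedged sketch (made-up names): the lazy pipeline itself allocates nothing, but the moment its result is stored in a field, some allocation strategy - GC, RC, or a manually managed buffer - has to be picked.

```d
import std.algorithm : filter, map;
import std.array : array;

struct Config
{
    int[] evens;  // storing the result forces evaluation somewhere
}

Config makeConfig(int[] input)
{
    // Lazy composition: no allocation happens on this line.
    auto pipeline = input.filter!(x => x % 2 == 0)
                         .map!(x => x * 2);

    // Storing forces a choice: here `.array` picks GC allocation;
    // an RC'd container or a reused caller-owned buffer are the
    // alternatives the thread is debating.
    return Config(pipeline.array);
}
```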
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/9/14, 9:27 AM, Johannes Pfau wrote:
 Am Thu, 09 Oct 2014 15:57:15 +0000
 schrieb "Dicebot" <public dicebot.lv>:

 Unfortunately it doesn't. RC does. Lazy computation relies on
 escaping ranges all over the place (i.e. as fields inside
 structs implementing the lazy computation). If there's no way
 to track those many tidbits, resources cannot be reclaimed
 timely.
Are you trying to tell me programs I work with do not exist? :) Usage of output range is simply a generalization of out array parameter used in both Tango and our code. It is _already_ proved to work for our cases. Usage of input ranges is less important but it fits existing Phobos style better. We also don't usually reclaim resources. Out application usually work by growing constant amount of buffers to the point where they can handle routine workload and staying there will almost 0 GC activity. I don't understand statement about storing the ranges. The way I have it in mind ranges are tool for algorithm composition. Once you want to store it as a struct field you force range evaluation via output range and store resulting allocated buffer. In user code.
I think Andrei means at some point you have to 'store the range' or create an (often dynamic) array from the range and then you still need some sort of memory management. Ultimately you still need some sort of memory management there and RC will be nice for that if you don't want to use the GC. You can also store the range to a stack buffer or use manual memory management. But ranges alone do not solve this problem and forcing everyone to do manual memory management is not a good replacement for GC, so we need the discussed RC scheme. At least this is how I understood Andrei's point.
Yes, that's accurate. Thanks! -- Andrei
Oct 09 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/9/14, 8:57 AM, Dicebot wrote:
 On Thursday, 9 October 2014 at 15:32:06 UTC, Andrei Alexandrescu wrote:
 Unfortunately it doesn't. RC does. Lazy computation relies on escaping
 ranges all over the place (i.e. as fields inside structs implementing
 the lazy computation). If there's no way to track those many tidbits,
 resources cannot be reclaimed timely.
Are you trying to tell me programs I work with do not exist? :)
In all likelihood it's a small misunderstanding.
 Usage of output range is simply a generalization of out array parameter
 used in both Tango and our code. It is _already_ proved to work for our
 cases.
Got it. Output ranges work great with unstructured/linear outputs - preallocate an array, fill it with stuff, all's nice save for the occasional reallocation when things don't fit etc. With structured outputs there are a lot more issues to address: one can think of a JSONObject as an output range with put() but that's only moving the real issues around. How would the JSONObject allocate memory internally, give it out to its own users, and dispose of it timely, all in good safety? That's why JSON tokenization is relatively easy to do lazily/with output ranges, but full-blown parsing becomes a different proposition. Andrei
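To make the linear-output case concrete, here is a minimal, hypothetical sketch of a sink-agnostic formatter in D (`formatInt` is an invented name, not a Phobos function): the caller supplies the output range, so the caller alone decides whether the memory comes from the GC, a stack buffer, or elsewhere.

```d
import std.array : appender;
import std.range.primitives : put;

// Hypothetical sketch: format an int into any output range. The caller
// picks the sink (GC-backed appender, stack buffer, ...), so this
// function performs no allocation of its own.
void formatInt(Sink)(ref Sink sink, int v)
{
    if (v < 0) { put(sink, '-'); v = -v; }
    char[10] tmp;                       // scratch space on the stack
    size_t i = tmp.length;
    do { tmp[--i] = cast(char)('0' + v % 10); v /= 10; } while (v);
    put(sink, tmp[i .. $]);
}

void main()
{
    auto app = appender!string();
    formatInt(app, -42);
    assert(app.data == "-42");
}
```

With appender the result lands in a GC-backed string; handing in a fixed-size sink instead would keep the whole call allocation-free, which is the distinction being drawn here between linear and structured outputs.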
Oct 09 2014
next sibling parent "Dicebot" <public dicebot.lv> writes:
On Thursday, 9 October 2014 at 16:41:22 UTC, Andrei Alexandrescu 
wrote:
 Usage of output range is simply a generalization of out array 
 parameter
 used in both Tango and our code. It is _already_ proved to 
 work for our
 cases.
Got it. Output ranges work great with unstructured/linear outputs - preallocate an array, fill it with stuff, all's nice save for the occasional reallocation when things don't fit etc. With structured outputs there are a lot more issues to address: one can think of a JSONObject as an output range with put() but that's only moving the real issues around. How would the JSONObject allocate memory internally, give it out to its own users, and dispose of it timely, all in good safety? That's why JSON tokenization is relatively easy to do lazily/with output ranges, but full-blown parsing becomes a different proposition.
This reminds me of our custom binary serialization utilities, intentionally designed in a way that deserialization can happen in-place, using the same contiguous data buffer as the serialized chunk. It essentially stores all indirections in the same buffer, one after another. The implementation is far from trivial and adds certain usage restrictions, but it allows for the same extremely performant linear-buffer approach even with non-linear data structures.

In general I am not trying to argue that the range-based approach is a silver bullet. It isn't, and stuff like ref counting will be necessary at least in some domains. What I am arguing is that it won't solve _our_ issues with Phobos (contrary to the previous range-based proposal), and this is the reason for being disappointed. Doubly so because you suggested closing the PR that turns setExtension into a range because of your new proposal (which implies that the efforts can't co-exist).
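The offset-in-buffer idea described above can be sketched roughly as follows; the names and layout (`Slice`, `Record`) are invented for illustration and are not Sociomantic's actual utilities. An indirection is stored as an offset into the same buffer, so the serialized chunk is usable in place with no per-field allocation.

```d
// Hypothetical sketch of in-place serialization: indirections are stored
// as offsets into the same contiguous buffer, so deserialization needs
// no extra allocation -- the serialized chunk is usable as-is.
struct Slice { size_t offset, length; }

struct Record
{
    int id;
    Slice name;  // refers into the containing buffer, not out of it
}

ubyte[] serialize(int id, const(char)[] name)
{
    auto buf = new ubyte[Record.sizeof + name.length];
    auto rec = cast(Record*) buf.ptr;
    rec.id = id;
    rec.name = Slice(Record.sizeof, name.length);
    buf[Record.sizeof .. $] = cast(const(ubyte)[]) name;
    return buf;
}

const(char)[] nameOf(const(ubyte)[] buf)
{
    auto rec = cast(const(Record)*) buf.ptr;
    return cast(const(char)[]) buf[rec.name.offset .. rec.name.offset + rec.name.length];
}

void main()
{
    auto buf = serialize(7, "payload");
    assert(nameOf(buf) == "payload");
}
```

Because nothing in the record points outside the buffer, the whole thing can be written to disk or the network and read back without fix-ups, which is what makes the linear-buffer approach work for non-linear data.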
Oct 09 2014
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 09/10/14 18:41, Andrei Alexandrescu wrote:

 With structured outputs there are a lot more issues to address: one can
 think of a JSONObject as an output range with put() but that's only
 moving the real issues around. How would the JSONObject allocate memory
 internally, give it out to its own users, and dispose of it timely, all
 in good safety?
The XML DOM module in Tango uses, if I recall correctly, a free list internally for the nodes. It will reuse the nodes; if you want to keep some information, you need to copy it yourself. -- /Jacob Carlborg
Oct 09 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/6/2014 11:13 AM, Dicebot wrote:
 Especially because
 you have stated that previous proposal (range-fication) which did fix the issue
 _for me_ is not on the table anymore.
I think it's more stalled because of the setExtension controversy.
 How about someone starts paying attention to what Don posts? That could be an
 incredible start.
I don't always agree with Don, but he's almost always right and his last post was almost entirely implemented.
Oct 08 2014
parent "Don" <x nospam.com> writes:
On Wednesday, 8 October 2014 at 21:07:24 UTC, Walter Bright wrote:
 On 10/6/2014 11:13 AM, Dicebot wrote:
 Especially because
 you have stated that previous proposal (range-fication) which 
 did fix the issue
 _for me_ is not on the table anymore.
I think it's more stalled because of the setExtension controversy.
 How about someone starts paying attention to what Don posts? 
 That could be an
 incredible start.
I don't always agree with Don, but he's almost always right and his last post was almost entirely implemented.
Wow, thanks, Walter! I'm wrong pretty regularly, though. A reasonable rule of thumb is to ask Daniel Murphy, aka yebblies. If he disagrees with me, and I can't change his mind within 30 minutes, you can be certain that I'm wrong. <g>
Oct 09 2014
prev sibling next sibling parent reply "eles" <eles215 gzk.dot> writes:
On Sunday, 5 October 2014 at 14:55:38 UTC, Dicebot wrote:

 Right now I have no idea where the development is headed and 
 what to expect from next few releases. I am not speaking about 
 wiki.dlang.org/Agenda but about bigger picture. Unexpected 
 focus on C++ support, thread about killing auto-decoding, 
 recent ref counting proposal
Just to add more salt:
http://blogs.msdn.com/b/oldnewthing/archive/2010/08/09/10047586.aspx

Raymond Chen:
"When you ask somebody what garbage collection is, the answer you get is probably going to be something along the lines of "Garbage collection is when the operating environment automatically reclaims memory that is no longer being used by the program. It does this by tracing memory starting from roots to identify which objects are accessible." This description confuses the mechanism with the goal. It's like saying the job of a firefighter is "driving a red truck and spraying water." That's a description of what a firefighter does, but it misses the point of the job (namely, putting out fires and, more generally, fire safety). Garbage collection is simulating a computer with an infinite amount of memory. The rest is mechanism. And naturally, the mechanism is "reclaiming memory that the program wouldn't notice went missing." It's one giant application of the as-if rule."

Interesting, in the comments, the distinction that is made between finalizers and destructors, even if they happen to have the same syntax. For example here: I find it difficult to swallow the fact that D classes do not lend themselves to RAII. While I could accept that memory management be left outside RAII, running destructors (or disposers) deterministically is a must.

I particularly find it bad that D recommends using structs to free resources because their destructor is run automatically. Just look at this example: http://dlang.org/cpptod.html#raii

struct File
{
    Handle h;
    ~this()
    {
        h.release();
    }
}

void test()
{
    if (...)
    {
        auto f = File();
        ...
    } // f.~this() gets run at closing brace, even if
      // scope was exited via a thrown exception
}

Even if C++ structs are almost the same as classes, the logical split between the two is: structs are DATA, classes are BEHAVIOR. I will not get my head around the fact that I will *recommend* putting methods in a struct.
Oct 05 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/5/14, 3:08 PM, eles wrote:
 On Sunday, 5 October 2014 at 14:55:38 UTC, Dicebot wrote:

 Right now I have no idea where the development is headed and what to
 expect from next few releases. I am not speaking about
 wiki.dlang.org/Agenda but about bigger picture. Unexpected focus on
 C++ support, thread about killing auto-decoding, recent ref counting
 proposal
Just to add more salt:
http://blogs.msdn.com/b/oldnewthing/archive/2010/08/09/10047586.aspx

Raymond Chen:
"When you ask somebody what garbage collection is, the answer you get is probably going to be something along the lines of "Garbage collection is when the operating environment automatically reclaims memory that is no longer being used by the program. It does this by tracing memory starting from roots to identify which objects are accessible." This description confuses the mechanism with the goal. It's like saying the job of a firefighter is "driving a red truck and spraying water." That's a description of what a firefighter does, but it misses the point of the job (namely, putting out fires and, more generally, fire safety). Garbage collection is simulating a computer with an infinite amount of memory. The rest is mechanism. And naturally, the mechanism is "reclaiming memory that the program wouldn't notice went missing." It's one giant application of the as-if rule."

Interesting, in the comments, the distinction that is made between finalizers and destructors, even if they happen to have the same syntax. For example here: I find it difficult to swallow the fact that D classes do not lend themselves to RAII. While I could accept that memory management be left outside RAII, running destructors (or disposers) deterministically is a must.

I particularly find it bad that D recommends using structs to free resources because their destructor is run automatically. Just look at this example: http://dlang.org/cpptod.html#raii

struct File
{
    Handle h;
    ~this()
    {
        h.release();
    }
}

void test()
{
    if (...)
    {
        auto f = File();
        ...
    } // f.~this() gets run at closing brace, even if
      // scope was exited via a thrown exception
}

Even if C++ structs are almost the same as classes, the logical split between the two is: structs are DATA, classes are BEHAVIOR. I will not get my head around the fact that I will *recommend* putting methods in a struct.
The main distinction between structs and classes in D is that the former are monomorphic value types and the latter are polymorphic reference types. -- Andrei
Oct 05 2014
parent reply "eles" <eles215 gzk.dot> writes:
On Monday, 6 October 2014 at 03:48:49 UTC, Andrei Alexandrescu 
wrote:
 On 10/5/14, 3:08 PM, eles wrote:
 On Sunday, 5 October 2014 at 14:55:38 UTC, Dicebot wrote:
 The main distinction between structs and classes in D is the 
 former are monomorphic value types and the latter are 
 polymorphic reference types. -- Andrei
Why hack them with scoped? The need exists, since you provide a hack for it. Reference classes in C++ are polymorphic & reference, but their destructor/disposer gets called. There is a delete that triggers that, or a smart pointer. I don't care if the delete or the destructor really frees the memory, but I would like it to get called, to release other resources that the object might have locked and to mark the object as "invalid". Later access to it shall trigger an "invalidObject" exception. Call it dispose if you like, because delete is too much like freeing memory. Is there an intermediate type between structs and classes?
Oct 05 2014
next sibling parent reply "eles" <eles215 gzk.dot> writes:
On Monday, 6 October 2014 at 06:23:42 UTC, eles wrote:
 On Monday, 6 October 2014 at 03:48:49 UTC, Andrei Alexandrescu 
 wrote:
 On 10/5/14, 3:08 PM, eles wrote:
 On Sunday, 5 October 2014 at 14:55:38 UTC, Dicebot wrote:
 The main distinction between structs and classes in D is the 
 former are monomorphic value types and the latter are 
 polymorphic reference types. -- Andrei
Why hack them with scoped? The need exists, since you provide a hack for it. Reference classes in C++ are polymorphic & reference, but their destructor/disposer gets called. There is a delete that triggers that, or a smart pointer. I don't care if the delete or the destructor really frees the memory, but I would like it to get called, to release other resources that the object might have locked and to mark the object as "invalid". Later access to it shall trigger an "invalidObject" exception. Call it dispose if you like, because delete is too much like freeing memory. Is there an intermediate type between structs and classes?
From that page again:

"I have found (disclaimer: this is my experience, your mileage will vary) that because 90% of the time you don't need to worry "using" the remaining 10%. In C++ you always need to worry about it, which makes it real easy to remember that when obtaining a resource make sure you have taken care of its release as well. (In essence, make sure it is stored in something whose destructor will free it). I have found this pattern a lot harder to follow well. I just personally find that the whole "garbage collector saves you" aspect that is pitched in every intro to the language I have encountered more of a trap than a salvation."
Oct 05 2014
parent reply "eles" <eles215 gzk.dot> writes:
On Monday, 6 October 2014 at 06:28:02 UTC, eles wrote:
 On Monday, 6 October 2014 at 06:23:42 UTC, eles wrote:
 On Monday, 6 October 2014 at 03:48:49 UTC, Andrei Alexandrescu 
 wrote:
 On 10/5/14, 3:08 PM, eles wrote:
 On Sunday, 5 October 2014 at 14:55:38 UTC, Dicebot wrote:
I like the safety that a GC guarantees, but it is too big a price to be paid for that...
Oct 05 2014
next sibling parent reply "eles" <eles eles.com> writes:
On Monday, 6 October 2014 at 06:28:58 UTC, eles wrote:
 On Monday, 6 October 2014 at 06:28:02 UTC, eles wrote:
 On Monday, 6 October 2014 at 06:23:42 UTC, eles wrote:
 On Monday, 6 October 2014 at 03:48:49 UTC, Andrei 
 Alexandrescu wrote:
 On 10/5/14, 3:08 PM, eles wrote:
 On Sunday, 5 October 2014 at 14:55:38 UTC, Dicebot wrote:
I like the safety that a GC guarantees, but it is too big a price to be paid for that...
Just look at this abomination from here:
http://agilology.blogspot.com/2009/01/why-dispose-is-necessary-and-other.html

sqlConnection.Close();
sqlConnection.Dispose();
sqlConnection = null;

Is this your idea of releasing a resource? Why is this better than writing delete/dispose sqlConnection? If you ask us to use structs for RAII, I am afraid that you will receive a DFront proposal.
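For contrast, D can already run a class destructor deterministically with object.destroy, which finalizes the object and resets it to its .init state while leaving the memory for the GC to reclaim later. A minimal sketch, with a hypothetical Connection class standing in for a real resource:

```d
// Hypothetical Connection class: destroy() runs the destructor right
// away and resets the object to its .init state; the GC only reclaims
// the memory itself later.
class Connection
{
    bool open;
    this() { open = true; }
    ~this() { open = false; }  // release the external resource here
}

void main()
{
    auto c = new Connection;
    assert(c.open);
    destroy(c);        // deterministic finalization, no GC cycle needed
    assert(!c.open);   // object is back to its .init state
}
```

This gives the "dispose and mark invalid" behavior asked for above, minus the automatic exception on later access.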
Oct 06 2014
parent Paulo Pinto <pjmlp progtools.org> writes:
Am 06.10.2014 10:12, schrieb eles:
 On Monday, 6 October 2014 at 06:28:58 UTC, eles wrote:
 On Monday, 6 October 2014 at 06:28:02 UTC, eles wrote:
 On Monday, 6 October 2014 at 06:23:42 UTC, eles wrote:
 On Monday, 6 October 2014 at 03:48:49 UTC, Andrei Alexandrescu wrote:
 On 10/5/14, 3:08 PM, eles wrote:
 On Sunday, 5 October 2014 at 14:55:38 UTC, Dicebot wrote:
I like the safety that a GC guarantees, but it is too big a price to be paid for that...
Just look at this abomination from here: http://agilology.blogspot.com/2009/01/why-dispose-is-necessary-and-other.html sqlConnection.Close(); sqlConnection.Dispose(); sqlConnection = null; Is this your idea about releasing a resource? Why is this better than writing delete/dispose sqlConnection? If you ask to use structs for RAII, I am afraid that you will receive a DFront proposal.
This abomination tends to be written by developers that don't care to learn how to use their tools properly.

It is quite easy to just use "using" on every IDisposable resource.

As for setting something to null just to let the GC know about it: that is a sign of premature optimization, and of knowing neither how a GC works nor how to use a memory profiler.

--
Paulo
Oct 06 2014
prev sibling parent "Ola Fosheim Grøstad" writes:
On Monday, 6 October 2014 at 06:28:58 UTC, eles wrote:
 I like the safety that a GC guarantees, but it is too big a price
 to
 be paid for that...
What if you only had precise GC on class objects and nothing else? That, I believe, could be done in a performant manner.
Oct 06 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/5/14, 11:23 PM, eles wrote:
 On Monday, 6 October 2014 at 03:48:49 UTC, Andrei Alexandrescu wrote:
 On 10/5/14, 3:08 PM, eles wrote:
 On Sunday, 5 October 2014 at 14:55:38 UTC, Dicebot wrote:
 The main distinction between structs and classes in D is the former
 are monomorphic value types and the latter are polymorphic reference
 types. -- Andrei
Why hack them with scoped? The need exists, since you provide a hack for it. Reference classes in C++ are polymorphic & reference, but their destructor/disposer gets called.
It doesn't because they need to be allocated dynamically. That's why there's a need for the likes of unique_ptr and shared_ptr in C++.
 There is a delete that triggers that or a smart pointer. I don't care if
 the delete or the destructor really frees the memory, but I would like
 it to get called, to release other resources that the object might have
 locked and to mark the object as "invalid". Later access to it shall
 triger an exception: "invalidObject".

 Call it dispose if you like, because delete is too much like freeing
 memory.

 Is there an intermediate type between structs and classes?
The intermediate type between struct and class is struct. Andrei
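For reference, the `scoped` route mentioned earlier in this subthread looks like this: std.typecons.scoped places a class instance on the stack and runs its destructor at the closing brace, no GC cycle involved. A minimal sketch with a hypothetical Resource class:

```d
import std.typecons : scoped;

class Resource
{
    static bool released;
    ~this() { released = true; }  // deterministic cleanup hook
}

void main()
{
    {
        auto r = scoped!Resource();  // instance lives on the stack
    }   // destructor runs here, at the closing brace
    assert(Resource.released);
}
```

The trade-off is that the instance must not outlive the scope, which is exactly the restriction that makes deterministic destruction possible without reference counting.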
Oct 06 2014
parent "eles" <eles eles.com> writes:
On Monday, 6 October 2014 at 13:42:35 UTC, Andrei Alexandrescu 
wrote:
 On 10/5/14, 11:23 PM, eles wrote:
 On Monday, 6 October 2014 at 03:48:49 UTC, Andrei Alexandrescu 
 wrote:
 On 10/5/14, 3:08 PM, eles wrote:
 On Sunday, 5 October 2014 at 14:55:38 UTC, Dicebot wrote:
 It doesn't because they need to be allocated dynamically. 
 That's why there's a need for the likes of unique_ptr and 
 shared_ptr in C++.
Yes, or that delete. And AFAICS it's not only C++ that needs unique_ptr and shared_ptr; this ARC thing is the same in D.
 The intermediate type between struct and class is struct.
D with classes, anyone?
Oct 06 2014
prev sibling parent reply "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Sunday, 5 October 2014 at 14:55:38 UTC, Dicebot wrote:
 TDPL was an absolutely awesome book because it expained "why?" 
 as opposed to "how?". Such insight into language authors 
 rationale is incredibly helpful for long-term contribution. 
 Unfortunately, it didn't cover all parts of the language and 
 many new things has been added since it was out.
I would also add that it's scary not to have seen a single comment from Andrei here: https://github.com/D-Programming-Language/dmd/pull/3998
 Right now I have no idea where the development is headed and 
 what to expect from next few releases. I am not speaking about 
 wiki.dlang.org/Agenda but about bigger picture. Unexpected 
 focus on C++ support, thread about killing auto-decoding, 
 recent ref counting proposal - all this stuff comes from 
 language authors but does not feel like a strategic additions. 
 It feels like yet another random contribution, no different 
 from contribution/idea of any other D user.
+1 on all.
 I am disturbed when Andrei comes with proposal that possibly 
 affects whole damn Phobos (memeory management flags) and asks 
 to trust his experience and authority on topic while rejecting 
 patterns that are confirmed to be working well in real 
 production projects. Don't get me wrong, I don't doubt Andrei 
 authority on memory management topic (it is miles ahead of mine 
 at the very least) but I simply don't believe any living person 
 in this world can design such big change from scratch without 
 some extended feedback from real deployed projects.
+1000 --- /Paolo
Oct 06 2014
next sibling parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 06 Oct 2014 07:44:56 +0000
Paolo Invernizzi via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 I would also add that it's scary not to have seen a single
 comment from Andrei here:
 https://github.com/D-Programming-Language/dmd/pull/3998
it's not about c++ interop or gc, so it can wait. existing D users will not run away; they are used to being second-class citizens.
Oct 06 2014
parent reply "Meta" <jared771 gmail.com> writes:
On Monday, 6 October 2014 at 07:51:41 UTC, ketmar via 
Digitalmars-d wrote:
 On Mon, 06 Oct 2014 07:44:56 +0000
 Paolo Invernizzi via Digitalmars-d 
 <digitalmars-d puremagic.com> wrote:

 I would also add that it's scary not to have seen a single
 comment from Andrei here:
 https://github.com/D-Programming-Language/dmd/pull/3998
it's not about c++ interop or gc, so it can wait. existing D users will not run away; they are used to being second-class citizens.
That's a bit unfair of you.
Oct 06 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 06 Oct 2014 12:48:28 +0000
Meta via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On Monday, 6 October 2014 at 07:51:41 UTC, ketmar via
 Digitalmars-d wrote:
 On Mon, 06 Oct 2014 07:44:56 +0000
 Paolo Invernizzi via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 
 I would also add that it's scary not to have seen a single
 comment from Andrei here:
 https://github.com/D-Programming-Language/dmd/pull/3998
it's not about c++ interop or gc, so it can wait. existing D users will not run away; they are used to being second-class citizens.
That's a bit unfair of you.
that's what i see. even when people are praying for breaking their code to fix some quirks in the language, the answer is "NO". why no? 'cause some hermit living in far outlands may have written some code years ago, and that code will break, and the hermit will be unhappy. the unhappiness of active users doesn't matter.

and about writing an autofixing tool... silence is the answer. "so will you accept these changes if we'll write a dfix tool to automatically fix source code?" silence. "play your little games, we don't care, as we don't plan to change that anyway, regardless of the tool availability". Walter once said that he is against "dfix", and nothing has changed since. "ah, maybe, we aren't interested..." there is little motivation to do work on "dfix" if it's not endorsed by the leading language developers.

and now for multiple "alias this"... as you can see this will not help c++ interop, and it will not help gc, so it can lie rotting on github. not a word, not even "will not accept this" or "it's interesting, please keep it up-to-date while you can, we are a little busy right now, but will look at it later". second-class citizens will not run away.
Oct 06 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/6/14, 6:18 AM, ketmar via Digitalmars-d wrote:
 and now for multiple "alias this"... as you can see this will not help
 c++ interop, and it will not help gc, so it can lie rotting on github.
 not a word, not even "will not accept this" or "it's interesting, please
 keep it up-to-date while you can, we are little busy right now, but
 will look at it later". second-class citizens will not run away.
We will accept multiple "alias this". -- Andrei
Oct 06 2014
parent reply "eles" <eles eles.com> writes:
On Monday, 6 October 2014 at 13:55:05 UTC, Andrei Alexandrescu 
wrote:
 On 10/6/14, 6:18 AM, ketmar via Digitalmars-d wrote:
 We will accept multiple "alias this". -- Andrei
=============================================================
IgorStepanov commented 6 days ago

Please, someone, add label "Needs Approval" to this PR. We need discuss a conflict resolving, and determine right algorithm, if implemented algorithm isn't right. Thanks.

yebblies added Enhancement, Needs Approval labels 6 days ago
=============================================================

Please grant approval there.
Oct 06 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/6/14, 9:14 AM, eles wrote:
 On Monday, 6 October 2014 at 13:55:05 UTC, Andrei Alexandrescu wrote:
 On 10/6/14, 6:18 AM, ketmar via Digitalmars-d wrote:
 We will accept multiple "alias this". -- Andrei
=============================================================
IgorStepanov commented 6 days ago

Please, someone, add label "Needs Approval" to this PR. We need discuss a conflict resolving, and determine right algorithm, if implemented algorithm isn't right. Thanks.

yebblies added Enhancement, Needs Approval labels 6 days ago
=============================================================

Please grant approval there.
Will do, thanks. -- Andrei
Oct 06 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/6/14, 12:44 AM, Paolo Invernizzi wrote:
 On Sunday, 5 October 2014 at 14:55:38 UTC, Dicebot wrote:
 TDPL was an absolutely awesome book because it expained "why?" as
 opposed to "how?". Such insight into language authors rationale is
 incredibly helpful for long-term contribution. Unfortunately, it
 didn't cover all parts of the language and many new things has been
 added since it was out.
I would also add that it's scaring not having seen a single comment of Andrej here: https://github.com/D-Programming-Language/dmd/pull/3998
I did comment in this group. -- Andrei
Oct 06 2014
next sibling parent reply "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Monday, 6 October 2014 at 13:49:15 UTC, Andrei Alexandrescu 
wrote:
 On 10/6/14, 12:44 AM, Paolo Invernizzi wrote:
 On Sunday, 5 October 2014 at 14:55:38 UTC, Dicebot wrote:
 TDPL was an absolutely awesome book because it expained 
 "why?" as
 opposed to "how?". Such insight into language authors 
 rationale is
 incredibly helpful for long-term contribution. Unfortunately, 
 it
 didn't cover all parts of the language and many new things 
 has been
 added since it was out.
I would also add that it's scaring not having seen a single comment of Andrej here: https://github.com/D-Programming-Language/dmd/pull/3998
I did comment in this group. -- Andrei
Am I missing something? If you mean _only_ that one [1], I keep being scared ;-) No pun intended, really.

[1] http://forum.dlang.org/thread/mxpfzghydhirdtltmmvo forum.dlang.org?page=3#post-lvhoic:2421o4:241:40digitalmars.com

---
/Paolo
Oct 06 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/6/14, 7:05 AM, Paolo Invernizzi wrote:
 On Monday, 6 October 2014 at 13:49:15 UTC, Andrei Alexandrescu wrote:
 On 10/6/14, 12:44 AM, Paolo Invernizzi wrote:
 On Sunday, 5 October 2014 at 14:55:38 UTC, Dicebot wrote:
 TDPL was an absolutely awesome book because it expained "why?" as
 opposed to "how?". Such insight into language authors rationale is
 incredibly helpful for long-term contribution. Unfortunately, it
 didn't cover all parts of the language and many new things has been
 added since it was out.
I would also add that it's scaring not having seen a single comment of Andrej here: https://github.com/D-Programming-Language/dmd/pull/3998
I did comment in this group. -- Andrei
Am I missing something? If you mean _only_ that one [1], I keep being scared ;-) No pun intended, really.

[1] http://forum.dlang.org/thread/mxpfzghydhirdtltmmvo forum.dlang.org?page=3#post-lvhoic:2421o4:241:40digitalmars.com
I understand the necessity for further scrutiny of that work. Even before that, I owe Sönke Ludwig a review for std.data.json. There's a large list of things I need to do at work, mostly D-related, not all of which I am at liberty to make public. On top of that I have, of course, other obligations to tend to.

To interpret my lack of time politically is really amusing. You guys have too much time on your hands :o).

Andrei
Oct 06 2014
parent reply "eles" <eles eles.com> writes:
On Monday, 6 October 2014 at 14:53:23 UTC, Andrei Alexandrescu 
wrote:
 On 10/6/14, 7:05 AM, Paolo Invernizzi wrote:
 On Monday, 6 October 2014 at 13:49:15 UTC, Andrei Alexandrescu 
 wrote:
 On 10/6/14, 12:44 AM, Paolo Invernizzi wrote:
 On Sunday, 5 October 2014 at 14:55:38 UTC, Dicebot wrote:

 To interpret my lack of time politically is really amusing. You 
 guys have too much time on your hands :o).
At least give such an explanation from time to time. Silence is the worst.
Oct 06 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/6/14, 9:16 AM, eles wrote:
 On Monday, 6 October 2014 at 14:53:23 UTC, Andrei Alexandrescu wrote:
 On 10/6/14, 7:05 AM, Paolo Invernizzi wrote:
 On Monday, 6 October 2014 at 13:49:15 UTC, Andrei Alexandrescu wrote:
 On 10/6/14, 12:44 AM, Paolo Invernizzi wrote:
 On Sunday, 5 October 2014 at 14:55:38 UTC, Dicebot wrote:

 To interpret my lack of time politically is really amusing. You guys
 have too much time on your hands :o).
At least give such an explanation from time to time. Silence is the worst.
Wait, are we too active in the forums or too silent? -- Andrei
Oct 06 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Mon, Oct 06, 2014 at 09:39:44AM -0700, Andrei Alexandrescu via Digitalmars-d
wrote:
 On 10/6/14, 9:16 AM, eles wrote:
On Monday, 6 October 2014 at 14:53:23 UTC, Andrei Alexandrescu wrote:
On 10/6/14, 7:05 AM, Paolo Invernizzi wrote:
On Monday, 6 October 2014 at 13:49:15 UTC, Andrei Alexandrescu wrote:
On 10/6/14, 12:44 AM, Paolo Invernizzi wrote:
On Sunday, 5 October 2014 at 14:55:38 UTC, Dicebot wrote:

To interpret my lack of time politically is really amusing. You guys
have too much time on your hands :o).
At least give such an explanation from time to time. Silence is the worst.
Wait, are we too active in the forums or too silent? -- Andrei
It would be *very* nice if once in a while (say once a week, or once a month) you and/or Walter could do a little write-up about the current status of things: a list of the top 5 projects currently being worked on, a list of the top 5 current priorities, and a short blurb about "progress this past week/month" (which could be as simple as "we've been swamped with fixing regressions, haven't been able to work on anything else", or "Facebook has me on a short leash, I haven't been able to work on D", etc.). This should be in its own thread, titled something like "Weekly [or monthly] status update", not buried underneath mountains of posts in one of our infamous interminable threads about some controversial topic, like to autodecode or not to autodecode.

Of course, who am I to tell you what to do... but IMO a periodic high-level status update like the above would go a *long* way in dispelling complaints of "lack of direction" or "unclear/unknown priorities". It doesn't have to be long, either. Even a 1-page (OK, OK, *half* a page), bullet-point post is good enough.

T

--
In theory, software is implemented according to the design that has been carefully worked out beforehand. In practice, design documents are written after the fact to describe the sorry mess that has gone on before.
Oct 06 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/6/14, 9:58 AM, H. S. Teoh via Digitalmars-d wrote:
 It would be*very*  nice if once in a while (say once a week, or once a
 month) you and/or Walter can do a little write-up about the current
 status of things. Say a list of top 5 projects currently being worked
 on, a list of the top 5 current priorities, a short blurb about
 "progress this past week/month" (which could be as simple as "we've been
 swamped with fixing regressions, haven't been able to work on anything
 else", or "Facebook has me on a short leash, I haven't been able to work
 on D", etc.). This should be in its own thread, titled something like
 "Weekly [or monthly] status update", not buried underneath mountains of
 posts in one of our infamous interminable threads about some
 controversial topic, like to autodecode or not to autodecode.

 Of course, who am I to tell you what to do... but IMO a periodical
 high-level status update like the above would go a*long*  way in
 dispelling complaints of "lack of direction" or "unclear/unknown
 priorities". It doesn't have to be long, either. Even a 1 page (OK, OK,
 *half*  a page), bullet-point post is good enough.
That's a really nice idea. -- Andrei
Oct 06 2014
prev sibling parent "eles" <eles eles.com> writes:
On Monday, 6 October 2014 at 13:49:15 UTC, Andrei Alexandrescu 
wrote:
 On 10/6/14, 12:44 AM, Paolo Invernizzi wrote:
 On Sunday, 5 October 2014 at 14:55:38 UTC, Dicebot wrote:
 I did comment in this group. -- Andrei
==========================================
IgorStepanov commented 17 days ago
Ping
==========================================
quickfur commented 14 days ago
Wow. I have been waiting for this feature for a long time! Can't wait to see this merged. Ping WalterBright ?
==========================================
IgorStepanov commented 13 days ago
andralex Ping. Please comment the tests and conflict resolving semantic.
==========================================
IgorStepanov commented 8 days ago
andralex ping
==========================================
Oct 06 2014
prev sibling next sibling parent reply "Nicolas F." <ddev fratti.ch> writes:
On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote:

 What do you think are the worst parts of D?
The fact that its community, when faced with the question "What do you think are the worst parts of D?", will readily have a 35-page verbal fistfight over what the worst parts of D are. Don't get me wrong, I'm happy that discussion is happening about such things, but I think it may be better to have it in a more structured manner in the future, and with a little less emotional investment.

That being said, the second worst part of D for me is the current state of the documentation, which is something that is often mentioned. I'd be happy to take part in a "docs initiative" where some of us sit together and talk about how we can improve the documentation of the language, collect input from outside, and then implement the changes that are found to be necessary. This would make it easier for people who don't wish to set up the entire build environment for the documentation on their side to participate in documentation adjustments by giving feedback, while a somewhat dedicated group of people then focus on making decisions reality.
Oct 06 2014
parent reply "Joakim" <dlang joakim.fea.st> writes:
On Monday, 6 October 2014 at 15:13:59 UTC, Nicolas F. wrote:
 On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja 
 wrote:

 What do you think are the worst parts of D?
The fact that its community, when faced with the question "What do you think are the worst parts of D?", will readily have a 35 page verbal fistfight over what the worst parts of D are. Don't get me wrong, I'm happy that discussion is happening about such things, but I think it may be better to have it in a more structured manner in the future, and with a little less emotional investment.
Heh, I think such self-criticism by the community is great. For example, I loved that I recently stumbled across this page on the wiki:

http://wiki.dlang.org/Language_issues

How many other languages can boast such a page of problems on their own wiki? :) Thanks to Vlad and Mike for writing it.

People in this thread are emotional because they care: I don't think it's gone overboard given the real concerns they're stating. The day the D community starts hiding its problems and stops critiquing its own efforts, sometimes passionately if that's what you truly feel, is when it starts going downhill.
Oct 06 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 06 Oct 2014 15:22:17 +0000
Joakim via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 People in this thread are emotional because they care
yes. i don't think that anybody (including me ;-) wants to directly insult someone here. D is good, that's why "not-so-good" features are so annoying that we are writing such emotional postings.
Oct 06 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/6/14, 8:35 AM, ketmar via Digitalmars-d wrote:
 On Mon, 06 Oct 2014 15:22:17 +0000
 Joakim via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 People in this thread are emotional because they care
yes. i don't think that anybody (including me ;-) wants to directly insult someone here.
I appeal to you and others to keep language and attitude in check. We are all well intended here; all we need to do is convert heat into work, which should be possible per the first and second principles of thermodynamics. All we need is some cooling :o).

Andrei
Oct 06 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 06 Oct 2014 08:39:54 -0700
Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com>
wrote:

 I appeal to you and others to keep language and attitude in check.
i'm doing my best, rewriting my posts at least three times before sending. i bet no one wants to read the first variants. ;-)

but this thread is a good place to show some emotions. i believe that we need such "emotional ranting" threads, so people can scream here and then calm down. sure it's very flammable; we must keep fire extinguishers at hand. ;-)
Oct 06 2014
prev sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Tofu Ninja:

 What do you think are the worst parts of D?
There are several problems in D/Phobos, but I think the biggest one is the development process, which is currently toxic:
http://forum.dlang.org/thread/54374DE0.6040405 digitalmars.com

In my opinion an Open Source language with such a development problem goes nowhere, so I think this needs to be improved. There are several possible ways to improve this situation, but perhaps the main one that can work is: Walter has to delegate another slice of his authority to the community of core dmd developers. This will cause some problems, perhaps an increase in confusion and pull reversions, and Walter may lose a bit of his grasp of the dmd codebase (though this can be avoided if he reads the code in pull requests), but I think overall the advantages in the long term are going to be bigger than the disadvantages.

Regarding the product (and not the process), the design of a principled, sound, and flexible memory ownership system is an important part of D's design to work on in the following months. It's going to add some complexity to D, but this is not the kind of complexity that kills a language: if it's well designed, it's going to give easy-to-read compile-time errors, it's not going to limit too much the variety of things that are currently allowed in D, and it's going to statically remove a significant amount of the run-time bugs that are currently easy to write in D. The complexity that causes problems is not the kind that requires you to understand a part of a well designed type system: it's the pile of arbitrary special cases and special cases of special cases you see in C++.

D considers safety and correctness very important points of its design, and I agree with the importance of such points.

Bye,
bearophile
Oct 10 2014
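[Editor's note] The static ownership checking bearophile's last post describes can be sketched with a small example. Rust is used below purely as an illustration of such a system (the post names no particular language or design); the point is that a use-after-move, which in C++ can become a silent run-time bug, is rejected at compile time with a readable error.

```rust
// A minimal sketch of a static memory-ownership system of the kind the
// post describes. Not from the thread; Rust is only an illustration.

// take_ownership consumes its argument: ownership of the heap-allocated
// String moves into the function, which frees it when it returns.
fn take_ownership(s: String) -> usize {
    s.len()
}

fn main() {
    let msg = String::from("hello");
    let n = take_ownership(msg);
    // println!("{}", msg); // rejected at compile time: `msg` was moved,
    //                      // so this potential use-after-free never runs
    println!("{}", n); // prints 5
}
```

The compile-time error for the commented-out line names the exact moved-from variable, which is the "easy-to-read compile-time errors" property the post argues such a system must have to avoid C++-style complexity.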