
digitalmars.D - Why I chose D over Ada and Eiffel

reply "Ramon" <spam thanks.no> writes:
Upfront warning: Some of what I'll write is opinionated, not so 
much in the sense of being pro or anti (whatever) but rather 
based on experience and needs.

25+ years ago I started with C. I loved it. But then I had a 
hacker attitude, considering "raw and tough" the only choice for 
a real man *g.

It took me more than 10 years to recognize (or allow myself to 
realize) that something was quite unsatisfying about C and that, 
as much as I understood the necessity of OO and strongly desired 
to employ it, C++ was not a solution but rather pretty much all 
the disadvantages of C repeated and then a major nightmare added 
or, excuse my French, a major pain in the a**.

Even worse, software development had grown from an obsessive 
hobby (I was an electronics engineer drifting more and more 
toward software) into a profession, and suddenly ugly real-world 
factors like efficiency and productivity entered the equation.
While tools like C++Builder (a C++ "Delphi") seemed to promise a 
better life, they didn't deliver that much; not because they were 
bad tools but because of C++.

- big jump -

By coincidence (or fate?) I found myself confronted with a 
project demanding extreme reliability and reusability 
requirements. As much as I tried, C++ just couldn't cut it. One 
major reason might be interesting (or well known) to some of you: 
You basically can't rely on third-party C++ code. Not meaning to 
talk bad about anyone but it's my bloody experience. Maybe it's 
because C++ makes it so hard to actually develop and engineer 
software (rather than hacking), maybe because C++ attracts guys 
like my earlier self (the cool C/C++ hacker), whatever the reason 
may be, that's what I experienced.

One obvious (or seemingly obvious) solution was Ada. Well, no, it 
wasn't. Maybe, even probably, if I had to develop low-level stuff 
for embedded systems, but not for a large application. And, that 
was a killer for me, Ada does not really support easily resizable 
arrays. To make things worse, while nowadays there is GNAT and a 
nice modern IDE, there is a major lack of libraries.

Stumbling over the famous Ariane 5 article, I looked at Eiffel. I 
have to confess that I almost fell in love. Eiffel felt just 
right and Prof. Meyer's books convinced me again and again - 
Yesss, that's the way I'd like to work and develop software.
Unfortunately, though, Prof. Meyer and ISE (the Eiffel company) 
made some errors, too, and in a major way.
For a start, that whole Eiffel world is pretty much a large 
beautiful castle ... in the midst of a desert. Theoretically there 
are different compilers; in fact, however, ISE's EiffelStudio is 
the only one; the others are either brutally outdated or 
non-conforming or weird niche thingies or eternally in alpha, or 
a mixture of those. And EiffelStudio costs north of US$5,000. 
Sure, there is a GPL version, but that is available only for 
GPL'ed programs.
Next issue: Eiffel's documentation is plain lousy. Yes, there are 
some 5 or so books, but those are either purely theoretical or 
very outdated or both. Yes, there is lots of documentation online, 
but most of it is basically luring, sales-driven "Look how easy it 
is with Eiffel" stuff. And there is a Doxygen-like API doc which 
is pretty worthless for learning how to practically use the 
language.
Furthermore, while Eiffel comes with a lot on board, there still 
is much missing; just as an example, there are no SSL sockets, 
which nowadays is a killer.

--- jump ---

So, I desperately looked for something that would offer at least 
some major goodies like DbC and would otherwise at least not 
stand in the way of proper software engineering.

Well, that single feature, Design By Contract, led me toward D.

- practically usable modern arrays? Check.
   And ranges, too. And very smartly thought up and designed. 
Great.
- Strings? Check.
   Plus UTF, too. Even UTF-8 and UTF-16 (a very practical 
compromise in my mind's eye, because with 16 bits one can deal 
with *every* language while still not wasting memory).
- DbC? Check.
   And once more the creators of D don't simply address one issue 
but do it elegantly and, even better, consistently, by throwing in 
a nice use-cases solution, too. Great! (A small sketch combining 
the contract and scope features follows this list.)
- some kind of reasonable support for modern concurrency? Check.
   And again, not something thrown in that the creators of D 
religiously consider right (like in Eiffel, where Meyer basically 
force-feeds his personal religious belief, although the 
"separate" solution is elegant). Great!
- Some of the major GUI(s)? Check.
   Well, I couldn't care less about Java (Is t possible to repeat 
all C problems creating yet another "C++" and invent a whole new 
slew of burdensome and weird sh*t? Sure, look at java!) but there 
seems to be a GTK binding.
- "defer mechanism"? Check.
    I'm pondering about some smart defer mechanism since years. Et 
voilà, D offers the "scope" mechanism. Brilliant, just f*cking 
brilliant and well made, too!
- Genericity? Check.
   Genericity is a must; it's one of those things I'm just not 
willing to even discuss compromises on. Frankly, I like 
Eiffel's solution better, but hey, D's solution is getting pretty 
close to what I consider excellent.
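
To make the checklist concrete, here is a minimal, hypothetical D 
sketch (the Account type and its numbers are invented purely for 
illustration) showing the contract and scope features together:

    import std.stdio;

    struct Account
    {
        int balance;

        // Design by Contract: precondition, postcondition, invariant.
        void withdraw(int amount)
        in
        {
            assert(amount > 0, "amount must be positive");
            assert(amount <= balance, "insufficient funds");
        }
        out
        {
            assert(balance >= 0);
        }
        body
        {
            balance -= amount;
        }

        invariant()
        {
            assert(balance >= 0);
        }
    }

    void main()
    {
        auto log = File("transfer.log", "w");
        scope(exit) log.close();            // the "defer" mechanism: runs on every exit path
        scope(failure) writeln("transfer aborted");

        auto acc = Account(100);
        acc.withdraw(30);
        log.writefln("balance is now %s", acc.balance);
    }

The contracts and the invariant are checked in debug builds and 
compiled out with -release; the scope guards are ordinary code and 
always run.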

An added major plus is D's bluntly straight interface to C. And a 
vital one, too, because let's face it, not being one of the major 
players in languages basically means to either not have a whole 
lot of major and important libraries or else to (usually 
painfully) bind them. D offers an excellent solution and gives me 
the peace of mind of not having to care, paranoically, that *every* 
important library comes bundled with it.
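
As a minimal illustration of that C interface (the declaration 
below simply mirrors printf's C prototype; no binding layer or 
wrapper code is needed):

    // C functions can be declared and called directly from D.
    extern (C) int printf(const char* format, ...);

    void main()
    {
        printf("Hello from C's printf, called from D\n");
    }

At link time the symbol resolves against the C runtime that D 
programs already link with.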


Criticism:

OK, I'm biased and spoiled by Eiffel but not having multiple 
inheritance is a major minus with me. D seems to compensate quite 
nicely by supporting interfaces. But: I'd like more documentation 
on that. "Go and read at wikipedia" just doesn't cut it. Please, 
kindly, work on some extensive documentation on that.
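
For what it's worth, the interface route looks like this (the 
interface and class names below are invented for illustration):

    interface Printable    { void print(); }
    interface Serializable { string serialize(); }

    // One base class at most, but any number of interfaces.
    class Document : Printable, Serializable
    {
        string text;
        this(string text) { this.text = text; }

        void print() { import std.stdio : writeln; writeln(text); }
        string serialize() { return text; }
    }

Where implementation (not just signatures) needs to be shared, D's 
usual answer is composition or mixin templates rather than multiple 
inheritance.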
(Probably there are some more not-so-great points, but coming from 
a strong C background, D looks 95% "natural"; sure, there are 
major differences, but the creators of D have very nicely managed 
to offer an extremely comfortable approach for anyone with solid 
C experience. Probably it's more troublesome for newbies, but 
that's something they should write down.)

Summary: 9 out of 10 points (using int; using double I'd make it 
9.5 *g)

Sorry, this is a long and big post. But then, so too is my way 
that led me here; long, big, troublesome. And I thought that my 
(probably not everyday) set of needs and experiences might be 
interesting or useful for some others, too.
And, of course, I confess it, I just feel like throwing a very 
big "THANK YOU" at D's creators and makers. Thank you!
Aug 19 2013
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Aug 19, 2013 at 10:18:04PM +0200, Ramon wrote:
[...]
 25+ years ago I started with C. I loved it. But then I had a hacker
 attitude, considering "raw and tough" the only choice for a real man
 *g.
That was me about 20 years ago too. :)
 It took me more than 10 years to recognize (or allow myself to
 realize) that something was quite unsatisfying about C and that, as
 much as I understood the necessity of OO and strongly desired io
 employ it,  C++ was not a solution but rather pretty much all
 disadvantages of C repeated and then a major nightmare added or,
 excuse my french, a major pain in the a**.
Honestly, while OO definitely has many things to offer, I think its
proponents have a tendency to push things a little too far. There are
things for which OO isn't appropriate, but in languages like Java, you
have to shoehorn *everything* into the OO mold, no matter what. This
leads to ridiculous verbosity like:

    // Everything has to be a class, even if there's absolutely
    // nothing about main() that acts like a class!
    class MyLousyJavaProgram
    {
        // Are you serious? All this boilerplate just to declare
        // the main program?!
        public static void main(String[] args) throws IOException
        {
            // What, this long incantation just to print
            // "Hello, world!"??
            System.err.println("Hello world!");
        }
    }

The signal-to-noise ratio in this code is about 1 : 6 (1 line of code
that actually does the real work, 6 lines of boilerplate). Compare the
equivalent D program:

    import std.stdio;
    void main()
    {
        writeln("Hello world!");
    }

Even C isn't as bad as the Java in this case. But I digress. Coming back
to the point, C++ tries to do OO but fails miserably because it insisted
on backward-compatibility with C. This was a smart move in the short
term, since it helped C++ adoption from the hordes of C coders at the
time, but in the long run, this has hurt C++ so much by making it
impossible to fix some of the fundamental design mistakes that lead to
C++ being the ugly monstrosity it is today.

[...]
 By coincidence (or fate?) I found myself confronted with a project
 demanding extreme reliability and reusability requirements. As much
 as I tried, C++ just couldn't cut it. One major reason might be
 interesting (or well known) to some of you: You basically can't rely
 in third party C++ code. Not meaning to talk bad about anyone but
 it's my bloody experience. Maybe it's because C++ makes it so hard
 to actually develop and engineer software (rather than hacking),
 maybe because C++ attracts guys like my earlier self (the cool C/C++
 hacker), whatever the reason may be, that's what I experienced.
Y'know, out on the street the word is that C is outdated and dangerous and hard to maintain, and that C++ is better. But my experience -- and yours -- seems to show otherwise. You're not the first one that found C++ lacking. At my day job, we actually migrated a largish system from C++ back into C because the C++ codebase was overengineered and suffered from a lot of the flaws of C++ that only become evident once you move beyond textbook examples. (C++ actually looks rather nice in textbooks, I have to admit, but real-life code is sadly a whole 'nother story.) [...]
 So, I desperately looked for something that would offer at least
 some major goodies lik DBC and would otherwise at least not stand in
 the way of proper software engineering.
 
 Well, that single feature, Design By Contract, led me toward D.
 
 - practically useable modern Arrays? Check.
   And ranges, too. And very smartly thought up and designed. Great.
Time will tell, but I believe ranges may be one of the most significant innovations of D. It makes writing generic algorithms possible, and even pleasant, and inches us closer to the ideal of perfect code reuse than ever before.
 - Strings? Check
   Plus UTF, too. Even UTF-8, 16 (a very practical compromise in my
 minds eye because with 16 bits one can deal with *every* language
 while still not wasting memory).
Yeah, in this day and age, not having native Unicode support is simply unacceptable. The world has simply moved past the era of ASCII (and the associated gratuitously incompatible locale encodings). Neither is the lack of built-in strings (*cough*C++*cough*).
 - DBC? Check
   And once more the creators of D don't simply address one issue but
 do it elegantly and, even better, sonsistently by throwing in a nice
 use cases solution, too. Great!
Hmm. I hate to burst your bubble, but I do have to warn you that DbC in D isn't as perfect as it could be. The *language* has a pretty good design of it, to be sure, but the current implementation leaves some things to be desired. Such as the fact that contracts are run inside the function rather than on the caller's end, which leads to trouble when you're writing libraries to be used by 3rd party code -- if the library is closed-source, there's no way to enforce the contracts in the API. [...]
 - Some of the major GUI(s)? Check.
   Well, I couldn't care less about Java (Is t possible to repeat all
 C problems creating yet another "C++" and invent a whole new slew of
 burdensome and weird sh*t? Sure, look at java!) but there seems to
 be a GTK binding.
This is one area that people complain about from time to time, though. But I'm no GUI programmer so I can't say too much about this.
 - "defer mechanism"? Check.
    I'm pondering about some smart defer mechanism since years. Et
 voilà, D offers the "scope" mechanism. Brilliant, just f*cking
 brilliant and well made, too!
The irony is that the rest of D is so well designed that scope guards are hardly ever used in practice. :) At my work I have to deal with C (and occasionally C++) code, and you cannot imagine how much I miss D's scope guards. If I were granted a wish for how to lessen the pain of coding in C, one of the first things I'd ask for is scope.
 - Genericity? Check
   Genericity is a must, it's one of those things I'm just not
 willing to even discuss making compromises. Frankly, I like Eiffel's
 solution better but hey, D's solution is getting pretty close to
 what I consider excellent.
The lack of genericity is what kept me away from things like Java. Sure, Java added their so-called generics after the fact, but it still leaves much to be desired. Java's generics lack the power of C++ templates, and that's a big minus for me (even though C++ templates are, frankly, an utter mess, which gave templates the unfortunate reputation of being hard to understand, when in fact, if they were done right like in D, they're actually very natural to work with and extremely powerful). Couple D's templates with other D innovations like signature constraints and CTFE (compile-time function evaluation), and you have a system that could trump C++ templates any day with one hand. This is one of the areas where D really shines. You can write truly reusable code that isn't straitjacketed or otherwise crippled. Throw in UFCS (uniform function call syntax) into the mix, and you have something where you can write functional-style code in D and still have the efficiency of native compilation. :) Admittedly, this area of D is still somewhat rough in some parts, but what's there is already pretty impressive, and I believe will only get better as we iron out those wrinkles. [...]
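
As a small, self-contained illustration of that combination (the
square function below is invented for the example; isNumeric is a
real std.traits trait):

    import std.traits : isNumeric;

    // A generic function with a signature constraint.
    T square(T)(T x) if (isNumeric!T)
    {
        return x * x;
    }

    // CTFE: the same function evaluated by the compiler.
    enum nine = square(3);
    static assert(nine == 9);

    void main()
    {
        import std.stdio : writeln;
        // UFCS: free functions callable with member syntax.
        writeln(4.square());      // 16
        writeln(2.5.square());    // 6.25
    }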
 Criticism:
 
 OK, I'm biased and spoiled by Eiffel but not having multiple
 inheritance is a major minus with me. D seems to compensate quite
 nicely by supporting interfaces. But: I'd like more documentation on
 that. "Go and read at wikipedia" just doesn't cut it. Please,
 kindly, work on some extensive documentation on that.
[...] It would be great if you could contribute to the docs. Documentation is one area where D doesn't quite shine -- which is a pity, because we have such a great language in our hands yet people's first impression of it may not be that great when they see the sorry state of our docs. Pull requests to improve docs are greatly welcomed here. :) T -- I am a consultant. My job is to make your job redundant. -- Mr Tom
Aug 19 2013
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/19/2013 2:17 PM, H. S. Teoh wrote:
 Time will tell, but I believe ranges may be one of the most significant
 innovations of D. It makes writing generic algorithms possible, and even
 pleasant, and inches us closer to the ideal of perfect code reuse than
 ever before.
While not unique to D, I believe that ranges will become a killer feature - killer enough that languages that don't support pipeline programming will start looking like propeller driven airliners. We still have a ways to go yet - Phobos support for ranges is not ubiquitous - but ranges are the future.
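
For readers who haven't seen the pipeline style yet, a minimal sketch
with Phobos ranges:

    import std.algorithm : filter, map;
    import std.range : iota;
    import std.stdio : writeln;

    void main()
    {
        // Lazily take the squares of the even numbers below 10.
        auto result = iota(10)
                      .filter!(n => n % 2 == 0)
                      .map!(n => n * n);

        writeln(result);    // [0, 4, 16, 36, 64]
    }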
Aug 19 2013
next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Monday, 19 August 2013 at 22:00:17 UTC, Walter Bright wrote:
 On 8/19/2013 2:17 PM, H. S. Teoh wrote:
 Time will tell, but I believe ranges may be one of the most 
 significant
 innovations of D. It makes writing generic algorithms 
 possible, and even
 pleasant, and inches us closer to the ideal of perfect code 
 reuse than
 ever before.
While not unique to D, I believe that ranges will become a killer feature - killer enough that languages that don't support pipeline programming will start looking like propeller driven airliners. We still have a ways to go yet - Phobos support for ranges is not ubiquitous - but ranges are the future.
Is there an official "everything that sensibly can provide a range, should do" policy for phobos? If so, there's quite a bit of low-hanging fruit.
Aug 19 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/19/2013 3:10 PM, John Colvin wrote:
 On Monday, 19 August 2013 at 22:00:17 UTC, Walter Bright wrote:
 We still have a ways to go yet - Phobos support for ranges is not ubiquitous -
 but ranges are the future.
Is there an official "everything that sensibly can provide a range, should do" policy for phobos? If so, there's quite a bit of low-hanging fruit.
There is as far as I'm concerned. Note the holding back of std.serialization until it has full support for ranges.
Aug 19 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-08-20 00:14, Walter Bright wrote:

 Note the holding back of std.serialization until it has full support for
 ranges.
I guess we won't see any std.serialization then. It cannot fully support ranges until the backend does, in this case std.xml. -- /Jacob Carlborg
Aug 20 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/20/2013 12:02 AM, Jacob Carlborg wrote:
 On 2013-08-20 00:14, Walter Bright wrote:

 Note the holding back of std.serialization until it has full support for
 ranges.
I guess we won't see any std.serialization then. It cannot fully support ranges until the backend does, in this case std.xml.
Why not?
Aug 20 2013
parent reply "Dicebot" <public dicebot.lv> writes:
On Tuesday, 20 August 2013 at 07:21:43 UTC, Walter Bright wrote:
 On 8/20/2013 12:02 AM, Jacob Carlborg wrote:
 On 2013-08-20 00:14, Walter Bright wrote:

 Note the holding back of std.serialization until it has full 
 support for
 ranges.
I guess we won't see any std.serialization then. It cannot fully support ranges until the backend does, in this case std.xml.
Why not?
As far as I understand the problem, the current std.xml implementation does not allow implementing a lazy range-based archiver in terms of Phobos. Note, however, that I have not delayed voting until std.serialization gets full support for ranges - only until its API gets support for ranges such that an implementation can later be added in a non-breaking way, for example, with new archivers. There is some small discussion on this topic. Unfortunately I have not taken the time to study the source deeply enough to state what reasonable requirements can be here (ones that won't require Jacob to re-implement half of the package), but I am definitely going to.
Aug 20 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/20/2013 3:52 AM, Dicebot wrote:
 On Tuesday, 20 August 2013 at 07:21:43 UTC, Walter Bright wrote:
 On 8/20/2013 12:02 AM, Jacob Carlborg wrote:
 On 2013-08-20 00:14, Walter Bright wrote:

 Note the holding back of std.serialization until it has full support for
 ranges.
I guess we won't see any std.serialization then. It cannot fully support ranges until the backend does, in this case std.xml.
Why not?
As far as I understand the problem, the current std.xml implementation does not allow implementing a lazy range-based archiver in terms of Phobos. Note, however, that I have not delayed voting until std.serialization gets full support for ranges - only until its API gets support for ranges such that an implementation can later be added in a non-breaking way, for example, with new archivers. There is some small discussion on this topic. Unfortunately I have not taken the time to study the source deeply enough to state what reasonable requirements can be here (ones that won't require Jacob to re-implement half of the package), but I am definitely going to.
Sounds reasonable. Thanks for following up on this.
Aug 20 2013
prev sibling parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 20/08/13 00:00, Walter Bright wrote:
 While not unique to D, I believe that ranges will become a killer feature -
 killer enough that languages that don't support pipeline programming will start
 looking like propeller driven airliners.
On that note -- I was chatting with a (very functional- and Lisp-oriented) friend about D and, when ranges were mentioned, he immediately connected it with Clojure's concept of "sequences": http://clojure.org/sequences Does anyone know the history/relationship here between these and D's ranges? Was it a direct influence from D, or convergent evolution -- and can anyone comment on the relative merits of the D vs. Clojure approaches?
Aug 20 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/20/2013 7:21 AM, Joseph Rushton Wakeling wrote:
 On 20/08/13 00:00, Walter Bright wrote:
 While not unique to D, I believe that ranges will become a killer feature -
 killer enough that languages that don't support pipeline programming will start
 looking like propeller driven airliners.
On that note -- I was chatting with a (very functional- and Lisp-oriented) friend about D and, when ranges were mentioned, he immediately connected it with Clojure's concept of "sequences": http://clojure.org/sequences Does anyone know the history/relationship here between these and D's ranges? Was it a direct influence from D, or convergent evolution -- and can anyone comment on the relative merits of the D vs. Clojure approaches?
This style of programming has been around at least since the Unix "pipes and filters" days. However, LINQ and Clojure were not direct influences on D's ranges.
Aug 20 2013
next sibling parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 20/08/13 19:47, Walter Bright wrote:
 This style of programming has been around at least since the Unix
 "pipes and filters" days.
 However, LINQ and Clojure were not direct influences on D's ranges.
Since Clojure is more recent than D, and AFAICT its sequences API seems to have arrived in later versions of the language, I wondered if the influence had been in the opposite direction. When were ranges first introduced in D?
Aug 20 2013
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/20/2013 11:17 AM, Joseph Rushton Wakeling wrote:
 On 20/08/13 19:47, Walter Bright wrote:
 This style of programming has been around at least since the Unix
 "pipes and filters" days.
 However, LINQ and Clojure were not direct influences on D's ranges.
Since Clojure is more recent than D, and AFAICT its sequences API seems to have arrived in later versions of the language, I wondered if the influence had been in the opposite direction. When were ranges first introduced in D?
Eh, I'd have to go back through the github history :-( The idea goes way back. Matthew Wilson and I were thinking about how to do C++ STL-like iterators, which were based on a pointer abstraction. I thought it was natural for D to do it as an array abstraction. There are some posts about it in the n.g. somewhere. The idea languished until Andrei joined us and figured out how it should work, and ranges were born.
Aug 20 2013
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/20/13 11:32 AM, Walter Bright wrote:
 On 8/20/2013 11:17 AM, Joseph Rushton Wakeling wrote:
 On 20/08/13 19:47, Walter Bright wrote:
 This style of programming has been around at least since the Unix
 "pipes and filters" days.
 However, LINQ and Clojure were not direct influences on D's ranges.
Since Clojure is more recent than D, and AFAICT its sequences API seems to have arrived in later versions of the language, I wondered if the influence had been in the opposite direction. When were ranges first introduced in D?
Eh, I'd have to go back through the github history :-( The idea goes way back. Matthew Wilson and I were thinking about how to do C++ STL-like iterators, which were based on a pointer abstraction. I thought it was natural for D to do it as an array abstraction. There are some posts about it in the n.g. somewhere. The idea languished until Andrei joined us and figured out how it should work, and ranges were born.
http://forum.dlang.org/thread/ga46ok$2s77$1 digitalmars.com Andrei
Aug 20 2013
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/20/2013 1:49 PM, Andrei Alexandrescu wrote:
 On 8/20/13 11:32 AM, Walter Bright wrote:
 The idea goes way back. Matthew Wilson and I were thinking about how to
 do C++ STL-like iterators, which were based on a pointer abstraction. I
 thought it was natural for D to do it as an array abstraction. There are
 some posts about it in the n.g. somewhere. The idea languished until
 Andrei joined us and figured out how it should work, and ranges were born.
http://forum.dlang.org/thread/ga46ok$2s77$1 digitalmars.com
Awesome, thanks for digging that up!
Aug 20 2013
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Aug 20, 2013 at 01:49:41PM -0700, Andrei Alexandrescu wrote:
 On 8/20/13 11:32 AM, Walter Bright wrote:
[...]
The idea goes way back. Matthew Wilson and I were thinking about how
to do C++ STL-like iterators, which were based on a pointer
abstraction. I thought it was natural for D to do it as an array
abstraction. There are some posts about it in the n.g. somewhere. The
idea languished until Andrei joined us and figured out how it should
work, and ranges were born.
http://forum.dlang.org/thread/ga46ok$2s77$1 digitalmars.com
[...] Wow. That must've been an awesome discussion. I can literally feel it oozing with excitement. :) We should put this link up on the wiki somewhere, maybe under "landmark historical documents" or something. T -- If a person can't communicate, the very least he could do is to shut up. -- Tom Lehrer, on people who bemoan their communication woes with their loved ones.
Aug 20 2013
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/20/13 2:03 PM, H. S. Teoh wrote:
 On Tue, Aug 20, 2013 at 01:49:41PM -0700, Andrei Alexandrescu wrote:
 On 8/20/13 11:32 AM, Walter Bright wrote:
[...]
 The idea goes way back. Matthew Wilson and I were thinking about how
 to do C++ STL-like iterators, which were based on a pointer
 abstraction. I thought it was natural for D to do it as an array
 abstraction. There are some posts about it in the n.g. somewhere. The
 idea languished until Andrei joined us and figured out how it should
 work, and ranges were born.
http://forum.dlang.org/thread/ga46ok$2s77$1 digitalmars.com
[...] Wow. That must've been an awesome discussion. I can literally feel it oozing with excitement. :) We should put this link up on the wiki somewhere, maybe under "landmark historical documents" or something.
I was incredibly excited - so much, in fact, that I decided to share all that following a year-long absence. Andrei
Aug 20 2013
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 8/20/13, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:
 http://forum.dlang.org/thread/ga46ok$2s77$1 digitalmars.com
[...] Wow. That must've been an awesome discussion. I can literally feel it oozing with excitement. :) We should put this link up on the wiki somewhere, maybe under "landmark historical documents" or something.
There's also this: http://forum.dlang.org/thread/gacvj5$28ec$1 digitalmars.com
Aug 20 2013
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Aug 20, 2013 at 11:19:45PM +0200, Andrej Mitrovic wrote:
 On 8/20/13, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:
 http://forum.dlang.org/thread/ga46ok$2s77$1 digitalmars.com
[...] Wow. That must've been an awesome discussion. I can literally feel it oozing with excitement. :) We should put this link up on the wiki somewhere, maybe under "landmark historical documents" or something.
There's also this: http://forum.dlang.org/thread/gacvj5$28ec$1 digitalmars.com
Sadly, the link to Andrei's presumably DDoc-generated page is no longer working. :-( T -- Life would be easier if I had the source code. -- YHL
Aug 20 2013
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Aug 20, 2013 at 11:19:45PM +0200, Andrej Mitrovic wrote:
 On 8/20/13, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:
 http://forum.dlang.org/thread/ga46ok$2s77$1 digitalmars.com
[...] Wow. That must've been an awesome discussion. I can literally feel it oozing with excitement. :) We should put this link up on the wiki somewhere, maybe under "landmark historical documents" or something.
There's also this: http://forum.dlang.org/thread/gacvj5$28ec$1 digitalmars.com
Also, I've found evidence that transient ranges *were* discussed before: http://forum.dlang.org/post/MPG.2334cabf1233057c9896e4 news.digitalmars.com :-) And now I know who to blame^W I mean, praise, for the names of .front and .back: http://forum.dlang.org/post/gaf59c$on7$1 digitalmars.com In any case, I'm impressed by the sheer volume of bikeshedding going on in that thread! It almost puts to shame our rainbow-free discussions these days. :-P T -- Never trust an operating system you don't have source for! -- Martin Schulze
Aug 20 2013
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 8/20/13, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:
 Sadly, the link to Andrei's presumably DDoc-generated page is no longer
 working.  :-(
First place to look is archive.org, here you go: http://web.archive.org/web/20081022094123/http://ssli.ee.washington.edu/~aalexand/d/tmp/std_range.html
Aug 20 2013
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 8/20/13, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:
 And now I know who to blame^W I mean, praise, for the names of .front
 and .back:

 	http://forum.dlang.org/post/gaf59c$on7$1 digitalmars.com
I don't know about you, but I'd be a little scared if I had to call popToe(). :P
Aug 20 2013
prev sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 20/08/13 23:43, H. S. Teoh wrote:
 And now I know who to blame^W I mean, praise, for the names of .front
 and .back:

 	http://forum.dlang.org/post/gaf59c$on7$1 digitalmars.com

 In any case, I'm impressed by the sheer volume of bikeshedding going on
 in that thread! It almost puts to shame our rainbow-free discussions
 these days. :-P
Fantastic to get this insight into D's history. :-)
Aug 21 2013
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/20/2013 08:17 PM, Joseph Rushton Wakeling wrote:
 Since Clojure is more recent than D, and AFAICT its sequences API seems
 to have arrived in later versions of the language, I wondered if the
 influence had been in the opposite direction.
Unlikely. Stream processing has a long tradition in the lisp community. D sacrifices some of its elegance, presumably for more predictable performance. (Though some of it could be compensated for by having a usable function in std.range that forgets the concrete range type and makes all ranges of the same element type interchangeable. inputRangeObject exists but it is not usable, and also not efficient.)
Aug 20 2013
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Aug 20, 2013 at 08:53:02PM +0200, Timon Gehr wrote:
 On 08/20/2013 08:17 PM, Joseph Rushton Wakeling wrote:
Since Clojure is more recent than D, and AFAICT its sequences API seems
to have arrived in later versions of the language, I wondered if the
influence had been in the opposite direction.
Unlikely. Stream processing has a long tradition in the lisp community. D sacrifices some of its elegance, presumably for more predictable performance. (Though some of it could be compensated for by having a usable function in std.range that forgets the concrete range type and makes all ranges of the same element type interchangeable. inputRangeObject exists but it is not usable, and also not efficient.)
Hmm. Maybe something like this?

    interface GenericInputRange(E) {
        @property bool empty();
        @property E front();
        void popFront();
    }

    GenericInputRange!E genericInputRange(E,R)(R range)
        if (is(ElementType!R == E))
    {
        class GenericInputRangeImpl : GenericInputRange!E {
            R impl;
            this(R range) { impl = range; }
            override @property bool empty() { return impl.empty; }
            override @property E front() { return impl.front; }
            override void popFront() { impl.popFront(); }
        }
        return new GenericInputRangeImpl(range);
    }

    // insert adaptations for forward ranges, et al, here.

I think this might actually be more useful than the current
std.range.inputRangeObject, since this lets you interchange ranges of
different underlying types as long as their element types are the same.
Using an interface rather than a base class also lets user code adapt
their own classes to work with the generic range interface.


T

-- 
Let's not fight disease by killing the patient. -- Sean 'Shaleh' Perry
Aug 20 2013
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/20/13 10:47 AM, Walter Bright wrote:
 On 8/20/2013 7:21 AM, Joseph Rushton Wakeling wrote:
 On 20/08/13 00:00, Walter Bright wrote:
 While not unique to D, I believe that ranges will become a killer
 feature -
 killer enough that languages that don't support pipeline programming
 will start
 looking like propeller driven airliners.
On that note -- I was chatting with a (very functional- and Lisp-oriented) friend about D and, when ranges were mentioned, he immediately connected it with Clojure's concept of "sequences": http://clojure.org/sequences Does anyone know the history/relationship here between these and D's ranges? Was it a direct influence from D, or convergent evolution -- and can anyone comment on the relative merits of the D vs. Clojure approaches?
This style of programming has been around at least since the Unix "pipes and filters" days. However, LINQ and Clojure were not direct influences on D's ranges.
It's a common omission to equate D's ranges with pipes/filters. That misses the range categorization, which is inspired from C++ iterators. A relatively accurate characterization of D ranges is a unification of C++ iterators with pipes/filters. Andrei
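
A minimal sketch of how that categorization surfaces in code
(countElements is invented for the example; the traits are real
std.range ones):

    import std.range;   // isInputRange, isForwardRange, isRandomAccessRange, ...

    // Built-in arrays satisfy the strongest category:
    static assert(isRandomAccessRange!(int[]));

    // Generic code asks only for the category it actually needs:
    size_t countElements(R)(R r) if (isInputRange!R)
    {
        size_t n;
        for (; !r.empty; r.popFront()) ++n;
        return n;
    }

    unittest
    {
        assert(countElements([1, 2, 3]) == 3);
    }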
Aug 20 2013
next sibling parent =?UTF-8?B?QWxpIMOHZWhyZWxp?= <acehreli yahoo.com> writes:
On 08/20/2013 01:43 PM, Andrei Alexandrescu wrote:

 the range categorization, which is inspired from C++ iterators.
All of that is explained in your "On Iteration" article: http://www.informit.com/articles/article.aspx?p=1407357 Ali
Aug 20 2013
prev sibling next sibling parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 20/08/13 22:43, Andrei Alexandrescu wrote:
 It's a common omission to equate D's ranges with pipes/filters. That misses the
 range categorization, which is inspired from C++ iterators.

 A relatively accurate characterization of D ranges is a unification of C++
 iterators with pipes/filters.
I'm not sure I quite follow this point. The Clojure sequence API has all the stuff you'd expect from the range interface -- empty?, first (front), next (popFront), nnext (popFrontN), last (back), drop-last (popBack), ... Is the point here that in Clojure these are all implemented as pipes/filters on top of singly-linked lists, whereas in D range interfaces are a "first-class" part of the language that is agnostic about the underlying data structures?
Aug 21 2013
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/21/13 5:39 AM, Joseph Rushton Wakeling wrote:
 On 20/08/13 22:43, Andrei Alexandrescu wrote:
 It's a common omission to equate D's ranges with pipes/filters. That
 misses the
 range categorization, which is inspired from C++ iterators.

 A relatively accurate characterization of D ranges is a unification of
 C++
 iterators with pipes/filters.
I'm not sure I quite follow this point. The Clojure sequence API has all the stuff you'd expect from the range interface -- empty?, first (front), next (popFront), nnext (popFrontN), last (back), drop-last (popBack), ...
No random access. I didn't know about drop-last though - does it work in O(1)?
 Is the point here that in Clojure these are all implemented as
 pipes/filters on top of singly-linked lists, whereas in D range
 interfaces are a "first-class" part of the language that is agnostic
 about the underlying data structures?
More accurately was the point that Clojure's sequence API is (to the best of my understanding) only dealing with forward access, whereas D distinguishes between one-pass, forward, bidirectional, and random, and designs algorithms around these notions. Andrei
Aug 21 2013
parent "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Wednesday, 21 August 2013 at 17:48:49 UTC, Andrei Alexandrescu 
wrote:
 No random access. I didn't know about drop-last though - does 
 it work in O(1)?
There is "nth" <http://clojure.github.io/clojure/clojure.core-api.html#clojure.core/nth> but the O(n) cited there is rather disturbing.
 More accurately was the point that Clojure's sequence API is 
 (to the best of my understanding) only dealing with forward 
 access, whereas D distinguishes between one-pass, forward, 
 bidirectional, and random, and designs algorithms around these 
 notions.
I'll check up with my friend on the forward access side. What certainly seems to be true is that the API doesn't make the useful distinctions/classifications that D does.
Aug 22 2013
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Aug 21, 2013 at 02:39:34PM +0200, Joseph Rushton Wakeling wrote:
 On 20/08/13 22:43, Andrei Alexandrescu wrote:
It's a common omission to equate D's ranges with pipes/filters. That misses the
range categorization, which is inspired from C++ iterators.

A relatively accurate characterization of D ranges is a unification of C++
iterators with pipes/filters.
I'm not sure I quite follow this point. The Clojure sequence API has all the stuff you'd expect from the range interface -- empty?, first (front), next (popFront), nnext (popFrontN), last (back), drop-last (popBack), ... Is the point here that in Clojure these are all implemented as pipes/filters on top of singly-linked lists, whereas in D range interfaces are a "first-class" part of the language that is agnostic about the underlying data structures?
His point is that C++ iterators have input/forward/bidirectional classifications, whereas the sequences in functional languages typically don't. D's ranges therefore is a sort of integration of sequences in functional languages with C++'s input/forward/bidirectional hierarchical iteration scheme. T -- Amateurs built the Ark; professionals built the Titanic.
Aug 21 2013
prev sibling next sibling parent reply "Ramon" <spam thanks.no> writes:
On Monday, 19 August 2013 at 21:19:05 UTC, H. S. Teoh wrote:
 Honestly, while OO definitely has many things to offer, I think 
 its
 proponents have a tendency to push things a little too far. 
 There are
 things for which OO isn't appropriate, ...
Nope, I don't belong to the OOP fanatics in the sense of "everything must be a class"; actually that's one of my critical remarks on Eiffel. But, no doubt, OOP is an absolute must-have. I remember my (in hindsight pathetic) desperate attempts to use C at least in a somewhat OO way (abusing structs).
 ... C++ tries to do OO but fails
 miserably because it insisted on backward-compatibility with C.
I'm not sure that I can agree on that. I'll elaborate somewhat on this issue for reasons that directly touch D.

Actually, C backward compatibility *could* have been implemented in D, too, albeit at quite some cost for the implementers; after all, it's no coincidence that D feels pretty much "at home" for a C programmer - although the D people probably came to the conclusion that being similar enough was enough, or even better than going for full C compatibility (= capable of compiling C code), and rightly so. I'm sure that basically every C programmer looking at another language isn't looking for another C but rather for something he can quickly master to a usable degree; otherwise he wouldn't be looking for another language in the first place.

As far as I'm concerned, D's superiority over C++ has another reason: D was conceived from a _pragmatic approach_. While it's a great thing to have scientists design new paradigms and concepts, it's, I'm convinced, a bad idea to have them actually go all the way to the end; the perspectives, weighting of criteria and other issues usually happen to be a very different mix from what practical use would suggest.

Just look at C. We didn't miss new concepts and paradigms so much (except maybe OO) but rather were swearing at quite pragmatic aspects like lots of housekeeping overhead for resizing an array and many other things, and at stupid problems related to pointers (like arrays automatically being passed as pointers). I remember quite well that when wanting proper error handling, some logging and maybe message strings in 2 languages, one quickly ended up with most code not related to the algorithm or problem at hand but to those housekeeping chores and the like.

In summary, my impression is that C++ was created by scientists who somehow shoehorned a set of (what they considered) concepts and paradigms into a language that looked good in lectures - but they didn't care sh*t about compiler writers and actual users who had to solve actual problems. While many considered the STL a breakthrough milestone for mankind, I personally, pardon me, always considered it a) a weird mess and b) a confession that C++ was f*cked up.
 Y'know, out on the street the word is that C is outdated and 
 dangerous
 and hard to maintain, and that C++ is better.  But my 
 experience -- and
 yours -- seems to show otherwise. You're not the first one that 
 found
 C++ lacking. At my day job, we actually migrated a largish 
 system from
 C++ back into C because the C++ codebase was overengineered and 
 suffered
 from a lot of the flaws of C++ that only become evident once 
 you move
 beyond textbook examples. (C++ actually looks rather nice in 
 textbooks,
 I have to admit, but real-life code is sadly a whole 'nother 
 story.)
See my point above, and: I couldn't care less what's the current hype in Dr. Dobb's or Computer Language (anyone remember that?) or, for that matter, the "word on the street". Actually I taught C at that time and I remember commenting on students' remarks concerning OO, C++ and the like along this line: "C basically is a - pretty well done - attempt to create a cross-platform assembler with quite some comfort". After all, and this shouldn't be forgotten, C was created to use the wonderful new PDP-11 (by brilliant men who had understood that a) there would be other architectures and systems to follow and b) one shouldn't have to learn 10 assemblers for 10 systems). C++'s proponents stating that C++ was meant to be a better C with OO on top never had any credibility, considering the background, origin and raison d'être of C, as far as I'm concerned.
 - practically useable modern Arrays? Check.
   And ranges, too. And very smartly thought up and designed. 
 Great.
Time will tell, but I believe ranges may be one of the most significant innovations of D. It makes writing generic algorithms possible, and even pleasant, and inches us closer to the ideal of perfect code reuse than ever before.
I'm afraid I can't follow your "ranges making generic algorithms possible", but I do agree that ranges are among the more important strengths of D, greatly contributing to usability and productivity. As for generics, I still need to learn more about D, but from first impressions and comments I'm not sure that D has reached a final and polished state there; possibly there might even be a point where the D guys decide to choose a completely new approach in D3. One thing is clear (well, to me at least): doing generics simply as templates will turn out to be a major limitation and sometimes even more of a problem than a solution; one ugly issue coming to mind is having "filled in" code spread all over the place. Maybe I'm stubborn here, but I do not agree with the usual approach of seeing generics = templates = saving typing effort. Typing effort is to be addressed by editors, not by language designers. Generics are about implementing algorithms (which are not type-dependent anyway). Needing a min(), say for strings, ints and floats, one shouldn't end up with the code three times but with code dealing with e.g. "any scalar" or even "anything comparable". But again, I'm too fresh at D and, for instance, have to first learn a whole lot more (like the interface mechanisms) before coming to a well-based assessment.
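
For what it's worth, that min() case can be written once in D (a
sketch; Phobos already ships std.algorithm.min, the point here is
only the mechanism):

    // "Anything comparable": one body of code, checked at compile time.
    T myMin(T)(T a, T b) if (is(typeof(a < b) == bool))
    {
        return b < a ? b : a;
    }

    unittest
    {
        assert(myMin(3, 7) == 3);
        assert(myMin(2.5, 1.5) == 1.5);
        assert(myMin("abc", "abd") == "abc");
    }

The compiler does stamp out one instantiation per type used, which is
exactly the code-spread trade-off discussed further down the thread.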
 - DBC? Check
   ...
Hmm. I hate to burst your bubble, but I do have to warn you that DbC in D isn't as perfect as it could be. The *language* has a pretty good design of it, to be sure, but the current implementation leaves some things to be desired. Such as the fact that contracts are run inside the function rather than on the caller's end, which leads to trouble when you're writing libraries to be used by 3rd party code -- if the library is closed-source, there's no way to enforce the contracts in the API.
Huh? Of course contracts are *in* the functions/methods. Typically (and reasonably) you have some kind of "in" contract and an "out" contract, the former ensuring the function works on agreed and reasonable entry grounds, the latter ensuring that the function exits in an "O.K." state. Plus, of course, invariants. The trouble with 3rd-party libs, I don't see it. The lib provides contract terms along with the API and that's it. I do agree, though (*if* I got that right in the docs so far), that there shouldn't simply be a "debugging - DbC on" or "production - DbC off". There should also be some direct mechanism for the developer to keep certain contracts active even in production mode.
 The irony is that the rest of D is so well designed that scope 
 guards
 are hardly ever used in practice. :) At my work I have to deal 
 with C
 (and occasionally C++) code, and you cannot imagine how much I 
 miss D's
 scope guards. If I were granted a wish for how to lessen the 
 pain of
 coding in C, one of the first things I'd ask for is scope.
Can't follow there. If, for instance, I open a file, I want to make sure that it's properly closed. That has nothing to do with the language but with the world our code runs in, no?
 OK, I'm biased and spoiled by Eiffel but not having multiple
 inheritance is a major minus with me. D seems to compensate 
 quite
 nicely by supporting interfaces. But: I'd like more 
 documentation on
 that. "Go and read at wikipedia" just doesn't cut it. Please,
 kindly, work on some extensive documentation on that.
[...] It would be great if you could contribute to the docs. Documentation is one area where D doesn't quite shine -- which is a pity, because we have such a great language in our hands yet people's first impression of it may not be that great when they see the sorry state of our docs. Pull requests to improve docs are greatly welcomed here. :)
Well, while I'd be glad to contribute, I'm afraid that's a dog-chasing-its-own-tail problem. I need good docs to understand before I can explain to others ... I'm afraid I'll have to get on the nerves of the creators for that.
Aug 19 2013
next sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Tuesday, 20 August 2013 at 00:08:31 UTC, Ramon wrote:
 Needing a min(), say for strings, ints and floats in a program 
 one shouldn't end up with 3 times code but with code dealing 
 with e.g. "any scalar" or even "anything comparable".
No matter how you cut it, you have to pay for dealing with different types in one function. Either by code-bloat or indirection. The asm for ints, floats, reals etc. are all different and require different code. See here: http://forum.dlang.org/post/mailman.213.1376962388.1719.digitalmars-d puremagic.com
Aug 20 2013
prev sibling parent reply "Jesse Phillips" <Jesse.K.Phillips+D gmail.com> writes:
Shuffling your reply.

On Tuesday, 20 August 2013 at 00:08:31 UTC, Ramon wrote:
 Huh? Of course contracts are *in* the functions/methods. 
 Typically (and reasonably) you have some kind of "in" contract 
 and an "out" contract, the former ensuring the function to work 
 on agreed and reasonable entry grounds, the letter ensuring 
 that the function exits in an "O.K." state. Plus, of course 
 invariants.
He is referring to where the compiler generates the code. When compiling a library, it is at that point that the contracts become part of the code or not. In reality the user of the library should decide when the contracts are included (normally a debug build of a program would link a release build of the library, thus contracts would not be checked).
 I'm afraid I can't follow your "ranges making generic 
 algorithms possible" but I do agree that ranges are among the 
 more important strengths of D, greatly contributing to 
 useability and productivity.
I don't think anyone takes your criticism of D generics harshly, but many, like me, may feel confused about your issue with using templates for generics. Personally I'm not familiar with Eiffel or some of the more generics and object base polymorphic behavior.

You are correct, D documentation does not touch on how to use or structure code with interfaces or classes. While I couldn't take issue with having such documentation for D, this is a subject generally left to a textbook or third-party article (I would expect equal difficulty learning the subject from Java documentation).

Evidence for the statement "ranges making generic algorithms possible" can be found in std.algorithm. This module provides many algorithms to perform against data and most (all?) make use of ranges, keeping the algorithm generic over the data type being manipulated and even the container storing that data. The library is also mostly (100%?) templated.

In D it tends to be idiomatic to specify what you will use and not what structure you expect. So instead of:

    void foo(IComparable[] data)...

We write:

    void foo(Range)(Range data) if(isComparable!(ElementType!Range))...

Where isComparable would need to be defined and ElementType can be found in std.range.

So D's generics may be lacking, but the ability to write generic algorithms is still there.
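
For completeness, isComparable is not a Phobos trait; one plausible,
hypothetical way to define what is alluded to above:

    import std.range : ElementType, isInputRange;

    // Hypothetical trait: true if two values of type T can be ordered.
    template isComparable(T)
    {
        enum bool isComparable = is(typeof(T.init < T.init) == bool);
    }

    void foo(Range)(Range data)
        if (isInputRange!Range && isComparable!(ElementType!Range))
    {
        // works for any range of ordered elements, whatever the container
    }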
Aug 20 2013
parent reply "Ramon" <spam thanks.no> writes:
On Wednesday, 21 August 2013 at 05:41:40 UTC, Jesse Phillips
wrote:
 Personally I'm not familiar with Eiffel or some of the more 

 generics and object base polymorphic behavior.
Finally someone says it honestly, frankly and straight out. Thank you for that. I mean it. And I understand you perfectly well. Having acquired a considerable amount of knowledge and experience with a language one, of course, tends to develop a certain relationship and a habit of understanding every (programming-related) problem as "How could that be done in 'my' language?" This goes further than we sometimes (like to) see. The "Polymorphism is bad/troublesome" credo, for instance, has basically nothing to do with polymorphism and pretty much everything with C++'s attempt at it.

We may like it or not, but language designers, being human beings, have beliefs, intentions and goals when creating a language. And it's no secret; after all, this mix (of beliefs, goals, etc.) is a major driving force to energize the tedious process. D's creators spell it out quite frankly and it comes down to something like "To create the language C and C++ should have been". Please note that I do not judge that as good, bad or whatever and, in fact, I do agree that this is actually a constructive and valid approach. There are even people around who created (usually script) languages with games in mind.

Ada didn't need such a kind of belief/goal mix; it was put right in front of them by the US DoD. One issue, probably *the* major issue, was reliability, correctness and, if at all feasible, verifiability. And this mind- and goalset very much shapes the outcome. That is why in Ada ':=' is used as the assignment operator while in C, C++ and D '=' is used.

And then there is Eiffel, which is very interesting if for one issue alone: its creator wanted to get "the whole process from design down to code" right. And, unlike what many think, Prof. Meyer/Eiffel *does* care about practical aspects. I myself was turned off by Ada's and Eiffel's (and Pascal's and its derivatives') use of ':=' for assignment (which on a German keyboard is even more unpleasant than on a QWERTY version). Until I read real-world statistics (which matched my own experience well) that showed how error-prone the '=' way is. The killer argument, however, (in my case) was delivered by Prof. Meyer in a statement along the line of "It doesn't matter how fast your code is if it breaks".

Reading through this forum, however, "performance" and not doing anything that might negatively influence performance is what Mother Mary is to Christians, very very holy. Again, this mindset is perfectly valid and OK. But one should at least be really and seriously conscious of it. Polymorphism, for instance, *can* be done and done well, up to the point of being a treasure trove and a reliable and efficient way to do things. It will, however, never even be seriously considered by a C/C++ mindset that considers it evil and whatnot.

To be frank, I will try to find a way to use D (which I still consider a great language!) in a rather ignorant and Eiffel-like way, as well as this can be done (yes, I will even use ':=' and have it auto-replaced before compiling). For two simple reasons: Probably I'm too "rotten" to happily use anything else than a modern C-family language (D), but I'm not willing to simply forget the important lessons I learned while wandering through other parts of the programming world. The second reason is that D (for my personal taste) is too C/C++ in the sense of a "Well, paint something here, glue something on top there". I read again and again how elegant and well readable D is.
Sorry, no, it isn't, or, to be more correct, it is only when seen by someone who has been exposed to (and/or possibly likes) C/C++'s cryptic (~not natural for a human) looks. Funnily enough, one thing I particularly love in D, the "scope" guards, is something I know - less well done than in D - from Eiffel, and which has been a credo of mine for a long time: forget the f*cking "try"! Either be honestly careless ("Hey, it's just an unimportant script kind of thingy") or - very reasonably - assume that pretty much everything can go wrong. "try" somehow (stupidly, I feel) implies that there is "innocent" code and "potentially dangerous" code. BS! On a modern system with zillions of SLOC written by thousands of people of varying levels of professionalism, *nothing* can be assumed to be innocent.
 In D it tends to be idiomatic to specify what you will use and 
 not why structure you expect. So instead of:

     void foo(IComparable[] data)...

 We write:

     void foo(Range)(Range data) 
 if(isComparable!(ElementType!Range))...

 Where isComparable would need to be defined and ElementType can 
 be found in std.range.

 So D's generics may be lacking, the ability to write generic 
 algorithms is still there.
As long as I can have "generics" and not merely a very smart fill-in template system - or, even better, both - I'm content enough. BTW: One of the reasons I'm discussing this (probably to an extent that many feel goes too far) is because I know myself. It is *now* (with some spare time and not yet deeply involved in actually using D) or never. Once I'm in everyday-use mode I'll do what pretty much everyone does, namely code using what is there and available; whatever I miss I will somehow work around, peeling apples with spoons ...
Aug 21 2013
next sibling parent reply "Jesse Phillips" <Jesse.K.Phillips+D gmail.com> writes:
On Wednesday, 21 August 2013 at 13:54:59 UTC, Ramon wrote:
 So D's generics may be lacking, the ability to write generic 
 algorithms is still there.
As long as I can have "generics" and not merely a very smart fill-in template system - or, even better, both - I'm content enough.
This is the statement which throws me. Templates are generic; there is this feature called "generics" which is one implementation of generic functions/classes (and which D does not have), but the concept of making things generic is perfectly achievable with templates. Since I don't know Eiffel, I don't know the exact approach to generics you're interested in, but D provides templates to handle generics. D also provides interfaces and inheritance for polymorphism, which may be what is of interest to you, but those aren't generics. I hope D does what you want well enough, but with how you talk of generics I just don't know what you mean. And I don't write this as an attack, only to try and explain my confusion. (Maybe others can correct my understanding.)
Aug 21 2013
parent reply "Ramon" <spam thanks.no> writes:
On Wednesday, 21 August 2013 at 14:38:41 UTC, Jesse Phillips 
wrote:
 On Wednesday, 21 August 2013 at 13:54:59 UTC, Ramon wrote:
 So while D's generics may be lacking, the ability to write generic algorithms is still there.
As long as I can have "generics" and not merely a very smart fill-in template system - or, even better, both - I'm content enough.
This is the statement which throws me. Templates are generic, there is this feature called "generics" which is an implementation to provide generic functions/classes (D does not have). But the concept of making things generic is perfectly achievable with templates. Since I don't know Eiffel I don't know the exact approach to generics you're interested in, but D provides Templates to handle generics. D also provides Interfaces and Inheritance for polymorphism, which may be what is of interest to you, but those aren't generics. I hope D does what you want well enough, but which how you talk of generics I just don't know what you mean. And I don't write this as an attack, only to try and explain my confusion. (maybe others can correct my understanding).
Last thing first: Yes, it seems that I can achieve a quite comfortable compromise with D.

As for generics, let me put it this way: In Eiffel generics have been an integral part of the language design from the beginning. In D ways and mechanisms are provided to achieve what quite usually is the goal of generics, namely generic algorithms, i.e. having to write the code for an algorithm just once. That might seem to be a minor difference, in particular when looking from a "Huh? I can get it done, so what's the fuss all about, eh?" perspective. Of course, there the C-and-successors-world proponents are right, this incurs a price (which templates do, too ...) and, yes, in the end, somewhere someone or something must sort the types out anyway (because of the way CPUs work).

Actually, this point shows quite nicely what I consider the *real* issue behind it. In C/C++/D it's all about performance and other technical criteria. That is one valid way to look at things, no doubt. But there are others, too. In Eiffel (and to a degree others) it's not about technical issues and details but about design and humans. It's simply a very different approach putting less weight on performance (which is not lousy, anyway) and more on reliability, proper design etc. Would I use Eiffel for an RTOS on a 16bit CPU? No way. Would I use C++ for a critical application, say a medical ventilator? No way!

Actually, D, no matter how much one might fight me for saying that, goes a considerable distance toward languages like Ada or Eiffel. The main difference is that the D people are maniacs on performance (and on staying C/C++-like enough) and implement those important mechanisms (so it seems to me) for very much different reasons than Ichbiah (Ada) or Meyer (Eiffel), and much as "extensions to C/C++/D", where Meyer et al. arrived from a very different point of interest.

Another example is data types, concretely integers. Ada offers a nice way to precisely nail down precision/storage. If I want to store days_of_month I can have an integer type holding ints between 1 and 31 (which, due to the way they implemented it, can be a PITA). Eiffel gives me something quite similar (in a more elegant way) and additionally a "dumb" INTEGER (32 or 64 bit) and then a gazillion subtypes like "INTEGER_16". That's great because in a quick and dirty script a plain integer (max size of CPU) is good enough and keeps life simple. If I need days_of_month I can very easily have that as an int type.

Which (and that is another important paradigm/philosophy issue) also touches the question "Is it all about data or about code?". A question that might seem very philosophical (with C-kind programmers tending to answer "code, of course!") but can quickly get very pragmatic and stinking ugly when you find yourself f*cked by having to open a (e.g.) Microsoft spreadsheet created with the current software version and you only have an older version ...

Again, D (and even C++) *already has understood* those problems and created e.g. generics (thus going a major step toward Eiffel and Co). They just implemented it "C style", with yet another kludge grafted on the maniacally protected C heritage and philosophy. Which, seen from another point of view, can be an advantage (e.g. to C++ programmers wanting to feel at home right away). Looking at it from a long term perspective (I think) both sides will give in somewhat.
Eiffel will be less ivory tower and "D++" or "E" (or possibly "F") will be less machine-room minded, taking in a whole lot of Eiffel (and others') stuff, yet not forgetting embedded systems and the like. Such a step would, btw, not be a first. It has already happened to a large degree, here in D, for another "world", namely functional mechanisms and features.
Aug 21 2013
parent reply "qznc" <qznc web.de> writes:
On Wednesday, 21 August 2013 at 16:21:47 UTC, Ramon wrote:
 As for generics, let me put it this way:
 In Eiffel generics have been an integral part of the language 
 design from the beginning. In D ways and mechanisms are 
 provided to achieve what quite usually is the goal of generics, 
 namely generic algorithms in way, i.e. by having to write code 
 for an algorithm just once. That might seem to be a minor 
 difference, in particular when looking from a "Huh? I can get 
 it done, so what's the fuss all about, eh?" perspective.
 Of course, there the C and successors worlds proponents are 
 right, this incurs a price (which templates do, too ...) and, 
 yes, in the end, somewhere someone or something must sort the 
 types out anyway (because of the way CPUs work).
There are basically two ways to implement generics. Type erasure (Java,Haskell) or template instantiation (C++,D). Instantiation provides better performance, but sacrifices error messages (fixable?), binary code size, and compilation modularity (template implementation must be available for instantiation). Type safety is not a problem in either approach. Longer form: http://beza1e1.tuxen.de/articles/generics.html An interesting twist would be to use type erasure for reference types and instantiation for value types. Another idea could be to use instantiation selectively as an optimization and erasure in general.
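A tiny illustration of the instantiation side in D (made-up example; each use with a new type generates a separate, specialized copy of the function):

    import std.stdio;

    T twice(T)(T x) { return x + x; }

    void main()
    {
        writeln(twice(21));    // instantiates twice!int
        writeln(twice(1.5));   // instantiates twice!double - a second, separate instance
    }

The price is exactly the one mentioned above: more binary code, and the template's source must be available at the point of instantiation.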
 Another example is data types, concretely integers. Ada offers 
 a nice way do precisely nail down precision/storage. If I want 
 to store days_of_month I can have an integer type holding ints 
 between 1 and 31 (which, due to the way they implemented it can 
 be a PITA). Eiffel gives me something quite similar (in a more 
 elegant way) and additionally a "dumb" INTEGER (32 or 64 bit) 
 and than a gazillion subtypes like "INTEGER_16". That's great 
 because in a quick and dirty script a plain integer (max size 
 of CPU) is good enough and keeps life simple. If I need 
 days_of_month I can very easily have that as int type.
In D you can use structs: struct days_of_month { int day; /* fill in operator overloading etc */ }
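A rough sketch of how such a struct could look (the names and the assert-based checking policy are made up for illustration):

    // range-checked integer in the spirit of Ada's 'range 1 .. 31' types
    struct BoundedInt(int min, int max)
    {
        private int value = min;

        this(int v) { opAssign(v); }

        void opAssign(int v)
        {
            assert(v >= min && v <= max, "value out of range");
            value = v;
        }

        int get() const { return value; }
        alias get this;   // lets it be read like a plain int
    }

    alias DayOfMonth = BoundedInt!(1, 31);

    unittest
    {
        auto d = DayOfMonth(15);
        d = 28;               // goes through the checked opAssign
        assert(d + 1 == 29);  // usable as an int via 'alias get this'
    }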
Aug 22 2013
next sibling parent "PauloPinto" <pjmlp progtools.org> writes:
On Thursday, 22 August 2013 at 07:59:56 UTC, qznc wrote:
 On Wednesday, 21 August 2013 at 16:21:47 UTC, Ramon wrote:
 As for generics, let me put it this way:
 In Eiffel generics have been an integral part of the language 
 design from the beginning. In D ways and mechanisms are 
 provided to achieve what quite usually is the goal of 
 generics, namely generic algorithms in way, i.e. by having to 
 write code for an algorithm just once. That might seem to be a 
 minor difference, in particular when looking from a "Huh? I 
 can get it done, so what's the fuss all about, eh?" 
 perspective.
 Of course, there the C and successors worlds proponents are 
 right, this incurs a price (which templates do, too ...) and, 
 yes, in the end, somewhere someone or something must sort the 
 types out anyway (because of the way CPUs work).
There are basically two ways to implement generics. Type erasure (Java,Haskell) or template instantiation (C++,D). Instantiation provides better performance, but sacrifices error messages (fixable?), binary code size, and compilation modularity (template implementation must be available for instantiation). Type safety is not a problem in either approach. Longer form: http://beza1e1.tuxen.de/articles/generics.html An interesting twist would be to use type erasure for reference types and instantiation for value types. Another idea could be to use instantiation selectively as an optimization and erasure in general.
Which is the way .NET does it. http://blogs.msdn.com/b/carlos/archive/2009/11/09/net-generics-and-code-bloat-or-its-lack-thereof.aspx
 Another example is data types, concretely integers. Ada offers 
 a nice way do precisely nail down precision/storage. If I want 
 to store days_of_month I can have an integer type holding ints 
 between 1 and 31 (which, due to the way they implemented it 
 can be a PITA). Eiffel gives me something quite similar (in a 
 more elegant way) and additionally a "dumb" INTEGER (32 or 64 
 bit) and than a gazillion subtypes like "INTEGER_16". That's 
 great because in a quick and dirty script a plain integer (max 
 size of CPU) is good enough and keeps life simple. If I need 
 days_of_month I can very easily have that as int type.
In D you can use structs: struct days_of_month { int day; /* fill in operator overloading etc */ }
Thanks for the Eiffel info.
Aug 22 2013
prev sibling next sibling parent reply "Ramon" <spam thanks.no> writes:
On Thursday, 22 August 2013 at 07:59:56 UTC, qznc wrote:
 On Wednesday, 21 August 2013 at 16:21:47 UTC, Ramon wrote:
 As for generics, let me put it this way:
 In Eiffel generics have been an integral part of the language 
 design from the beginning. In D ways and mechanisms are 
 provided to achieve what quite usually is the goal of 
 generics, namely generic algorithms in way, i.e. by having to 
 write code for an algorithm just once. That might seem to be a 
 minor difference, in particular when looking from a "Huh? I 
 can get it done, so what's the fuss all about, eh?" 
 perspective.
 Of course, there the C and successors worlds proponents are 
 right, this incurs a price (which templates do, too ...) and, 
 yes, in the end, somewhere someone or something must sort the 
 types out anyway (because of the way CPUs work).
There are basically two ways to implement generics. Type erasure (Java,Haskell) or template instantiation (C++,D). Instantiation provides better performance, but sacrifices error messages (fixable?), binary code size, and compilation modularity (template implementation must be available for instantiation). Type safety is not a problem in either approach. Longer form: http://beza1e1.tuxen.de/articles/generics.html An interesting twist would be to use type erasure for reference types and instantiation for value types. Another idea could be to use instantiation selectively as an optimization and erasure in general.
 Another example is data types, concretely integers. Ada offers 
 a nice way do precisely nail down precision/storage. If I want 
 to store days_of_month I can have an integer type holding ints 
 between 1 and 31 (which, due to the way they implemented it 
 can be a PITA). Eiffel gives me something quite similar (in a 
 more elegant way) and additionally a "dumb" INTEGER (32 or 64 
 bit) and than a gazillion subtypes like "INTEGER_16". That's 
 great because in a quick and dirty script a plain integer (max 
 size of CPU) is good enough and keeps life simple. If I need 
 days_of_month I can very easily have that as int type.
In D you can use structs: struct days_of_month { int day; /* fill in operator overloading etc */ }
Thank you.

Well, in an OO language the actual type(s) is/are known. So real genericity boils down to whether an object has the required functions or not. D has, obviously piously following the C++ way (which can be a good thing), chosen to go the template way, that is, to handle it at compile time. Other languages have chosen to do it at runtime, which is no worse or better per se but happens to be more consistent with OO.

Some here argued that, well, in the end, say, a simple int and a bank account need different data types and operations because it matters to CPUs whether they do something with a DWORD or a char[]. And, so they argued, therefore you have to pay a runtime penalty for real generics. I don't think so. Sure, one evidently pays a penalty for OO in general (as opposed to simple scalars). But it's not the genericity that costs. Last but not least, this simply isn't an either/or issue. One can perfectly well have both. And no, that doesn't necessarily bring a performance penalty with it.

Another point of view that doesn't match precisely but may help to understand it is this: True OOP is basically about "It's the *data*!" while systems programming understandably is closer to "It's the *code*!" Where the former has data "carrying the operations with it", the latter has data as something that is fed in, processed and spit out by the machinery. And that is what brings up the question "Well, but how would the CPU know what kind of data it's working on? That requires expensive extra steps". Again, for systems programming that's just fine. But the whole penalty assumption largely stems from looking at true OOP through the C/C++ model.
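To illustrate what I mean by the two flavours (a made-up sketch; the names are invented):

    import std.stdio;

    // runtime genericity: one compiled function, dispatch through an interface
    interface Printable { string render(); }

    void show(Printable p) { writeln(p.render()); }

    // compile-time genericity: one piece of source, a new instance per concrete type
    void showT(T)(T p) { writeln(p.render()); }

    class Greeting : Printable
    {
        string render() { return "hello"; }
    }

    void main()
    {
        auto g = new Greeting;
        show(g);    // virtual call through the interface
        showT(g);   // statically resolved instantiation showT!Greeting
    }

Both express "works with anything that has render()"; only the moment at which the types get sorted out differs.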
Aug 22 2013
parent "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 22 August 2013 at 12:37:50 UTC, Ramon wrote:
 Well in an OO language the actual type(s) is/are known. So real 
 genericity boils done to whether an object has the required 
 functions or not.
Polymorphism says no, you don't know the actual type, and this is the whole point of OOP: being able to interact with objects of various types as long as they provide the needed interface to interact with.
 D has, obviously piously following the C++ way (which can be a 
 good thing), chosen to go the template way, that is, to handle 
 it compile time. Other languages have chosen to do it runtime 
 which is no worse or better per se but happens to be more 
 consistent with OO.
D recognizes that OO isn't the only paradigm on earth. And generics won't work with non-OO code.
 Some here argued that, well, in the end, say, a simple int and 
 a bank account, need different data types and operations 
 because it matters to CPUs whether it does sth. with a DWORD or 
 a char[]. And, so they argued, therefore you have to pay a 
 runtime penalty for real generics. I don't think so.
 Sure, one evidently pays a penalty for OO in general (as 
 opposed to simple scalars). But it's not the genericity that 
 costs.
Indirections, opaque calls, and heap allocation are probably the 3 first performance killers on modern architecture. OO embrace the 3 of them.
 True OOP is basically about "It's the *data*!" while systems 
 programming understandably is closer to "it's the *code*!"
 Where the former has data "carrying the operations with them" 
 the latter has data as something that is fed in and processed 
 and spit out by the machinery. And it's that what brings up the 
 question "Well, but how would the CPU know what kind of data 
 it's working on? That requires expensive extra steps".
No, true OOP is about behavioral abstraction. See Liskov's substitution principle. Data is merely a tool, and OOP promotes its encapsulation. In other words, data is an implementation detail in OOP.
 Again, for systems programming that's just fine. But the whole 
 penalty assumption largely stems from looking at true OOP 
 through the C/C++ model.
The whole penalty assumption comes from how actual compilers and CPUs work. If you have new ideas to revolutionize both and change the deal, great, but I highly doubt it.
Aug 22 2013
prev sibling parent reply "Brian Rogoff" <brogoff gmail.com> writes:
On Thursday, 22 August 2013 at 07:59:56 UTC, qznc wrote:
 There are basically two ways to implement generics. Type 
 erasure (Java,Haskell) or template instantiation (C++,D). 
 Instantiation provides better performance, but sacrifices error 
 messages (fixable?), binary code size, and compilation 
 modularity (template implementation must be available for 
 instantiation). Type safety is not a problem in either approach.
See this brief discussion from Greg Morrisett on the topic, with a finer subdivision of approaches http://www.eecs.harvard.edu/~greg/cs256sp2005/lec15.txt that confirms your bad news that monomorphization (C++/D templates) and separate compilation won't play well together. Nor do monomorphization and some advanced type system features work together, but that's less of a worry for D. That said, I like the D approach of putting a lot of power in the macro-like template system. I worry more about the reliance on GC in a systems programming language, as historically that's been a losing proposition. -- Brian
Aug 22 2013
parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 22 August 2013 at 14:18:09 UTC, Brian Rogoff wrote:
 See this brief discussion from Greg Morrisett on the topic, 
 with a finer subdivision of approaches

 http://www.eecs.harvard.edu/~greg/cs256sp2005/lec15.txt

 that confirms your bad news that monomorphization (C++/D 
 templates) and separate compilation won't play well together. 
 Nor do monomorphization and some advanced type system features 
 work together, but that's less of a worry for D.
Well, in that paper they make a bit too strong a statement - such a model implies certain limitations for separate compilation (either explicit instantiation or having access to sources) but does not destroy it completely. No silver bullet here, every approach has its own pros and cons. As has been discussed recently, the ancient object file / linker tool stack harms it much more when it comes to practice.
Aug 22 2013
next sibling parent reply "Ramon" <spam thanks.no> writes:
 http://www.eecs.harvard.edu/~greg/cs256sp2005/lec15.txt
I have quickly looked over that paper and find it quite worthless for a couple of reasons (which I will not elaborate on, except one: all these scientific elaborations are nice and all, but we have real problems here in the real world. Frankly, someone having worked on "Cyclone" may be bright and all, but he definitely hasn't got some major points).

While Eiffel and D look (and in quite some respects are) quite different, both have actually recognized some important real world problems and have addressed them in not so different conceptual ways. One of the reasons D is considerably safer than C is strikingly simple and one doesn't need any scientific research to spot it: D offers (a quite sane implementation of) strings and resizeable arrays. Why is this related to safety? Because zillions of bugs have been created by programmers desperately using, bending and abusing what was available in C (and to a degree in C++). Give them reasonable strings and some reasonable way to handle dynamic arrays and you have prevented gazillions of bugs. Simple as that.

Another pragmatic look at reality underlines my point: In today's world even major software projects are being worked on by people with an almost brutally large degree of variation in their skills. Even worse, each of them (at least sometimes) has to work with code written by others with a (sometimes very) different level of skills. D has made a very major contribution to safety alone by the fact that it allows less skilled people to make fewer errors.

And it has something else that might not seem that big a thing but that, however, started an (imo) very important path: @safe, @trusted and @system. In C/C++ pretty much everybody can - and does - play with potentially dangerous mechanisms (like pointers) pretty much everywhere. Those simple 3 "code classes" @safe, @trusted and @system can help a great deal there. One (OK, not very creative) example that comes to mind is to have less experienced programmers work in "@safe mode" only, which anyway is good enough for pretty much everything the average app needs, and to limit "@system mode" to seasoned programmers. Furthermore this D mechanism offers something else of high value by introducing a simple and important question: "How much power do I need?".

I might sound streetwise rather than properly educated, I know, but I have experienced again and again that what really counts is results. And I'm sure that D, if understood and used properly, contributes quite a lot to a very important result: fewer bugs and more reliable code.
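Just to spell out those three "code classes" with a made-up snippet (function names invented; the @trusted one assumes a non-empty array):

    // only memory-safe operations allowed; the compiler checks this
    int sum(int[] xs) @safe
    {
        int s = 0;
        foreach (x; xs) s += x;
        return s;
    }

    // the author vouches for this by hand; callable from @safe code
    int firstViaPointer(int[] xs) @trusted
    {
        return *xs.ptr;          // raw pointer use would not pass @safe
    }

    // anything goes; @safe code may NOT call this
    void poke(int* p, int v) @system
    {
        *(p + 1) = v;            // pointer arithmetic
    }

    void app() @safe
    {
        int[] xs = [1, 2, 3];
        auto a = sum(xs);             // @safe calling @safe: fine
        auto b = firstViaPointer(xs); // @safe calling @trusted: fine
        // poke(xs.ptr, 42);          // would not compile: @safe cannot call @system
    }

A less experienced programmer confined to @safe simply cannot write the third function, and cannot call it either.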
Aug 22 2013
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 22 August 2013 at 15:42:15 UTC, Ramon wrote:
 One (OK, not very creative) example that comes to mind is to 
 have less experienced programmers to work in "safe mode" only, 
 which anyway is good enough for pretty everything the average 
 app needs, and to limit "system mode" to seasoned programmers.
If I was managing a D based team, I would definitely make use of @safe/@system for code reviews. Any commit that touches @system code* would have to go through an extra stage or something to that effect.
Aug 22 2013
next sibling parent "Ramon" <spam thanks.no> writes:
On Thursday, 22 August 2013 at 15:50:50 UTC, John Colvin wrote:
 On Thursday, 22 August 2013 at 15:42:15 UTC, Ramon wrote:
 One (OK, not very creative) example that comes to mind is to 
 have less experienced programmers to work in "safe mode" only, 
 which anyway is good enough for pretty everything the average 
 app needs, and to limit "system mode" to seasoned programmers.
If I was managing a D based team, I would definitely make use of safe/system for code reviews. Any commit that touches system code* would have to go through an extra stage or something to that effect.
Yep. Considering that pretty much every non-trivial piece of software (small utilities aside) is layered anyway, it even comes quite naturally. But I (often involved in systems stuff) will also use it as a private warning system: trying to get done whatever can get done using @safe. Quite probably ("probably" because I lack experience with D) it will even reflect back on the design, along the lines of tinkering "does this *really* need to be here?".
Aug 22 2013
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Aug 22, 2013 at 05:50:49PM +0200, John Colvin wrote:
 On Thursday, 22 August 2013 at 15:42:15 UTC, Ramon wrote:
One (OK, not very creative) example that comes to mind is to have
less experienced programmers to work in "safe mode" only, which
anyway is good enough for pretty everything the average app needs,
and to limit "system mode" to seasoned programmers.
If I was managing a D based team, I would definitely make use of safe/system for code reviews. Any commit that touches system code* would have to go through an extra stage or something to that effect.
Are you sure about that?

    import std.stdio;
    void main() @safe {
        writeln("abc");
    }

DMD says:

    /tmp/test.d(3): Error: @safe function 'D main' cannot call @system function 'std.stdio.writeln!(string).writeln'

SafeD is a nice concept, I agree, but we have a ways to go before it's usable.

T

-- 
LINUX = Lousy Interface for Nefarious Unix Xenophobes.
Aug 22 2013
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 22 August 2013 at 16:46:46 UTC, H. S. Teoh wrote:
 On Thu, Aug 22, 2013 at 05:50:49PM +0200, John Colvin wrote:
 On Thursday, 22 August 2013 at 15:42:15 UTC, Ramon wrote:
One (OK, not very creative) example that comes to mind is to 
have
less experienced programmers to work in "safe mode" only, 
which
anyway is good enough for pretty everything the average app 
needs,
and to limit "system mode" to seasoned programmers.
If I was managing a D based team, I would definitely make use of safe/system for code reviews. Any commit that touches system code* would have to go through an extra stage or something to that effect.
Are you sure about that?

    import std.stdio;
    void main() @safe {
        writeln("abc");
    }

DMD says:

    /tmp/test.d(3): Error: @safe function 'D main' cannot call @system function 'std.stdio.writeln!(string).writeln'

SafeD is a nice concept, I agree, but we have a ways to go before it's usable.

T
Fair point. Why is it that writeln can't be @trusted?
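In the meantime one can presumably paper over it with a thin @trusted wrapper of one's own (a made-up sketch; the wrapper name is invented, and @trusted of course means *we* are vouching for the call instead of the compiler):

    import std.stdio;

    void trustedWriteln(string s) @trusted
    {
        writeln(s);
    }

    void main() @safe
    {
        trustedWriteln("abc");
    }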
Aug 22 2013
parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 22 August 2013 at 17:16:13 UTC, John Colvin wrote:
 On Thursday, 22 August 2013 at 16:46:46 UTC, H. S. Teoh wrote:
 On Thu, Aug 22, 2013 at 05:50:49PM +0200, John Colvin wrote:
 On Thursday, 22 August 2013 at 15:42:15 UTC, Ramon wrote:
One (OK, not very creative) example that comes to mind is to 
have
less experienced programmers to work in "safe mode" only, 
which
anyway is good enough for pretty everything the average app 
needs,
and to limit "system mode" to seasoned programmers.
If I was managing a D based team, I would definitely make use of safe/system for code reviews. Any commit that touches system code* would have to go through an extra stage or something to that effect.
Are you sure about that?

    import std.stdio;
    void main() @safe {
        writeln("abc");
    }

DMD says:

    /tmp/test.d(3): Error: @safe function 'D main' cannot call @system function 'std.stdio.writeln!(string).writeln'

SafeD is a nice concept, I agree, but we have a ways to go before it's usable.

T
Fair point. Why is it that writeln can't be @trusted?
In the case of a string, that is.
Aug 22 2013
prev sibling parent "Brian Rogoff" <brogoff gmail.com> writes:
On Thursday, 22 August 2013 at 14:37:21 UTC, Dicebot wrote:
 On Thursday, 22 August 2013 at 14:18:09 UTC, Brian Rogoff wrote:
 See this brief discussion from Greg Morrisett on the topic, 
 with a finer subdivision of approaches

 http://www.eecs.harvard.edu/~greg/cs256sp2005/lec15.txt

 that confirms your bad news that monomorphization (C++/D 
 templates) and separate compilation won't play well together. 
 Nor do monomorphization and some advanced type system features 
 work together, but that's less of a worry for D.
Well, in that paper they make a bit too hard statement - such model implies certain limitations for separate compilations (either explicit instantiation or having access to sources) but does not destroy completely. No silver bullet here, every approach has its own pros and cons.
Yeah, I agree, there are probably some tricks to make some things better, but overall it's a good description of the problem. Like you say, each approach has tradeoffs. For a systems programming language, I think monomorphization is best. -- Brian
Aug 22 2013
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-08-21 15:54, Ramon wrote:

 Reading through this forum, however, "performance" and not doing
 anything that might negatively influence performce is what Mother
 Mary is to Christians, very very holy.
In many areas D chooses safety or convenience before performance, compared with say C/C++. Examples:

* All variables are default initialized
* Garbage collector
* All instance methods are virtual by default

-- 
/Jacob Carlborg
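A quick (made-up) illustration of the first point - nothing is left holding garbage:

    void main()
    {
        int i;       // default initialized to 0
        double d;    // default initialized to NaN, so forgetting to set it is noticed
        int[] a;     // empty/null slice, not a dangling pointer

        assert(i == 0);
        assert(d != d);         // NaN compares unequal to itself
        assert(a.length == 0);
    }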
Aug 21 2013
parent reply "Ramon" <spam thanks.no> writes:
On Wednesday, 21 August 2013 at 16:35:17 UTC, Jacob Carlborg 
wrote:
 On 2013-08-21 15:54, Ramon wrote:

 Reading through this forum, however, "performance" and not 
 doing
 anything that might negatively influence performce is what 
 Mother
 Mary is to Christians, very very holy.
In many areas D chooses safety or convenience before performance, compared with say C/C++. Examples:

* All variables are default initialized
* Garbage collector
* All instance methods are virtual by default
You are right and I appreciate that progress (IMO). It does not exclude, however, a strong, sometimes possibly unbalanced, focus on performance, nor does it make the many cases vanish where performance considerations are brought up as a major or even the priority factor for or against diverse considerations, ideas or concepts.

I am *not* against keeping an eye on performance, by no means. Looking at Moore's law, however, and at the kind of computing power available nowadays even in smartphones, not to mention 8 and 12 core PCs, I feel that the importance of performance is way overestimated (possibly following a formerly justified tradition).

We need not look further than our very desk. Basically all major OSes as well as all major software is riddled with bugs, problems, and a considerable lack of security. And basically all major OSes and to a large degree all software is written in - ? - languages of the C family. Coincidence? Although it must be noted in fairness that D would indeed very considerably improve that sad situation. D has - and should be appreciated for that - made major steps towards reliability and a world with fewer software bugs. I can't prove that, I don't have statistics for it, but I'm very confident of D allowing more secure and reliable software to be written.
Aug 21 2013
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 21 August 2013 at 16:50:38 UTC, Ramon wrote:
 I am *not* against keeping an eye on performance, by no means. 
 Looking at Moore's law, however, and at the kind of computing 
 power available nowadays even in smartphones, not to talk about 
 8 and 12 core PCs, I feel that the importance of performance is 
 way overestimated (possibly following a formertimes justified 
 tradition).
Moore's law is kaput, finished, niet; we don't know how to use the extra transistors.
 We need not look further than on our very desk. Basically all 
 major OSes as well as all major software is riddled with bugs, 
 problems, and considerable lack of security. - And - basically 
 all major OSes and to a large degree software is written in - ? 
 - languages of the C family. Coincidence?
PHP and many "safe" languages, and still they are crippled with bugs. Some codebases are truly scary. Look at gdb's source code or gtk's. You want no bugs? Go for Haskell. But you'll get no convenience or performance. The good thing is that if it does compile, you are pretty sure that it does the right thing.
Aug 21 2013
next sibling parent "Paulo Pinto" <pjmp progtools.org> writes:
On Wednesday, 21 August 2013 at 17:17:52 UTC, deadalnix wrote:
 On Wednesday, 21 August 2013 at 16:50:38 UTC, Ramon wrote:
 I am *not* against keeping an eye on performance, by no means. 
 Looking at Moore's law, however, and at the kind of computing 
 power available nowadays even in smartphones, not to talk 
 about 8 and 12 core PCs, I feel that the importance of 
 performance is way overestimated (possibly following a 
 formertimes justified tradition).
Moor's law is kaput, finish, niet, we don't know how to use the extra transistor.
 We need not look further than on our very desk. Basically all 
 major OSes as well as all major software is riddled with bugs, 
 problems, and considerable lack of security. - And - basically 
 all major OSes and to a large degree software is written in - 
 ? - languages of the C family. Coincidence?
PHP and many "safe" languages, and still are crippled with bugs. Some codebase are trully scary. Look at gdb's source code or gtk's. You want no bugs ? Go for Haskell. But you'll get no convenience or performance. The good thing if that if it does compile, you are pretty sure that it does the right thing.
While I agree with you, if C hadn't become mainstream the spectrum of bugs at the OS level would be much lower. Given the amount of errors caused by pointer arithmetic, buffer overflows and string manipulations. All of which are easily avoidable in systems programming languages from the early 80's. -- Paulo
Aug 21 2013
prev sibling next sibling parent reply "Ramon" <spam thanks.no> writes:
On Wednesday, 21 August 2013 at 17:17:52 UTC, deadalnix wrote:
 On Wednesday, 21 August 2013 at 16:50:38 UTC, Ramon wrote:
 I am *not* against keeping an eye on performance, by no means. 
 Looking at Moore's law, however, and at the kind of computing 
 power available nowadays even in smartphones, not to talk 
 about 8 and 12 core PCs, I feel that the importance of 
 performance is way overestimated (possibly following a 
 formertimes justified tradition).
Moor's law is kaput, finish, niet, we don't know how to use the extra transistor.
Even if that were true, we have gone quite some distance. Not even talking about Sparc T4 or 8-core X86, my smartphone is more powerful than what I had as computer 10 years or so ago.

 PHP and many "safe" languages, and still are crippled with bugs.
"safe" languages?
 Some codebase are trully scary. Look at gdb's source code or 
 gtk's.
Written in C/C++ ...
 You want no bugs ? Go for Haskell. But you'll get no 
 convenience or performance. The good thing if that if it does 
 compile, you are pretty sure that it does the right thing.
Why should I? Isn't that what D promises, too (and probably is right)? On another perspective: Consider this question "Would you be willing to have all your software (incl. OS) running 10% or even 20% slower but without bugs, leaks, (unintended) backdoors and the like?" My guess: Upwards of 80% would happily chime "YES!".
Aug 21 2013
next sibling parent reply "Tyler Jameson Little" <beatgammit gmail.com> writes:
On Wednesday, 21 August 2013 at 17:45:29 UTC, Ramon wrote:
 On Wednesday, 21 August 2013 at 17:17:52 UTC, deadalnix wrote:
 You want no bugs ? Go for Haskell. But you'll get no 
 convenience or performance. The good thing if that if it does 
 compile, you are pretty sure that it does the right thing.
Why should I? Isn't that what D promises, too (and probably is right)? On another perspective: Consider this question "Would you be willing to have all your software (incl. OS) running 10% or even 20% slower but without bugs, leaks, (unintended) backdoors and the like?" My guess: Upwards of 80% would happily chime "YES!".
Have you looked at Rust? It promises to solve a few of the memory-related problems mentioned:

- no null pointer exceptions
- deterministic free (with owned pointers)
- optional garbage collection

It also has generics, which are runtime generics if I'm not mistaken. It doesn't have inheritance in the traditional OO sense, so you may not like that. I really like that it's LLVM compiled, so performance and cross-compiling should be pretty much solved problems.

There are still things that keep me here with D though:

- templates instead of generics (little reason to take a performance hit)
- CTFE (see the small sketch after this list)
- inheritance (though I hardly use classes, they're handy sometimes)
- community
- array operations (int[] a; int[]b; auto c = a * b;)
  - I don't think these are automagically SIMD'd, but there's always hope =D
- similar to C++, so it's easy to find competent developers
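The CTFE point, as a tiny made-up example - the same ordinary function gets evaluated by the compiler whenever the result is needed at compile time:

    int factorial(int n) { return n <= 1 ? 1 : n * factorial(n - 1); }

    enum fact10 = factorial(10);      // forced compile-time evaluation (CTFE)
    static assert(fact10 == 3_628_800);

    void main()
    {
        assert(factorial(10) == fact10);  // and the very same function at run time
    }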
Aug 21 2013
next sibling parent "Dicebot" <public dicebot.lv> writes:
On Thursday, 22 August 2013 at 02:06:13 UTC, Tyler Jameson Little 
wrote:
 It also has generics, which are runtime generics if I'm not 
 mistaken.
Both.
Aug 21 2013
prev sibling next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Tyler Jameson Little:

 It also has generics, which are runtime generics if I'm not 
 mistaken.
 - templates instead of generics (little reason to take a 
 performance hit)
As far as I know Rust uses monomorphization just like C++ and D for generics. The difference in generics between D and Rust is that Rust has them strongly typed with type classes (this means inside a function templated on an argument, you can't do on that argument operations that are not specified in its static type class). But unlike the type classes of Haskell, the Rust ones are designed to have no run-time hit (but this makes them currently less powerful. Some persons are trying to improve this in Rust). Perhaps the original poster of this thread is looking for this. Bye, bearophile
Aug 21 2013
prev sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 22 August 2013 at 02:06:13 UTC, Tyler Jameson Little 
wrote:
 - array operations (int[] a; int[]b; auto c = a * b;)
   - I don't think these are automagically SIMD'd, but there's 
 always hope =D
That isn't allowed. The memory for c must be pre-allocated, and the expression then becomes c[] = a[] * b[]; Is it SIMD'd? It depends. There is a whole load of hand-written assembler for simple-ish expressions on builtin types, on x86. x86_64 is only supported with 32bit integer types because I haven't finished writing the rest yet... However, I'm not inclined to do so at the moment as we need a complete overhaul of that system anyway as it's currently a monster*. It needs to be re-implemented as a template instantiated by the compiler, using core.simd. Unfortunately it's not a priority for anyone right now AFAIK. * hand-written asm loops. If fully fleshed out there would be: ((aligned + unaligned + legacy mmx) * (x86 + x64) + fallback loop) * number of supported expressions * number of different types of them. Then there's unrolling considerations. See druntime/src/rt/arrayInt.d
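For reference, the pre-allocated form looks like this (trivial made-up example):

    void main()
    {
        int[] a = [1, 2, 3];
        int[] b = [2, 2, 2];
        auto c = new int[](a.length);  // destination allocated up front
        c[] = a[] * b[];               // element-wise multiply into c
        assert(c == [2, 4, 6]);
    }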
Aug 22 2013
parent reply "Tyler Jameson Little" <beatgammit gmail.com> writes:
On Thursday, 22 August 2013 at 10:34:58 UTC, John Colvin wrote:
 On Thursday, 22 August 2013 at 02:06:13 UTC, Tyler Jameson 
 Little wrote:
 - array operations (int[] a; int[]b; auto c = a * b;)
  - I don't think these are automagically SIMD'd, but there's 
 always hope =D
That isn't allowed. The memory for c must be pre-allocated, and the expression then becomes c[] = a[] * b[];
Oops, that was what I meant.
 Is it SIMD'd?

 It depends. There is a whole load of hand-written assembler for 
 simple-ish expressions on builtin types, on x86. x86_64 is only 
 supported with 32bit integer types because I haven't finished 
 writing the rest yet...

 However, I'm not inclined to do so at the moment as we need a 
 complete overhaul of that system anyway as it's currently a 
 monster*.  It needs to be re-implemented as a template 
 instantiated by the compiler, using core.simd. Unfortunately 
 it's not a priority for anyone right now AFAIK.
That's fine. I was under the impression that it didn't SIMD at all, and that SIMD only works if explicitly stated.

I assume this is something that can be done at runtime:

    int[] a = [1, 2, 3];
    int[] b = [2, 2, 2];
    auto c = a[] * b[]; // dynamically allocates on the stack; computes w/SIMD
    writeln(c); // prints [2, 4, 6]

I haven't yet needed this, but it would be really nice...

btw, it seems D does not have dynamic stack allocation. I know C99 does, so I know this is technically possible. Is this something we could get? If so, I'll start a thread about it.
 *
 hand-written asm loops. If fully fleshed out there would be:
   ((aligned + unaligned + legacy mmx) * (x86 + x64) + fallback 
 loop)
   * number of supported expressions * number of different types
 of them. Then there's unrolling considerations. See 
 druntime/src/rt/arrayInt.d
Aug 22 2013
parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Friday, 23 August 2013 at 05:28:22 UTC, Tyler Jameson Little 
wrote:
 I assume this is something that can be done at runtime:

     int[] a = [1, 2, 3];
     int[] b = [2, 2, 2];
     auto c = a[] * b[]; // dynamically allocates on the stack; 
 computes w/SIMD
     writeln(c); // prints [2, 4, 6]
This would be pretty trivial to implement but the question is whether it's a good idea: Heap allocation is out of the question as it's much too slow to be hidden behind what are supposed to be fast vector operations. Explicit runtime stack allocation could work, but it's not something we do much of in D. I know Maxime (https://github.com/maximecb/Higgs) uses alloca a bit, but if I remember correctly it wasn't all smooth going.

What definitely should work, but currently doesn't, is this:

    int[3] a = [1, 2, 3];
    int[3] b = [2, 2, 2];
    auto c = a[] * b[]; // statically allocated on stack

as it can be totally taken care of statically by the type system. Actually I think it's just a straight up compiler bug, because this *does* work:

    int[3] a = [1, 2, 3];
    int[3] b = [2, 2, 2];
    int[3] c = a[] * b[]; // statically allocated on stack

It looks like it's rejecting the array op before it's worked out what to resolve auto to.
Aug 23 2013
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 21 August 2013 at 17:45:29 UTC, Ramon wrote:
 Moor's law is kaput, finish, niet, we don't know how to use 
 the extra transistor.
Even if that were true, we have gone quite some distance. Not even talking about Sparc T4 or 8-core X86, my smartphone is more powerful than what I had as computer 10 years or so ago.
Just read this: ftp://ftp.cs.utexas.edu/pub/dburger/papers/ISCA11.pdf and come back informed.

 PHP and many "safe" languages, and still are crippled with 
 bugs.
"safe" languages?
They are dramatically superior to C in terms of safety.
 Some codebase are trully scary. Look at gdb's source code or 
 gtk's.
Written in C/C++ ...
Well, look at phpBB's source code then. Horrible codebases aren't language-specific.
 You want no bugs ? Go for Haskell. But you'll get no 
 convenience or performance. The good thing if that if it does 
 compile, you are pretty sure that it does the right thing.
Why should I? Isn't that what D promises, too (and probably is right)?
D promises a pragmatic balance between safety, performance, ease of use, productivity, etc . . .
 On another perspective: Consider this question "Would you be 
 willing to have all your software (incl. OS) running 10% or 
 even 20% slower but without bugs, leaks, (unintended) backdoors 
 and the like?"

 My guess: Upwards of 80% would happily chime "YES!".
Would you accept it if it means a 3x slowdown and no real time capabilities (no video games for instance) ?
Aug 21 2013
next sibling parent "PauloPinto" <pjmlp progtools.org> writes:
On Thursday, 22 August 2013 at 05:22:17 UTC, deadalnix wrote:
 On Wednesday, 21 August 2013 at 17:45:29 UTC, Ramon wrote:
 Moor's law is kaput, finish, niet, we don't know how to use 
 the extra transistor.
Even if that were true, we have gone quite some distance. Not even talking about Sparc T4 or 8-core X86, my smartphone is more powerful than what I had as computer 10 years or so ago.
Just read this this : ftp://ftp.cs.utexas.edu/pub/dburger/papers/ISCA11.pdf and come back informed.

 PHP and many "safe" languages, and still are crippled with 
 bugs.
"safe" languages?
They are dramatically superior to C in term of safety.
Like Modula-2, Pascal dialects (mainly Turbo/Apple Pascal) and Ada are, but lost their place in developer's hearts. Now with D, Rust and even Go we can have another go at making system programming a better world. -- Paulo
Aug 21 2013
prev sibling next sibling parent "Ramon" <spam thanks.no> writes:
On Thursday, 22 August 2013 at 05:22:17 UTC, deadalnix wrote:
 Just read this this : 
 ftp://ftp.cs.utexas.edu/pub/dburger/papers/ISCA11.pdf and come 
 back informed.
Well, I can give you a link to some paper that says that the world will break down and stop next Tuesday. Interested?

 PHP and many "safe" languages, and still are crippled with 
 bugs.
"safe" languages?
They are dramatically superior to C in term of safety.
I know "bridges" in Siberia that are vastly superior to bridges in the Andes. Frankly, I'd prefer to use a european bridge. And one *can* be in the C/C++ family and have a vastly safer system. Look at D.
 Some codebase are trully scary. Look at gdb's source code or 
 gtk's.
Written in C/C++ ...
Well look at phpBB's source code then. Horrible codebase isn't language specific.
So? Is this a "who knows most programs with lousy coding?" contest? All I see there is that programmers, in particular hobby hackers will spot - and use - any chance to wildly shoot around unless they are mildly (or less mildly) guided by a sound and safe system. And I see (and confess for myself) that even seasoned programmers can very much profit from a system that makes it easier to do the right thing and harder to do the wrong thing.
 You want no bugs ? Go for Haskell. But you'll get no 
 convenience or performance. The good thing if that if it does 
 compile, you are pretty sure that it does the right thing.
Why should I? Isn't that what D promises, too (and probably is right)?
D promise a pragmatic balance between safety, performance, ease of use, productivity, etc . . .
Well, being a systems programming language D is condemned to keep quite some doors open. It seems (as far as I can tell now), however, to have done an excellent job in terms of safety (give or take some minor sins like '=' as assignment).

One might put Java against D. But frankly, I do not consider Java's "subdue them with perverse bureaucracy, hehe" approach acceptable (and it creates a whole set of problems, too).

Frankly, if I had to work on a highly safety-critical and reliability-demanding project (say in the medical area) I would have a hard time spotting even 5 languages that I would consider. Ada comes to mind (but I don't like it) and Eiffel, which is great but not that great pragmatically. I'm afraid I'd end up where I ended up in the first place: Eiffel vs. D. I'm probably not counted as a happy D protagonist around here but I'd happily state that D is way ahead of 99% of the known languages. And that expressly includes safety.
 On another perspective: Consider this question "Would you be 
 willing to have all your software (incl. OS) running 10% or 
 even 20% slower but without bugs, leaks, (unintended) 
 backdoors and the like?"

 My guess: Upwards of 80% would happily chime "YES!".
Would you accept it if it means a 3x slowdown and no real time capabilities (no video games for instance) ?
I refuse to answer that because it's way out of reality.
Aug 22 2013
prev sibling parent "PauloPinto" <pjmlp progtools.org> writes:
On Thursday, 22 August 2013 at 05:22:17 UTC, deadalnix wrote:
 On Wednesday, 21 August 2013 at 17:45:29 UTC, Ramon wrote:
 On another perspective: Consider this question "Would you be 
 willing to have all your software (incl. OS) running 10% or 
 even 20% slower but without bugs, leaks, (unintended) 
 backdoors and the like?"

 My guess: Upwards of 80% would happily chime "YES!".
Would you accept it if it means a 3x slowdown and no real time capabilities (no video games for instance) ?
I would, because my experience with Native Oberon and AOS (Blue Bottle) taught me that those real-time capabilities are possible in a desktop OS written in a GC-enabled systems programming language. As an example, BlueBottle has a video player; just the decoder has some snippets written in Assembly. http://www.ocp.inf.ethz.ch/wiki/Documentation/WindowManager?action=download&upname=AosScreenshot1.jpg -- Paulo
Aug 22 2013
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 08/21/2013 07:17 PM, deadalnix wrote:
 You want no bugs ? Go for Haskell.
If you want no bugs, go for formal correctness proof.
 But you'll get no convenience
Yes you do. A lot.
 or performance.
Let's say "easily predictable performance".
 The good thing if that if it does compile, you are pretty
 sure that it does the right thing.
Aug 21 2013
prev sibling next sibling parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Wed, 21 Aug 2013 18:50:35 +0200
"Ramon" <spam thanks.no> wrote:
 
 I am *not* against keeping an eye on performance, by no means. 
 Looking at Moore's law, however, and at the kind of computing 
 power available nowadays even in smartphones, not to talk about 8 
 and 12 core PCs, I feel that the importance of performance is way 
 overestimated (possibly following a formertimes justified 
 tradition).
 
Even if we assume Moore's law is as alive and well as ever, a related note is that software tends to expand to fill the available computational power. When I can get slowdown in a text-entry box on a 64-bit multi-core, I know that hardware and Moore's law, practically speaking, have very little effect on real performance. At this point, it's code that affects performance far more than anything else. When we hail the great performance of modern web-as-a-platform by the fact that it allows an i7 or some such to run Quake as well as a Pentium 1 or 2 did, then we know Moore's law effectively counts for squat - performance is no longer about hardware, it's about not writing inefficient software. Now I'm certainly not saying that we should try to wring every last drop of performance out of every place where it doesn't even matter (like C++ tends to do). But software developers' belief in Moore's law has caused many of them to inadvertently cancel out, or even reverse, the hardware speedups with code inefficiencies (which are *easily* compoundable, and can and *do* exceed the 3x slowdown you claimed in another post was unrealistic) - and, as JS-heavy web apps prove, they haven't even gotten considerably more reliable as a result (Not that JS is a good example of a reliability-oriented language - but a lot of people certainly seem to think it is).
Aug 22 2013
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Aug 22, 2013 at 03:28:34PM -0400, Nick Sabalausky wrote:
 On Wed, 21 Aug 2013 18:50:35 +0200
 "Ramon" <spam thanks.no> wrote:
 
 I am *not* against keeping an eye on performance, by no means. 
 Looking at Moore's law, however, and at the kind of computing 
 power available nowadays even in smartphones, not to talk about 8 
 and 12 core PCs, I feel that the importance of performance is way 
 overestimated (possibly following a formertimes justified 
 tradition).
 
Even if we assume Moore's law is as alive and well as ever, a related note is that software tends to expand to fill the available computational power. When I can get slowdown in a text-entry box on a 64-bit multi-core, I know that hardware and Moore's law, practically speaking, have very little effect on real performance. At this point, it's code that affects performance far more than anything else. When we hail the great performance of modern web-as-a-platform by the fact that it allows an i7 or some such to run Quake as well as a Pentium 1 or 2 did, then we know Moore's law effectively counts for squat - performance is no longer about hardware, it's about not writing inefficient software.
I've often heard the argument that inefficiencies in code is OK, because you can just "ask the customer to upgrade to better hardware", and "nobody runs a 386 anymore". Which, from a business POV, is a profitable outlook -- if you're the one producing the hardware, inefficient software is incentive for the customer to pay you more money to buy faster hardware to run the software. On the contrary, if your software runs *too* well, then customers have no motivation to buy new hardware. This sometimes goes to ludicrous extremes, where an O(n^2) algorithm is justified because "the customer can just upgrade to better hardware", or "next year's CPU will be able to handle this no problem". Until they realize that when n is large (e.g., the customer says "oh I'm running your software with about n=8000), doubling the CPU speed every year just ain't gonna cut it -- you'd be waiting a long many years before your software becomes usable again.
 Now I'm certainly not saying that we should try to wring every last
 drop of performance out of every place where it doesn't even matter
 (like C++ tends to do). But software developers' belief in Moore's law
 has caused many of them to inadvertently cancel out, or even reverse,
 the hardware speedups with code inefficiencies (which are *easily*
 compoundable, and can and *do* exceed the 3x slowdown you claimed in
 another post was unrealistic) - and, as JS-heavy web apps prove, they
 haven't even gotten considerably more reliable as a result (Not that
 JS is a good example of a reliability-oriented language - but a lot of
 people certainly seem to think it is).
Heh. JS? reliable? in the same sentence? Heh. On the flip side, though, it's true that the performance-conscious crowd among programmers have a tendency to premature optimization, producing unmaintainable code in the process. I used to be one of them, so I know. :) A profiler is absolutely essential to identify where the real bottlenecks are. But once identified, sometimes there's no way to make it better except by going low-level and writing it in a systems programming language. Like D. ;-) And sometimes, there *is* no single bottleneck that you can address; you just need the code to be closer to hardware *in general* in order to bridge that last 10% performance gap to reach your target. All those convenient little indirections and virtual method lookups do add up. T -- It is widely believed that reinventing the wheel is a waste of time; but I disagree: without wheel reinventers, we would be still be stuck with wooden horse-cart wheels.
Aug 22 2013
prev sibling next sibling parent reply "Ramon" <spam thanks.no> writes:
On Thursday, 22 August 2013 at 19:28:42 UTC, Nick Sabalausky 
wrote:
 On Wed, 21 Aug 2013 18:50:35 +0200
 "Ramon" <spam thanks.no> wrote:
 
 I am *not* against keeping an eye on performance, by no means. 
 Looking at Moore's law, however, and at the kind of computing 
 power available nowadays even in smartphones, not to talk 
 about 8 and 12 core PCs, I feel that the importance of 
 performance is way overestimated (possibly following a 
 formertimes justified tradition).
 
Even if we assume Moore's law is as alive and well as ever, a related note is that software tends to expand to fill the available computational power. When I can get slowdown in a text-entry box on a 64-bit multi-core, I know that hardware and Moore's law, practically speaking, have very little effect on real performance. At this point, it's code that affects performance far more than anything else. When we hail the great performance of modern web-as-a-platform by the fact that it allows an i7 or some such to run Quake as well as a Pentium 1 or 2 did, then we know Moore's law effectively counts for squat - performance is no longer about hardware, it's about not writing inefficient software. Now I'm certainly not saying that we should try to wring every last drop of performance out of every place where it doesn't even matter (like C++ tends to do). But software developers' belief in Moore's law has caused many of them to inadvertently cancel out, or even reverse, the hardware speedups with code inefficiencies (which are *easily* compoundable, and can and *do* exceed the 3x slowdown you claimed in another post was unrealistic) - and, as JS-heavy web apps prove, they haven't even gotten considerably more reliable as a result (Not that JS is a good example of a reliability-oriented language - but a lot of people certainly seem to think it is).
I agree. However, I feel we should differentiate: On one hand we have "Do not waste 3 bytes or 2 cycles!". I don't think that this is the, or at least not an adequate, answer to doubts about Moore's law's eternal validity. On the other hand we have what is commonly referred to as "code bloat". *That's* the point where a diet seems promising and reasonable. And btw., there we are talking megabytes instead of 2 or 5 bytes.

Now, while many look in the hobby-programmers' corner for the culprits, I'm convinced the real culprits are graphics/gui and the merciless diktat of marketing. The latter because they *want* feature-hungry customers and they don't give developers the time needed to *properly* implement, maintain, and repair those features. Often connected to this are the graphics guys (connected because graphical is what sells; customers rarely pay for optimized algorithm implementations deep down in the code).

Probably making myself new enemies, I dare to say that gui, colourful and generally graphics is the area of lowest quality code. Simplifying it somewhat and being blunt I'd state: Chances are that your server will hum along for years without problems. If anything with a gui runs some hours without crashing and/or filling up stderr, you can consider yourself lucky.

Not meaning to point fingers. But just these days I happened to fall over a new gui project. Guess what? They had a colourful presentation with lots of vanity, pipe dreams and, of course, colours. That matches quite well what I have experienced. Gui/graphical/colour stuff is probably the only area where people seriously do design with PowerPoint. I guess that's how they tick.

You can as well look at different developers. Guys developing for an embedded system are often hesitant to even use an RTOS. Your average non-gui, say server, developer will use some libraries but he will ponder their advantages, size, dependencies and quality. Now enter the graphics world. John, creating some app, say, to collect, show, and sort photos, will happily and very generously use whatever library sounds remotely useful and doesn't run away fast enough.

Results? An app on a microcontroller that, say, takes care of building management in some 10K. A server in some 100K. And the funny photo app with 12MB plus another 70MB of libraries/dependencies. The embedded system will run forever and be forgotten, the server will be rebooted every other year and the photo app will crash twice a day.
Aug 22 2013
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Aug 22, 2013 at 10:10:36PM +0200, Ramon wrote:
[...]
 Probably making myself new enemies I dare to say that gui, colourful
 and generally graphics is the area of lowest quality code.
 Simplifying it somewhat and being blunt I'd state: Chances are that
 your server will hum along for years without problems. If anything with
 a gui runs some hours without crashing and/or filling up stderr, you
 can consider yourself lucky.
 Not meaning to point fingers. But just these days I happened to fall
 over a new gui project. Guess what? They had a colourful
 presentation with lots of vanity, pipe dreams and, of course,
 colours. That matches quite well what I have experienced.
 Gui/graphical/colour stuff is probably the only area where people
 seriously do design with powerpoint. I guess that's how they tick.
This is also my experience. :) And I don't mean to diss anyone working with GUI code either, but it's true that in the commercial projects that I'm involved in, the GUI component is where the code has rather poor quality. So poor, in fact, that I dread having to look at it at all -- I try to fix the problem in the low-level modules if at all possible, rather than spend 5 days trying to follow the spaghetti code in the GUI module. (Or rather, lasagna code -- some time ago they ditched the old spaghetti-code source base, and rewrote the whole thing from ground up using a class hierarchy -- I suppose in the hope that the code would be cleaner that way. Well, the spaghetti is gone, but now the code that does the real work is buried so deeply under who knows how many layers of abstractions, most of which are not properly designed and very leaky, that a single method call can literally do *anything*. The only reliable way to know what it actually does is to set a breakpoint in the debugger, because it has been overloaded everywhere in the most non-obvious places and nobody knows where the call will actually end up.)
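To put that in code (a contrived little D sketch of my own, emphatically not their code base), this is roughly why "a single method call can do anything" once behaviour is buried under overridden layers:

class Storage              { void save() { /* writes straight to disk */ } }
class CachedStorage : Storage       { override void save() { /* ...or only to an in-memory cache */ } }
class RemoteStorage : CachedStorage { override void save() { /* ...or off to some network service */ } }

void persist(Storage s)
{
    // The static type says Storage.save(); which of the three bodies
    // actually runs depends on what was constructed far away from here --
    // hence the breakpoint-in-the-debugger approach.
    s.save();
}

Multiply that by a dozen layers and a few leaky abstractions, and reading the call site tells you almost nothing.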
 You can as well look at different developers. Guys developing for an
 embedded system are often hesitating to even use a RTOS. Your
 average non-gui, say server developer will use some libraries but he
 will ponder their advantages, size, dependencies and quality. Now
 enter the graphic world. John, creating some app, say, to collect,
 show, and sort photos will happily and very generously use whatever
 library sounds remotely useful and doesn't run away fast enough.
 
 Results? An app on a microcontroller that, say, takes care of
 building management in some 10K. A server in some 100K. And the
 funny photo app with 12MB plus another 70MB libraries/dependencies.
 The embedded system will run forever and be forgotten, the server
 will be rebooted every other year and the photo app will crash twice
 a day.
LOL... totally sums up my sentiments w.r.t. GUI-dependent apps. :) I saw through this façade decades ago when Windows 3.1 first came out, and I've hated GUI-based OSes ever since. I stuck to DOS as long as I could through win95 and win98, and then I learned about Linux and I jumped ship and never looked back. But X11 isn't that much better... there are some pretty bloated X11 apps that crash twice a day, too. Sometimes twice an hour. After repeated experiences like that, I decided that CLI is still the most reliable, and far more expressive to begin with. CLI-based apps are generally far more stable, require far less resources, and are *scriptable* and composable, something that GUI apps could never do (or if they could, not very well). I concluded that the only time GUIs are appropriate is when (1) you're working with graphical data like image editing or visualization, and (2) games. I found that (1) is actually doable with CLI tools like imagemagick, and I rarely do (2) anyway. So I dumped my mouse-based window manager for ratpoison, and use my X11 as a glorified text terminal, and now I'm as happy as can be. :-P Or rather, I *will* be happy as can be once I find a suitable replacement for a browser. Browsers are by far the most ridiculously resource-consuming beasts ever, given that all they do is to display some text and graphics and let you click on stuff. On my office PC, the browser is often the one cause of long compile times when its memory-hungry resource grabbing clashes with the linker trying to link (guess what?) the GUI module of the project. RAM-hungry, IO-bound browser + linker linking gigantic bloated object files of GUI module = 30-minute coffee break while I watch the equally RAM-hungry X server paint the screen pixel-by-pixel as the hard drive thrashes itself to death. :-P This 30 minutes easily turns into 1 hour if I actually dare to run two browsers simultaneously (y'know, for debugging purposes -- what I would give to be rid of the responsibility of testing different browsers...). T -- Always remember that you are unique. Just like everybody else. -- despair.com
Aug 22 2013
next sibling parent reply "Ramon" <spam thanks.no> writes:
On Thursday, 22 August 2013 at 23:59:59 UTC, H. S. Teoh wrote:
 Or rather, I *will* be happy as can be once I find a suitable
 replacement for a browser. Browsers are by far the most 
 ridiculously
 resource-consuming beasts ever, given that all they do is to 
 display
 some text and graphics and let you click on stuff.

 T
Pretty much describes my feelings too, although I've made my peace with them beasts and like to use xombrero. Although webkit based (translate: bloat) it's relatively(!) modest and is keyboard controllable. I assume you know links2 and w3m, both textmode browsers which support tables, frames, and even images. links2 (or was it elinks?) even supported javascript for some time. You also might like that links by default is non-graphic and needs a commandline switch to go graphical. R
Aug 22 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Aug 23, 2013 at 05:06:01AM +0200, Ramon wrote:
 On Thursday, 22 August 2013 at 23:59:59 UTC, H. S. Teoh wrote:
Or rather, I *will* be happy as can be once I find a suitable
replacement for a browser. Browsers are by far the most ridiculously
resource-consuming beasts ever, given that all they do is to display
some text and graphics and let you click on stuff.

T
Pretty much describes my feelings too, although I've made my peace with them beasts and like to use xombrero. Although webkit based (translate: bloat) it's relatively(!) modest and is keyboard controllable.
I'm installing it right now. Let's see if it lives up to its promise. ;-) If it does, I'm ditching opera 12 (the last tolerable version; the latest version, opera 15, has lost everything that made opera opera, and I've no desire to stay with opera) and switching over. I'll keep firefox handy for when bloated features are required; there should be plenty of RAM left over if xombrero isn't as memory-hogging as opera can be. :-P
 I assume you know links2 and w3m, both textmode browsers which
 support tables, frames, and even images. links2 (or was it elinks?)
 even supported javascript for some time.
 You also might like that links by default is non-graphic and needs a
 commandline switch to go graphical.
[...] I use elinks every now and then... I can't say I'm that impressed with its interface, to be honest. There are better ways of doing text mode browser UIs. Plus, most sites look trashy in elinks because they're all designed with bloated GUIs in mind. As for JS, nowadays I turn it off by default anyway, and only enable it when it's actually needed. Makes the web noticeably faster and, in many cases, more pleasant to use. (*cough*dlang.org*cough*) T -- Caffeine underflow. Brain dumped.
Aug 22 2013
parent reply "Chris" <wendlec tcd.ie> writes:
On Friday, 23 August 2013 at 03:47:00 UTC, H. S. Teoh wrote:
 On Fri, Aug 23, 2013 at 05:06:01AM +0200, Ramon wrote:
 On Thursday, 22 August 2013 at 23:59:59 UTC, H. S. Teoh wrote:
Or rather, I *will* be happy as can be once I find a suitable
replacement for a browser. Browsers are by far the most 
ridiculously
resource-consuming beasts ever, given that all they do is to 
display
some text and graphics and let you click on stuff.

T
Pretty much describes my feelings too, although I've made my peace with them beasts and like to use xombrero. Although webkit based (translate: bloat) it's relatively(!) modest and is keyboard controllable.
I'm installing it right now. Let's see if it lives up to its promise. ;-) If it does, I'm ditching opera 12 (the last tolerable version; the latest version, opera 15, has lost everything that made opera opera, and I've no desire to stay with opera) and switching over. I'll keep firefox handy for when bloated features are required, there should be plenty of RAM leftover if xombrero isn't as memory-hogging as opera can be. :-P
 I assume you know links2 and w3m, both textmode browsers which
 support tables, frames, and even images. links2 (or was it 
 elinks?)
 even supported javascript for some time.
 You also might like that links by default is non-graphic and 
 needs a
 commandline switch to go graphical.
[...] I use elinks every now and then... I can't say I'm that impressed with its interface, to be honest. There are better ways of doing text mode browser UIs. Plus, most sites look trashy in elinks because they're all designed with bloated GUIs in mind. As for JS, nowadays I turn it off by default anyway, and only enable it when it's actually needed. Makes the web noticeably faster and, in many cases, more pleasant to use. (*cough*dlang.org*cough*) T
I've been testing xombrero for a few days now and I really like it. It's fast and it's up to most of the ordinary web browsing tasks. Thanks for the tip. It crashed once, though, when trying to open a PDF file. Apart from that, it's a good UI for just browsing.
Sep 05 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Sep 05, 2013 at 11:11:14AM +0200, Chris wrote:
 On Friday, 23 August 2013 at 03:47:00 UTC, H. S. Teoh wrote:
On Fri, Aug 23, 2013 at 05:06:01AM +0200, Ramon wrote:
On Thursday, 22 August 2013 at 23:59:59 UTC, H. S. Teoh wrote:
Or rather, I *will* be happy as can be once I find a suitable
replacement for a browser. Browsers are by far the most
ridiculously resource-consuming beasts ever, given that all they do
is to display some text and graphics and let you click on stuff.

T
Pretty much describes my feelings too, although I've made my peace with them beasts and like to use xombrero. Although webkit based (translate: bloat) it's relatively(!) modest and is keyboard controllable.
I'm installing it right now. Let's see if it lives up to its promise. ;-)
[...]
 I've been testing xombrero for a few days now and I really like it.
 It's fast and it's up to most of the ordinary web browsing tasks.
 Thanks for the tip. It crashed once, though, when trying to open a
 PDF file. Apart from that, it's a good UI for just browsing.
Hmm. I built xombrero from git, and it seems to be unable to connect to anything. There are no error messages, no nothing -- it just displays the loading icon animation and sits there looking cute but doing absolutely nothing. AFAICT, it didn't even send any packets out to the network. What gives? Am I missing some library, or did I screw up some configuration...? T -- Indifference will certainly be the downfall of mankind, but who cares? -- Miquel van Smoorenburg
Sep 05 2013
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-09-05 20:12, H. S. Teoh wrote:

 Hmm. I built xombrero from git, and it seems to be unable to connect to
 anything. There are no error messages, no nothing -- it just displays
 the loading icon animation and sits there looking cute but doing
 absolutely nothing. AFAICT, it didn't even send any packets out to the
 network. What gives? Am I missing some library, or did I screw up some
 configuration...?
The most secure web browser :) -- /Jacob Carlborg
Sep 05 2013
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Sep 05, 2013 at 08:37:36PM +0200, Jacob Carlborg wrote:
 On 2013-09-05 20:12, H. S. Teoh wrote:
 
Hmm. I built xombrero from git, and it seems to be unable to connect
to anything. There are no error messages, no nothing -- it just
displays the loading icon animation and sits there looking cute but
doing absolutely nothing. AFAICT, it didn't even send any packets out
to the network. What gives? Am I missing some library, or did I screw
up some configuration...?
The most secure web browser :)
[...] lol... T -- Elegant or ugly code as well as fine or rude sentences have something in common: they don't depend on the language. -- Luca De Vitis
Sep 05 2013
prev sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 5 September 2013 at 18:13:42 UTC, H. S. Teoh wrote:
 On Thu, Sep 05, 2013 at 11:11:14AM +0200, Chris wrote:
 On Friday, 23 August 2013 at 03:47:00 UTC, H. S. Teoh wrote:
On Fri, Aug 23, 2013 at 05:06:01AM +0200, Ramon wrote:
On Thursday, 22 August 2013 at 23:59:59 UTC, H. S. Teoh 
wrote:
Or rather, I *will* be happy as can be once I find a 
suitable
replacement for a browser. Browsers are by far the most
ridiculously resource-consuming beasts ever, given that all 
they do
is to display some text and graphics and let you click on 
stuff.

T
Pretty much describes my feelings too, although I've made my peace with them beasts and like to use xombrero. Although webkit based (translate: bloat) it's relatively(!) modest and is keyboard controllable.
I'm installing it right now. Let's see if it lives up to its promise. ;-)
[...]
 I've been testing xombrero for a few days now and I really 
 like it.
 It's fast and it's up to most of the ordinary web browsing 
 tasks.
 Thanks for the tip. It crashed once, though, when trying to 
 open a
 PDF file. Apart from that, it's a good UI for just browsing.
Hmm. I built xombrero from git, and it seems to be unable to connect to anything. There are no error messages, no nothing -- it just displays the loading icon animation and sits there looking cute but doing absolutely nothing. AFAICT, it didn't even send any packets out to the network. What gives? Am I missing some library, or did I screw up some configuration...? T
Not sure what you might be lacking to make it work. I built from git and it just worked. What does ldd show for the executable?
Sep 05 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Sep 05, 2013 at 08:57:51PM +0200, John Colvin wrote:
 On Thursday, 5 September 2013 at 18:13:42 UTC, H. S. Teoh wrote:
On Thu, Sep 05, 2013 at 11:11:14AM +0200, Chris wrote:
[...]
I've been testing xombrero for a few days now and I really like it.
It's fast and it's up to most of the ordinary web browsing tasks.
Thanks for the tip. It crashed once, though, when trying to open a
PDF file. Apart from that, it's a good UI for just browsing.
Hmm. I built xombrero from git, and it seems to be unable to connect to anything. There are no error messages, no nothing -- it just displays the loading icon animation and sits there looking cute but doing absolutely nothing. AFAICT, it didn't even send any packets out to the network. What gives? Am I missing some library, or did I screw up some configuration...? T
Not sure what you might be lacking to make it work. I built from git and it just worked. What does ldd show for the executable?
linux-vdso.so.1 (0x00007fffd7dff000) libwebkitgtk-3.0.so.0 => /usr/lib/libwebkitgtk-3.0.so.0 (0x00007f6dc089e000) libgtk-3.so.0 => /usr/lib/x86_64-linux-gnu/libgtk-3.so.0 (0x00007f6dc01cc000) libjavascriptcoregtk-3.0.so.0 => /usr/lib/libjavascriptcoregtk-3.0.so.0 (0x00007f6dbfa87000) libgdk-3.so.0 => /usr/lib/x86_64-linux-gnu/libgdk-3.so.0 (0x00007f6dbf801000) libatk-1.0.so.0 => /usr/lib/x86_64-linux-gnu/libatk-1.0.so.0 (0x00007f6dbf5de000) libpangocairo-1.0.so.0 => /usr/lib/x86_64-linux-gnu/libpangocairo-1.0.so.0 (0x00007f6dbf3d0000) libgdk_pixbuf-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgdk_pixbuf-2.0.so.0 (0x00007f6dbf1b0000) libcairo-gobject.so.2 => /usr/lib/x86_64-linux-gnu/libcairo-gobject.so.2 (0x00007f6dbefa7000) libpango-1.0.so.0 => /usr/lib/x86_64-linux-gnu/libpango-1.0.so.0 (0x00007f6dbed58000) libcairo.so.2 => /usr/lib/x86_64-linux-gnu/libcairo.so.2 (0x00007f6dbea3f000) libsoup-2.4.so.1 => /usr/lib/x86_64-linux-gnu/libsoup-2.4.so.1 (0x00007f6dbe77d000) libgio-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgio-2.0.so.0 (0x00007f6dbe421000) libgobject-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0 (0x00007f6dbe1d1000) libglib-2.0.so.0 => /lib/x86_64-linux-gnu/libglib-2.0.so.0 (0x00007f6dbded2000) libgnutls.so.26 => /usr/lib/x86_64-linux-gnu/libgnutls.so.26 (0x00007f6dbdc12000) libbsd.so.0 => /lib/x86_64-linux-gnu/libbsd.so.0 (0x00007f6dbda03000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f6dbd7ff000) libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 (0x00007f6dbd4c3000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f6dbd2a7000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f6dbcefb000) libenchant.so.1 => /usr/lib/x86_64-linux-gnu/libenchant.so.1 (0x00007f6dbccee000) libharfbuzz-icu.so.0 => /usr/lib/x86_64-linux-gnu/libharfbuzz-icu.so.0 (0x00007f6dbcaeb000) libharfbuzz.so.0 => /usr/lib/x86_64-linux-gnu/libharfbuzz.so.0 (0x00007f6dbc898000) libgailutil-3.so.0 => /usr/lib/x86_64-linux-gnu/libgailutil-3.so.0 (0x00007f6dbc68e000) libgeoclue.so.0 => /usr/lib/x86_64-linux-gnu/libgeoclue.so.0 (0x00007f6dbc477000) libdbus-glib-1.so.2 => /usr/lib/x86_64-linux-gnu/libdbus-glib-1.so.2 (0x00007f6dbc250000) libdbus-1.so.3 => /lib/x86_64-linux-gnu/libdbus-1.so.3 (0x00007f6dbc00a000) libgstapp-1.0.so.0 => /usr/lib/x86_64-linux-gnu/libgstapp-1.0.so.0 (0x00007f6dbbdfe000) libgstaudio-1.0.so.0 => /usr/lib/x86_64-linux-gnu/libgstaudio-1.0.so.0 (0x00007f6dbbbb5000) libgstfft-1.0.so.0 => /usr/lib/x86_64-linux-gnu/libgstfft-1.0.so.0 (0x00007f6dbb9aa000) libgstpbutils-1.0.so.0 => /usr/lib/x86_64-linux-gnu/libgstpbutils-1.0.so.0 (0x00007f6dbb786000) libgstvideo-1.0.so.0 => /usr/lib/x86_64-linux-gnu/libgstvideo-1.0.so.0 (0x00007f6dbb545000) libgstbase-1.0.so.0 => /usr/lib/x86_64-linux-gnu/libgstbase-1.0.so.0 (0x00007f6dbb2f3000) libgstreamer-1.0.so.0 => /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0 (0x00007f6dbaffe000) libgmodule-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgmodule-2.0.so.0 (0x00007f6dbadfa000) libgthread-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgthread-2.0.so.0 (0x00007f6dbabf7000) libjpeg.so.8 => /usr/lib/x86_64-linux-gnu/libjpeg.so.8 (0x00007f6dba9bd000) libsecret-1.so.0 => /usr/lib/x86_64-linux-gnu/libsecret-1.so.0 (0x00007f6dba76d000) libxslt.so.1 => /usr/lib/x86_64-linux-gnu/libxslt.so.1 (0x00007f6dba52d000) libxml2.so.2 => /usr/lib/x86_64-linux-gnu/libxml2.so.2 (0x00007f6dba1c6000) libGL.so.1 => /usr/lib/x86_64-linux-gnu/libGL.so.1 (0x00007f6db9f68000) libEGL.so.1 => /usr/lib/x86_64-linux-gnu/libEGL.so.1 (0x00007f6db9d45000) 
libpangoft2-1.0.so.0 => /usr/lib/x86_64-linux-gnu/libpangoft2-1.0.so.0 (0x00007f6db9b2f000) libfreetype.so.6 => /usr/lib/x86_64-linux-gnu/libfreetype.so.6 (0x00007f6db9890000) libfontconfig.so.1 => /usr/lib/x86_64-linux-gnu/libfontconfig.so.1 (0x00007f6db9654000) libpng12.so.0 => /lib/x86_64-linux-gnu/libpng12.so.0 (0x00007f6db942d000) libsqlite3.so.0 => /usr/lib/x86_64-linux-gnu/libsqlite3.so.0 (0x00007f6db917c000) libicui18n.so.48 => /usr/lib/x86_64-linux-gnu/libicui18n.so.48 (0x00007f6db8daf000) libicuuc.so.48 => /usr/lib/x86_64-linux-gnu/libicuuc.so.48 (0x00007f6db8a40000) libicudata.so.48 => /usr/lib/x86_64-linux-gnu/libicudata.so.48 (0x00007f6db76d0000) libwebp.so.4 => /usr/lib/x86_64-linux-gnu/libwebp.so.4 (0x00007f6db747f000) libXcomposite.so.1 => /usr/lib/x86_64-linux-gnu/libXcomposite.so.1 (0x00007f6db727c000) libXdamage.so.1 => /usr/lib/x86_64-linux-gnu/libXdamage.so.1 (0x00007f6db7079000) libXfixes.so.3 => /usr/lib/x86_64-linux-gnu/libXfixes.so.3 (0x00007f6db6e72000) libXrender.so.1 => /usr/lib/x86_64-linux-gnu/libXrender.so.1 (0x00007f6db6c68000) libXt.so.6 => /usr/lib/x86_64-linux-gnu/libXt.so.6 (0x00007f6db6a02000) libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f6db67e9000) librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f6db65e1000) libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f6db62de000) libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f6db5fdf000) libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f6db5dc9000) libXi.so.6 => /usr/lib/x86_64-linux-gnu/libXi.so.6 (0x00007f6db5bb8000) libatk-bridge-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libatk-bridge-2.0.so.0 (0x00007f6db5987000) libXinerama.so.1 => /usr/lib/x86_64-linux-gnu/libXinerama.so.1 (0x00007f6db5783000) libXrandr.so.2 => /usr/lib/x86_64-linux-gnu/libXrandr.so.2 (0x00007f6db5579000) libXcursor.so.1 => /usr/lib/x86_64-linux-gnu/libXcursor.so.1 (0x00007f6db536e000) libXext.so.6 => /usr/lib/x86_64-linux-gnu/libXext.so.6 (0x00007f6db515b000) libthai.so.0 => /usr/lib/x86_64-linux-gnu/libthai.so.0 (0x00007f6db4f52000) libpixman-1.so.0 => /usr/lib/x86_64-linux-gnu/libpixman-1.so.0 (0x00007f6db4ca9000) libxcb-shm.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-shm.so.0 (0x00007f6db4aa6000) libxcb-render.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-render.so.0 (0x00007f6db489c000) libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 (0x00007f6db467c000) libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007f6db445a000) libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x00007f6db4242000) libffi.so.6 => /usr/lib/x86_64-linux-gnu/libffi.so.6 (0x00007f6db403a000) libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f6db3dfc000) libgcrypt.so.11 => /lib/x86_64-linux-gnu/libgcrypt.so.11 (0x00007f6db3b7c000) libtasn1.so.3 => /usr/lib/x86_64-linux-gnu/libtasn1.so.3 (0x00007f6db396b000) libp11-kit.so.0 => /usr/lib/x86_64-linux-gnu/libp11-kit.so.0 (0x00007f6db374b000) /lib64/ld-linux-x86-64.so.2 (0x00007f6dc2716000) libgraphite2.so.3 => /usr/lib/x86_64-linux-gnu/libgraphite2.so.3 (0x00007f6db3530000) libgsttag-1.0.so.0 => /usr/lib/x86_64-linux-gnu/libgsttag-1.0.so.0 (0x00007f6db32f9000) liborc-0.4.so.0 => /usr/lib/x86_64-linux-gnu/liborc-0.4.so.0 (0x00007f6db3072000) liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x00007f6db2e4f000) libglapi.so.0 => /usr/lib/x86_64-linux-gnu/libglapi.so.0 (0x00007f6db2c28000) libX11-xcb.so.1 => /usr/lib/x86_64-linux-gnu/libX11-xcb.so.1 (0x00007f6db2a26000) libxcb-glx.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-glx.so.0 (0x00007f6db280e000) 
libxcb-dri2.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-dri2.so.0 (0x00007f6db2608000) libXxf86vm.so.1 => /usr/lib/x86_64-linux-gnu/libXxf86vm.so.1 (0x00007f6db2402000) libdrm.so.2 => /usr/lib/x86_64-linux-gnu/libdrm.so.2 (0x00007f6db21f6000) libxcb-xfixes.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-xfixes.so.0 (0x00007f6db1fee000) libxcb-shape.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-shape.so.0 (0x00007f6db1dea000) libwayland-client.so.0 => /usr/lib/x86_64-linux-gnu/libwayland-client.so.0 (0x00007f6db1bde000) libwayland-server.so.0 => /usr/lib/x86_64-linux-gnu/libwayland-server.so.0 (0x00007f6db19cb000) libgbm.so.1 => /usr/lib/x86_64-linux-gnu/libgbm.so.1 (0x00007f6db17c5000) libudev.so.0 => /lib/x86_64-linux-gnu/libudev.so.0 (0x00007f6db15b6000) libexpat.so.1 => /lib/x86_64-linux-gnu/libexpat.so.1 (0x00007f6db138b000) libSM.so.6 => /usr/lib/x86_64-linux-gnu/libSM.so.6 (0x00007f6db1184000) libICE.so.6 => /usr/lib/x86_64-linux-gnu/libICE.so.6 (0x00007f6db0f68000) libatspi.so.0 => /usr/lib/x86_64-linux-gnu/libatspi.so.0 (0x00007f6db0d36000) libdatrie.so.1 => /usr/lib/x86_64-linux-gnu/libdatrie.so.1 (0x00007f6db0b2e000) libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 (0x00007f6db092a000) libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 (0x00007f6db0725000) libgpg-error.so.0 => /usr/lib/x86_64-linux-gnu/libgpg-error.so.0 (0x00007f6db051f000) libuuid.so.1 => /lib/x86_64-linux-gnu/libuuid.so.1 (0x00007f6db0518000) Not sure how this would help, it seems pretty normal for your average typical bloated GUI app... T -- Don't modify spaghetti code unless you can eat the consequences.
Sep 05 2013
parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 5 September 2013 at 19:17:54 UTC, H. S. Teoh wrote:
 On Thu, Sep 05, 2013 at 08:57:51PM +0200, John Colvin wrote:
 On Thursday, 5 September 2013 at 18:13:42 UTC, H. S. Teoh 
 wrote:
On Thu, Sep 05, 2013 at 11:11:14AM +0200, Chris wrote:
[...]
I've been testing xombrero for a few days now and I really 
like it.
It's fast and it's up to most of the ordinary web browsing 
tasks.
Thanks for the tip. It crashed once, though, when trying to 
open a
PDF file. Apart from that, it's a good UI for just browsing.
Hmm. I built xombrero from git, and it seems to be unable to connect to anything. There are no error messages, no nothing -- it just displays the loading icon animation and sits there looking cute but doing absolutely nothing. AFAICT, it didn't even send any packets out to the network. What gives? Am I missing some library, or did I screw up some configuration...? T
Not sure what you might be lacking to make it work. I built from git and it just worked. What does ldd show for the executable?
[...] Not sure how this would help, it seems pretty normal for your average typical bloated GUI app... T
I was just wondering whether there was anything missing; it looks fine though, as I'm sure you can see too :). Might be worth a bug report. Perhaps there are some log files to take a look at.
Sep 05 2013
prev sibling parent "PauloPinto" <pjmlp progtools.org> writes:
On Thursday, 22 August 2013 at 23:59:59 UTC, H. S. Teoh wrote:
 On Thu, Aug 22, 2013 at 10:10:36PM +0200, Ramon wrote:
 [...]
 Probably making myself new enemies I dare to say that gui, 
 colourful
 and generally graphics is the area of lowest quality code.
All areas are bad, given the way software projects are managed. The consulting projects I work on are for Fortune 500 companies, always with at least three development sites and some extent of off-shoring work. GUI, embedded, server, database, it doesn't matter. All code is crap given the amount of time, money and developer quality assigned to the projects. Usually the top developers in the teams try to save the code, but there is only so much one can do when the ratio between the two classes of developers is kept so big as a way to make the projects profitable. So the few heroes who try to fix the situation at the beginning of each project eventually give up around the middle of it. The customers don't care as long as the software works as intended.
 [...]
 LOL... totally sums up my sentiments w.r.t. GUI-dependent apps. 
 :)

 I saw through this façade decades ago when Windows 3.1 first 
 came out,
 and I've hated GUI-based OSes ever since. I stuck to DOS as 
 long as I
 could through win95 and win98, and then I learned about Linux 
 and I
 jumped ship and never looked back. But X11 isn't that much 
 better...
 there are some pretty bloated X11 apps that crash twice a day, 
 too.
Funny, I have a different experience. Before replacing my ZX Spectrum with a PC, I already knew Amiga and Atari ST systems. And IDEs on those environments as well. So I always favored GUI environments over CLI. For me, personally, the CLI is good when doing system administration, or programming-related tasks that can benefit from the usual set of tricks with commands and pipes. For everything else, nothing beats keyboard+mouse and a nice GUI environment. Personal opinion, to each his own. -- Paulo
Aug 22 2013
prev sibling parent reply "Chris" <wendlec tcd.ie> writes:
On Thursday, 22 August 2013 at 20:10:37 UTC, Ramon wrote:
[...]
 Probably making myself new enemies I dare to say that gui, 
 colourful and generally graphics is the area of lowest quality 
 code. Simplifying it somewhat and being blunt I'd state: 
 Chances are that your server will hum along years without 
 problems. If anything with a gui runs some hours without 
 crashing and/or filling up stderr, you can consider yourself 
 lucky.
 Not meaning to point fingers. But just these days I happened to 
 fall over a new gui project. Guess what? They had a colourful 
 presentation with lots of vanity, pipe dreams and, of course, 
 colours. That matches quite well what I have experienced. 
 Gui/graphical/colour stuff is probably the only area where 
 people seriously do design with powerpoint. I guess that's how 
 they tick.

 You can as well look at different developers. Guys developing 
 for an embedded system are often hesitating to even use a RTOS. 
 Your average non-gui, say server developer will use some 
 libraries but he will ponder their advantages, size, 
 dependencies and quality. Now enter the graphic world. John, 
 creating some app, say, to collect, show, and sort photos will 
 happily and very generously use whatever library sounds 
 remotely useful and doesn't run away fast enough.

 Results? An app on a microcontroller that, say, takes care of 
 building management in some 10K. A server in some 100K. And the 
 funny photo app with 12MB plus another 70MB 
 libraries/dependencies. The embedded system will run forever and 
 be forgotten, the server will be rebooted every other year and 
 the photo app will crash twice a day.
Well, I'm a friend of GUIs simply because with a GUI everyone can use computers, not just tech-savvies, and that's what computers are there for, aren't they? The problem is not the GUIs, I think, but the mentality. MVC hardly ever works in real life due to constraints in framework design or deadlines. Cocoa, if used sensibly, is great. But most of the time you don't have proper MVC because GUI frameworks "seduce" people to do sh*t like

bool saveText() {
    text = textArea.getText();  // View!!! You're mad!
    save(text);
    return false;  // Arghhhh!
}

instead of

bool saveText() {
    text = model.getText();  // Model! Good man yourself!
    save(text);
    return true;  // fine
}

It's the "bolt-on" mentality that ruins things, which is partly due to deadlines. As one guy once said to me (after I had complained about a quick and dirty implementation): "We have the choice between doing it right and doing it right NOW!" Ain't no more to say.

Also, sadly enough, technologies that were never meant for GUIs are being used for GUI design. PHP and JS (shudder!!!!). Much as I try to keep up a proper MVC pattern, it's useless. Things will always be dirty, disgusting, work-around-y, unsafe, buggy, you name it. And you just stop giving a sh*t. Ain't no use.

And last but not least, a programmer can work for hours on end implementing a clever algorithm, using unit tests, component programming etc. etc. Nobody will ever notice. If users see a button and, when they press it, the screen says "Hello User!", they are forever happy. What goes on under the hood is "boys and their toys".
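For what it's worth, here's a minimal, self-contained sketch in D (names made up) of how the second, model-backed version wires up, with the controller never asking the widget for data:

import std.stdio;

class Model
{
    private string text;
    void setText(string t) { text = t; }
    string getText() const { return text; }
}

class Controller
{
    private Model model;
    this(Model m) { model = m; }

    // The controller asks the model for the data to save, never the view.
    bool saveText()
    {
        auto text = model.getText();
        writeln("saving: ", text);   // stand-in for the real save(text)
        return true;
    }
}

void main()
{
    auto model = new Model;
    model.setText("Hello User!");
    auto ctrl = new Controller(model);
    assert(ctrl.saveText());
}

The view's only job is to push edits into the model as the user types; the save path then never needs to know the widget exists.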
Aug 23 2013
parent reply "Ramon" <spam thanks.no> writes:
On Friday, 23 August 2013 at 10:34:08 UTC, Chris wrote:
 On Thursday, 22 August 2013 at 20:10:37 UTC, Ramon wrote:
 [...]
 Probably making myself new enemies I dare to say that gui, 
 colourful and generally graphics is the area of lowest quality 
 code. Simplifying it somewhat and being blunt I'd state: 
 Chances are that your server will hum along years without 
 problems. If anything with a gui runs some hours without 
 crashing and/or filling up stderr, you can consider yourself 
 lucky.
 ...
Well, I'm a friend of GUIs simply because with a GUI everyone can use computers not just tech-savvies, and that's what computers are there for, aren't they? The problem is not the GUIs, I think, but the mentality. MVC hardly ever works in real life due to constraints in framework design or deadlines. Cocoa if used sensibly is great. But most of the time you don't have proper MVC because GUI frameworks "seduce" people to do sh*t like
Oh, I'm not at all anti-GUI "politically". I don't like anything graphical (incl. X, Photoshop and pretty much everything with pixels and colours) for *pragmatic* reasons. Actually I *do* see the advantages of a GUI and value them. And I detest them for being sloppily designed along a set of criteria that DTP guys or artists might consider great but that misses simple, everyday needs. I detest them for usually being implemented in an unsound way and for generously wasting megabytes and resources and ... (*pushing brake*) Just look at a modern browser. It happily eats up around 200-300 MB of RAM and a good part of the CPU's cycles. And that for 1 person. Now look at a server. It walks and works and dances with 1% to 10% of that memory and a fraction of the CPU cycles, and serves hundreds of users. Look at a smartphone ...
 It's the "bolt-on" mentality that ruins things, which is partly 
 due to deadlines. As one guy once said to me (after I had 
 complained about a quick and dirty implementation) "We have the 
 choice between doing it right and doing it right NOW!" Ain't no 
 more to say.
Right you are. But then, this is also true for anyone else. Pretty much every piece of software (in a company) is created with tight deadlines and with "design", priorities, and features dictated by Marketing and product management (read: marketing/sales). It's a sad state of affairs, but quite probably the most efficient way to very considerably increase product and code quality within a very short time would be to shoot some marketing people. (No, Obelix, you may not kill some graphic artists "while we are at it")
 Also, sadly enough, technologies that were never meant to be 
 for GUIs are being used for GUI design. PHP and JS 
 (shudder!!!!) Much as I try to keep up a proper MVC pattern, 
 it's useless. Things will always be dirty, disgusting, 
 work-around-y, unsafe, buggy, you name it. And you just stop 
 giving a sh*t. Ain't no use.
Hey, didn't they tell you? PHP and JS are freedom technologies, enabling everyone to .... - well, now they actually *do* it, and we know what the Bible was talking about when it spoke of hell and unbearable pain. Where is Obelix? His attitude ("may I kill some?") might come in handy ...
 And last but not least, a programmer can work for hours on end 
 implementing a clever algorithm, using unit tests, component 
 programming etc etc. Nobody will ever notice. If users see a 
 button that when they press it the screen says "Hello User!", 
 they are forever happy. What goes on under the hood is "boys 
 and their toys".
Nope. Unless, of course, Joe and Mary "I'll gladly click on anything and btw. Mr. Bob is cool" are your reference. Listen: Your reference is quality, reliability, even elegance (an excellent indicator), maybe performance, and yourself knowing you did it well. Don't give Joe and Mary any power they wouldn't know how to use anyway. And btw: Probably Joe and Mary won't know about or even understand your work. But they *will* notice that your stuff works reliably and well.
Aug 23 2013
parent reply "Chris" <wendlec tcd.ie> writes:
On Friday, 23 August 2013 at 14:14:47 UTC, Ramon wrote:
 Listen: Your reference is quality, reliability, even elegance 
 (an excellent indicator), maybe performance and yourself 
 knowing you did it well. Don't give Joe and Mary any power they 
 wouldn't know how to use anyway.
 And btw: Probably Joe and Mary won't know about or even 
 understand your work. But they *will* notice that your stuff 
 works reliably and well.
Yes, of course. But shiny little buttons impress people, and often the GUI logic is neglected due to a "Click, it works! Let's move on to something else" mentality. A lot of apps out there don't have a proper MVC architecture (look at wxPython or Tkinter apps). It's just so easy to be negligent.
Aug 23 2013
parent "Ramon" <spam thanks.no> writes:
On Friday, 23 August 2013 at 14:30:06 UTC, Chris wrote:
 On Friday, 23 August 2013 at 14:14:47 UTC, Ramon wrote:
 Listen: Your reference is quality, reliability, even elegance 
 (an excellent indicator), maybe performance and yourself 
 knowing you did it well. Don't give Joe and Mary any power 
 they wouldn't know how to use anyway.
 And btw: Probably Joe and Mary won't know about or even 
 understand your work. But they *will* notice that your stuff 
 works reliably and well.
Yes, of course. But shiny little buttons impress people and often the GUI logic is neglected due to a "Click, it works! Let's move on to something else" mentality. A lot of apps out there don't have a proper MVC architecture (look at wxPython or Tkinter apps). It's just so easy to be negligent.
Absolutely, Chris, I get your point and I do not disagree. But, to be fair: We *all* are in some way or another guilty of that sin. Do we, for instance, all really see, value and appreciate the math behind many good things? Or do we, let's be honest, usually just say "Wow, D has cool dynamic arrays, that makes my life easier" or "Yowza, TLS 1.2 offers considerable security"? Frankly, most, and that includes programmers, do not even make the effort to properly learn and understand the basics so as to, e.g., use the proper chaining mode. I have an image that helps me. I see my "other" grandpa (actually a neighbour) who worked with wood, looking at his workpiece, refining it again a little, until finally he looks at it and is content and everything just fits nicely.
Aug 23 2013
prev sibling parent "qznc" <qznc web.de> writes:
On Thursday, 22 August 2013 at 19:28:42 UTC, Nick Sabalausky 
wrote:
 On Wed, 21 Aug 2013 18:50:35 +0200
 "Ramon" <spam thanks.no> wrote:
 
 I am *not* against keeping an eye on performance, by no means. 
 Looking at Moore's law, however, and at the kind of computing 
 power available nowadays even in smartphones, not to talk 
 about 8 and 12 core PCs, I feel that the importance of 
 performance is way overestimated (possibly following a 
 formertimes justified tradition).
 
Even if we assume Moore's law is as alive and well as ever, a related note is that software tends to expand to fill the available computational power. When I can get slowdown in a text-entry box on a 64-bit multi-core, I know that hardware and Moore's law, practically speaking, have very little effect on real performance. At this point, it's code that affects performance far more than anything else. When we hail the great performance of modern web-as-a-platform by the fact that it allows an i7 or some such to run Quake as well as a Pentium 1 or 2 did, then we know Moore's law effectively counts for squat - performance is no longer about hardware, it's about not writing inefficient software. Now I'm certainly not saying that we should try to wring every last drop of performance out of every place where it doesn't even matter (like C++ tends to do). But software developers' belief in Moore's law has caused many of them to inadvertently cancel out, or even reverse, the hardware speedups with code inefficiencies (which are *easily* compoundable, and can and *do* exceed the 3x slowdown you claimed in another post was unrealistic) - and, as JS-heavy web apps prove, they haven't even gotten considerably more reliable as a result (Not that JS is a good example of a reliability-oriented language - but a lot of people certainly seem to think it is).
Moore's law is fine. The problem nowadays is power. Either the device is mobile, which means we should try to save battery, or performance is limited by cooling. More detail: http://beza1e1.tuxen.de/articles/power_wall.html
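For concreteness (rough textbook numbers of my own, not taken from the article): dynamic CMOS power scales roughly as P ≈ C · V² · f, and raising f usually also requires raising V. So going from, say, 2 GHz at 1.0 V to 3 GHz at 1.2 V costs about 1.5 × 1.2² ≈ 2.2 times the power for only 1.5 times the clock - which is why chips hit the cooling/battery wall long before they run out of transistors.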
Aug 23 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/21/2013 9:50 AM, Ramon wrote:
 I am *not* against keeping an eye on performance, by no means. Looking at
 Moore's law, however, and at the kind of computing power available nowadays
even
 in smartphones, not to talk about 8 and 12 core PCs, I feel that the importance
 of performance is way overestimated (possibly following a formertimes justified
 tradition).
While a 5% performance boost is not relevant for consumer apps, it can make an enormous difference for server side apps. For example, if you've got a $100m server farm, 5% means you save $5m, and server farms can be much, much bigger than that.
Aug 25 2013
next sibling parent reply "Ramon" <spam thanks.no> writes:
On Sunday, 25 August 2013 at 22:00:23 UTC, Walter Bright wrote:
 On 8/21/2013 9:50 AM, Ramon wrote:
 I am *not* against keeping an eye on performance, by no means. 
 Looking at
 Moore's law, however, and at the kind of computing power 
 available nowadays even
 in smartphones, not to talk about 8 and 12 core PCs, I feel 
 that the importance
 of performance is way overestimated (possibly following a 
 formertimes justified
 tradition).
While a 5% performance boost is not relevant for consumer apps, it can make an enormous difference for server side apps. For example, if you've got a $100m server farm, 5% means you save $5m, and server farms can be much, much bigger than that.
You are, of course, perfectly right, and my professional background would testify that you are correct. But I didn't argue "performance is evil" - my point is "performance vs. reliability" and that it may quite well be a problem to favour performance too much. Performance is desirable, no doubt. But reliability is a conditio sine qua non in some environments. To rephrase it: Thank you, Walter Bright, for giving us not only a performant language but one that also offers some very welcome mechanisms to support reliability/safety.
Aug 25 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/25/2013 3:13 PM, Ramon wrote:
 You are, of course, perfectly right and my professional background would
testify
 you to be correct.
It's also clear to me that unless D achieves performance parity with C++, D is not going to be considered for a lot of applications. The good news is that I believe that D is technically capable of beating C++ on performance.
Aug 25 2013
next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 The good news is that I believe that D is technically capable 
 of beating C++ on performance.
Some suggestions for D to increase its performance:

- The frequency of heap allocations (like for arrays and small class instances) in idiomatic D programs should decrease (see the P.S. below for a toy illustration);
- The support for simple SIMD-based programming should improve. In the last weeks I have written some posts here on this topic;
- It should allow simple and almost-standard means to use GPUs.

Bye,
bearophile
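P.S. A toy illustration (my own example) of the first point: the same computation, once allocating a fresh temporary on every call and once reusing a caller-supplied buffer that was allocated up front.

import std.stdio;

// Allocates a temporary array on every call - pressure on the GC heap.
double sumSquaresAllocating(const double[] xs)
{
    auto tmp = new double[](xs.length);
    foreach (i, x; xs) tmp[i] = x * x;
    double s = 0;
    foreach (v; tmp) s += v;
    return s;
}

// Same result, but the scratch buffer is supplied by the caller and reused.
double sumSquaresReusing(const double[] xs, double[] scratch)
{
    foreach (i, x; xs) scratch[i] = x * x;
    double s = 0;
    foreach (v; scratch[0 .. xs.length]) s += v;
    return s;
}

void main()
{
    auto data = [1.0, 2.0, 3.0];
    auto scratch = new double[](data.length);   // one allocation, up front
    foreach (run; 0 .. 1000)
        sumSquaresReusing(data, scratch);       // the hot loop allocates nothing
    writeln(sumSquaresAllocating(data), " ", sumSquaresReusing(data, scratch)); // 14 14
}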
Aug 25 2013
prev sibling parent reply "Ramon" <spam thanks.no> writes:
On Sunday, 25 August 2013 at 22:27:30 UTC, Walter Bright wrote:
 It's also clear to me that unless D achieves performance parity 
 with C++, D is not going to be considered for a lot of 
 applications.

 The good news is that I believe that D is technically capable 
 of beating C++ on performance.
That is probably true for a large part of the existing and potential clientele. But while performance *is* important to me, my concern happens to not be performance to the max but rather the reliability aspects. Gladly, D delivers - and delivers quite well - in that regard, too. As for performance, maybe I'm plain old-school, i.e. falling back to asm (or C as a cross-platform "asm") for those few really critical sections. From what I see around here, it seems that D still has quite some minor quirks. With all respect due (and well deserved), I consider it more important to get D really stable and well rounded. Actually, I think D can afford some time to beat C++ in performance because, thanks to its asm capabilities, its built-in coverage stats and some other goodies, there always *is* some solution for performance. But then, maybe D's beauty in part lies in the fact that it offers a lot regarding safety/reliability - and - very nice performance, too ;)
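Just to illustrate that escape hatch (a toy example of my own using DMD-style x86 inline assembler, obviously not a real hot spot):

// D lets you drop to inline asm for a critical section while the rest
// of the program stays in plain D (x86, DMD-style syntax).
int addOne(int x)
{
    int result;
    asm
    {
        mov EAX, x;      // load the argument
        add EAX, 1;
        mov result, EAX; // store back into the local
    }
    return result;
}

unittest
{
    assert(addOne(41) == 42);
}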
Aug 25 2013
parent reply "Zach the Mystic" <reachzach gggggmail.com> writes:
On Sunday, 25 August 2013 at 23:26:19 UTC, Ramon wrote:
 But then, maybe D's beauty in part lies in the fact that it 
 offers a lot regarding safety/reliability - and - very nice 
 performance, too ;)
One of the theories as to why there are no bears to be found on the African continent is that they are omnivores - i.e. generalists - so in a hugely competitive environment such as Africa there is no niche in which they will not be beaten out by a more specifically adapted animal. My understanding of D is that it is like a bear, trying to be good at everything. (Maybe that's why bearophile likes it so much!)

But the environment for programming is sufficiently competitive that a language which is merely good at everything, without being the best at something, could be beaten out of the race simply by not having a niche. Therefore I see an emphasis on one thing as a strategic advantage, even if one's ultimate goal is to build something which is actually good at everything.

It certainly seems to turn a lot of heads when D rivals the fastest languages in a performance comparison. Having caught their attention, D can introduce its other advantages. The two which seem most prominent to me are compile time (often 10% of C++'s) and overall expressiveness, but it seems like almost nothing has been completely ignored.

I'm more or less a fanboy, so I'm sort of on board for better or worse. Even so, I sometimes feel like this community is building some kind of Cyberdyne Systems Terminator in their garage or something.
Aug 26 2013
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Monday, 26 August 2013 at 07:32:53 UTC, Zach the Mystic wrote:
 On Sunday, 25 August 2013 at 23:26:19 UTC, Ramon wrote:
 But then, maybe D's beauty in part lies in the fact that it 
 offers a lot regarding safety/reliability - and - very nice 
 performance, too ;)
One of the theories as to why there are no bears to be found on the African continent is that they are omnivores - i.e. generalists - which in a hugely competitive environment such as Africa, there is no niche in which they will not be beat out by a more specifically adapted animal. My understanding of D is that is like a bear, trying to be good at everything. (Maybe that's why bearophile likes it so much!)
Humans come from Africa. You'll find a significant number of monkeys there as well.
Aug 26 2013
next sibling parent "Ramon" <spam thanks.no> writes:
On Monday, 26 August 2013 at 08:01:51 UTC, deadalnix wrote:
 On Monday, 26 August 2013 at 07:32:53 UTC, Zach the Mystic 
 wrote:
 On Sunday, 25 August 2013 at 23:26:19 UTC, Ramon wrote:
 But then, maybe D's beauty in part lies in the fact that it 
 offers a lot regarding safety/reliability - and - very nice 
 performance, too ;)
One of the theories as to why there are no bears to be found on the African continent is that they are omnivores - i.e. generalists - which in a hugely competitive environment such as Africa, there is no niche in which they will not be beat out by a more specifically adapted animal. My understanding of D is that is like a bear, trying to be good at everything. (Maybe that's why bearophile likes it so much!)
Humans come from Africa. You'll find a significant number of monkeys there as well.
D *does* offer roughly the speed of C++ *and* compiles significantly faster *and* offers reasonable strings and dynamic arrays *and* offers DbC *and* offers considerable levels of safety. I don't see where this "bear" gets beaten up by lions or other kings of an environment. Yes, it's not yet fully there and still has some quirks. But then, D is not yet a fully grown-up bear with decades of predatory experience. And I for one couldn't care less about 2% less speed - which I'll more than make up in development time, anyway.
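Just to put two of those claims into code (a minimal sketch of my own, nothing official): built-in dynamic arrays plus in/out contracts (DbC):

int[] takeFirst(int[] data, size_t n)
in
{
    assert(n <= data.length, "asked for more elements than there are");
}
out (result)
{
    assert(result.length == n);
}
body
{
    return data[0 .. n];
}

void main()
{
    int[] xs = [1, 2, 3, 4];          // built-in dynamic array
    xs ~= 5;                          // grows on demand
    assert(takeFirst(xs, 2) == [1, 2]);
}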
Aug 26 2013
prev sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 26/08/13 10:01, deadalnix wrote:
 Humans come from Africa. You'll find a significant number of monkeys there as well.
With the caveat that I'm not an evolutionary biologist, palaeontologist or other appropriate expert, I'd be surprised if generalism vs. specificity was _the_ reason for the lack of sub-Saharan bear species. If you bear in mind (ha!) the evolutionary origin of bears in North America, basic geographical obstacles are likely the major factor in the lack of proliferation beyond the north of the continent. If you dumped a large enough population of bears into the African wild, they'd probably survive and maybe even displace other native species.
Aug 26 2013
prev sibling next sibling parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 26/08/13 09:32, Zach the Mystic wrote:
 One of the theories as to why there are no bears to be found on the African
 continent is that they are omnivores - i.e. generalists - which in a hugely
 competitive environment such as Africa, there is no niche in which they will
not
 be beat out by a more specifically adapted animal. My understanding of D is
that
 is like a bear, trying to be good at everything. (Maybe that's why bearophile
 likes it so much!)
There were bears in North Africa at least, but they died out fairly recently due to human hunting and other bloodsports: https://en.wikipedia.org/wiki/Atlas_Bear
Aug 26 2013
parent reply "Zach the Mystic" <reachzach gggggmail.com> writes:
On Monday, 26 August 2013 at 08:18:02 UTC, Joseph Rushton 
Wakeling wrote:
 There were bears in North Africa at least, but they died out 
 fairly recently due to human hunting and other bloodsports:
 https://en.wikipedia.org/wiki/Atlas_Bear
I'm kind of glad I didn't know this until now. Too much information can get in the way of a good metaphor, in my opinion!
Aug 26 2013
parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 26/08/13 18:17, Zach the Mystic wrote:
 I'm kind of glad I didn't know this until now. Too much information can get in
 the way of a good metaphor, in my opinion!
That one about the boiling frog is a bit dodgy as well ... :-)
Aug 30 2013
prev sibling next sibling parent reply "Chris" <wendlec tcd.ie> writes:
On Monday, 26 August 2013 at 07:32:53 UTC, Zach the Mystic wrote:
 One of the theories as to why there are no bears to be found on 
 the African continent is that they are omnivores - i.e. 
 generalists - which in a hugely competitive environment such as 
 Africa, there is no niche in which they will not be beat out by 
 a more specifically adapted animal. My understanding of D is 
 that is like a bear, trying to be good at everything. (Maybe 
 that's why bearophile likes it so much!)

 But the environment for programming is sufficiently competitive 
 that a language which is merely good at everything without 
 being the best at something could be beaten out of the race 
 simply by not having a niche. Therefore I see an emphasis on 
 one thing to be a strategic advantage even if one's ultimate 
 goal is to build something which is actually good at everything.

 It certainly seems to turn a lot of heads when D rivals the 
 fastest languages in a performance comparison. Having caught 
 their attention, D can introduce its other advantages. The two 
 which seem most prominent to me are compile time (often 10% of 
 C++'s) and overall expressiveness, but it seems like almost 
 nothing has been completely ignored.

 I'm more or less a fanboy, so I'm sort of on-board for better 
 or worse. Even so, I sometimes feel like this community is 
 building some kind of Cyberdyne Systems Terminator in their 
 garage or something.
I don't agree. I first used D exactly because it is an "all-rounder". For me built-in UTF support was as important a factor as native machine code (performance). The reasons why people would prefer C++ to D are probably habit and convenience. If you've used C++ for years why should you bother to learn D? After all, C++ is well established, well-documented, has loads of libraries, will get you a job more easily etc. Language features and performance are sometimes over-estimated when it comes to analyzing why a language succeeded. There's convenience, marketing (propaganda) etc etc.

Also I don't think that performance alone decides whether a language becomes popular or not. If it were solely down to performance we wouldn't have Java or Python or even Objective-C (which used to be criticized for being too slow). Ease of use, a clear and consistent structure and "write once run everywhere" are very important too. Especially now that developers have to face so many different platforms (Linux, Mac, Windows, Android, iOS) everything goes in the direction of "write once ..." That's one of the reasons why Android took off, I think, because developers said "Great, maybe this will put an end to the mobile platform jungle. We'll support Android, less headaches for us!".

D has what it takes to make it. I don't think the language itself is the problem. And of course, you will always hear arguments like "But C++ is 1% faster" from people who want to hold on to what they have spent years learning. It's completely understandable, it's like the song "There's a Hole in My Bucket" (http://en.wikipedia.org/wiki/There%27s_a_Hole_in_My_Bucket). Any excuse.
Aug 26 2013
parent reply "Zach the Mystic" <reachzach gggggmail.com> writes:
On Monday, 26 August 2013 at 09:29:18 UTC, Chris wrote:
 I don't agree. I first used D exactly because it is an 
 "all-rounder". For me built-in UTF support was as important a 
 factor as native machine code (performance). The reasons why 
 people would perfer C++ to D are probably habit and 
 convenience. If you've used C++ for years why should you bother 
 to learn D? After all, C++ is well established, 
 well-documented, has loads of libraries, will get you a job 
 more eaily etc. Language features and performance are sometimes 
 over-estimated when it comes to analyzing why a language 
 succeeded. There's convenience, marketing (propaganda) etc etc.

 Also I don't think that performance alone decides whether a 
 language becomes popular or not. If it were soley down to 
 performance we wouldn't have Java or Python or even Objective-C 
 (which used to be criticized for being too slow). Ease of use, 
 a clear and consistent structure and "write once run 
 everywhere" are very important too. Especially now that 
 developers have to face so many different platforms (Linux, 
 Mac, Windows, Android, iOS) everythnig goes into the direction 
 of "write once ..." That's one of the reasons why Android took 
 off, I think, because developers said "Great, maybe this will 
 put an end to the mobile platform jungle. We'll support 
 Android, less headaches for us!".
I'm trying to analyze the problem from a strategic point of view. For this, I like to invoke the Serenity Prayer (please forgive any distasteful theology):

"God, grant me the serenity to accept the things I cannot change,
The courage to change the things I can,
And wisdom to know the difference."

So yeah, you can't control people's choices. The question is, of the things which *can* be done, is it better to focus on well-roundedness, or to press an advantage where one is already ahead of the pack? Which will lead to greater adoption of the language?

One thing about performance in particular is that it's easy to measure and easy for the naive person to understand what it is. So there is perhaps a risk of being seduced into sacrificing other things which are more subtle, but equally important, in favor of winning at performance. Yet I think the key point is that that's not going to happen here. I personally have too much respect for the engineers working on this thing to think they would be that short-sighted. But if it's *possible* to grab a performance trophy while still keeping the other flocks well-fed, it seems like a clear strategic win.
Aug 26 2013
parent "Chris" <wendlec tcd.ie> writes:
On Monday, 26 August 2013 at 18:33:55 UTC, Zach the Mystic wrote:
 On Monday, 26 August 2013 at 09:29:18 UTC, Chris wrote:
 I don't agree. I first used D exactly because it is an 
 "all-rounder". For me built-in UTF support was as important a 
 factor as native machine code (performance). The reasons why 
 people would perfer C++ to D are probably habit and 
 convenience. If you've used C++ for years why should you 
 bother to learn D? After all, C++ is well established, 
 well-documented, has loads of libraries, will get you a job 
 more eaily etc. Language features and performance are 
 sometimes over-estimated when it comes to analyzing why a 
 language succeeded. There's convenience, marketing 
 (propaganda) etc etc.

 Also I don't think that performance alone decides whether a 
 language becomes popular or not. If it were soley down to 
 performance we wouldn't have Java or Python or even 
 Objective-C (which used to be criticized for being too slow). 
 Ease of use, a clear and consistent structure and "write once 
 run everywhere" are very important too. Especially now that 
 developers have to face so many different platforms (Linux, 
 Mac, Windows, Android, iOS) everythnig goes into the direction 
 of "write once ..." That's one of the reasons why Android took 
 off, I think, because developers said "Great, maybe this will 
 put an end to the mobile platform jungle. We'll support 
 Android, less headaches for us!".
 I'm trying to analyze the problem from a strategic point of view. For this, I like to invoke the Serenity Prayer (please forgive any distasteful theology): "God, grant me the serenity to accept the things I cannot change, The courage to change the things I can, And wisdom to know the difference." So yeah, you can't control people's choices. The question is, of the things which *can* be done, is it better to focus on well-roundedness, or to press an advantage where one is already ahead of the pack? Which will lead to greater adoption of the language? One thing about performance in particular is that it's easy to measure and easy for the naive person to understand what it is. So there is perhaps a risk of being seduced into sacrificing other things which are more subtle, but equally important, in favor of winning at performance. Yet I think the key point is that that's not going to happen here. I personally have too much respect for the engineers working on this thing to think they would be that short-sighted. But if it's *possible* to grab a performance trophy while still keeping the other flocks well-fed, it seems like a clear strategic win.
Yes and no. I've actually given up pointing out language features or even performance to people. Believe it or not, but usually the first questions are things like "Does it have libraries? If it doesn't have a sound library I will not use it ..." And if someone is a die-hard D-hater s/he will always find something like "D doesn't support multi-sync-runtime-polymorphy on reversed soon!"

I think being an all-rounder is a good approach, you can use it for small script-like projects and big projects with unit tests, component programming etc. So as people ask about features, you just keep ticking the boxes. But if you start to point out one feature in particular, you're going down the slippery road of bit-by-bit language comparison, which will lead you nowhere.

I think it would help a lot if we had a "Made with D" list, especially if there are some killer apps or games and the like.
Aug 27 2013
prev sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Zach the Mystic:

 One of the theories as to why there are no bears to be found on 
 the African continent is that they are omnivores - i.e. 
 generalists - which in a hugely competitive environment such as 
 Africa, there is no niche in which they will not be beat out by 
 a more specifically adapted animal.
I presume the actual causes of the lack of bears in Africa to be more historic and more complex. Generalists are not less fit than specialists; their relative fitness changes as conditions change. And in a place as large as Africa nearly everything happens :-) We could use Kickstarter to fund the introduction of a population of three hundred sloth bears in Kenya ;-)
 (Maybe that's why bearophile likes it so much!)
I like their omnivorous nature, and indeed my interests are somewhat "omnivore". There is also a cute story: http://cunycomposers.wikispaces.com/file/view/Bisson,+Terry+--+Bears+Discover+Fire.pdf Bye, bearophile
Aug 26 2013
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/25/13 3:00 PM, Walter Bright wrote:
 On 8/21/2013 9:50 AM, Ramon wrote:
 I am *not* against keeping an eye on performance, by no means. Looking at
 Moore's law, however, and at the kind of computing power available
 nowadays even
 in smartphones, not to talk about 8 and 12 core PCs, I feel that the
 importance
 of performance is way overestimated (possibly following a formertimes
 justified
 tradition).
While a 5% performance boost is not relevant for consumer apps, it can make an enormous difference for server side apps. For example, if you've got a $100m server farm, 5% means you save $5m, and server farms can be much, much bigger than that.
More than server acquisition costs, it's the electricity. Andrei
Aug 25 2013
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-08-19 23:17, H. S. Teoh wrote:

 Yeah, in this day and age, not having native Unicode support is simply
 unacceptable. The world has simply moved past the era of ASCII (and the
 associated gratuitously incompatible locale encodings). Neither is the
 lack of built-in strings (*cough*C++*cough*).
Oh, how I wish that were true. We're still struggling with encoding problems at work due to browsers and third party services and tools not being able to handle Unicode. -- /Jacob Carlborg
Aug 19 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Aug 20, 2013 at 08:58:54AM +0200, Jacob Carlborg wrote:
 On 2013-08-19 23:17, H. S. Teoh wrote:
 
Yeah, in this day and age, not having native Unicode support is
simply unacceptable. The world has simply moved past the era of ASCII
(and the associated gratuitously incompatible locale encodings).
Neither is the lack of built-in strings (*cough*C++*cough*).
Oh, what I wish that was true. We're still struggling with encoding problems at work due to browsers and third party services and tools not being able to handle Unicode.
[...] Well, I was referring to languages and systems invented today. Obviously there is still a large amount of legacy code that can't handle Unicode yet, but any new language or new system invented today has no excuse to not support Unicode. T -- Those who've learned LaTeX swear by it. Those who are learning LaTeX swear at it. -- Pete Bleackley
Aug 20 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/20/2013 7:30 AM, H. S. Teoh wrote:
 Well, I was referring to languages and systems invented today. Obviously
 there is still a large amount of legacy code that can't handle Unicode
 yet, but any new language or new system invented today has no excuse to
 not support Unicode.
Even back in 1999 when I started with D, it was obvious that it had to be Unicode front to back.
Aug 20 2013
prev sibling parent reply "Wyatt" <wyatt.epp gmail.com> writes:
On Monday, 19 August 2013 at 21:19:05 UTC, H. S. Teoh wrote:
 scope guards. If I were granted a wish for how to lessen the 
 pain of
 coding in C, one of the first things I'd ask for is scope.
A little OT at this point, but C Survival Kit might have you sufficiently covered: https://github.com/chadjoan/C-Survival-Kit/blob/master/survival_kit/feature_emulation/scope.h (Been considering pulling this in at work, too.) -Wyatt
Aug 20 2013
parent reply "Ramon" <spam thanks.no> writes:
Yes and no.
While UTF-8 almost always is the most memory efficient 
representation of anything beyond ASCII it does have a property 
that can be troublesome at times, the difference between length 
and size of a string, i.e. the number of "characters" vs. the 
number of bytes used.
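
A minimal sketch of that length-vs-size difference in D (the literal "Grüße" is just an example picked for illustration, nothing from this thread):

import std.stdio;
import std.utf : count;

void main()
{
    string s = "Grüße";   // 5 "characters", but more than 5 bytes
    writeln(s.length);    // 7 -> UTF-8 code units, i.e. bytes
    writeln(s.count);     // 5 -> code points ("characters")
}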

---

As for another issue I'm getting more and more disappointed: 
generics.

To put (my mind) bluntly, D does *not* support generics but 
rather went into the same ugly trap C++ went into, albeit D 
handles the situation way more elegantly.

Forgive me seeming harsh here but I just wrote it in the D gui 
thread: Any really great solution needs a solid philosophy and 
very profound thinking and consistency - and here D failed 
bluntly (in my mind's eye).

With all due respect: Templates are an editor's job, not a 
compiler's.
Yes, templates offer some gadgets beyond simple replacement but 
basically they are just a comfort thingy, relieving the 
programmer from typing.

That, however, was *not* the point about generics. They are about 
implementing algorithms independent of data types (as far as 
possible).

Now, I'm looking around at mixins and interfaces in order to 
somehow makeshift some kind of a halfway reasonable generics 
mechanism. Yuck!

Well, maybe it's my fault. Maybe I was foolish to hope for 
something like Eiffel but more pragmatically useful and useable, 
more C style and way better documented. What I seem to have found 
with D is a *very nice* and *way better* and considerably more 
useful kind of C++.

Why aren't real generics there? I mean it's not that high tech or 
hard to implement (leaving aside macho bla bla like "It'd break 
the ranges system").

why not something like

generic min(T:comparable) { // works only with comparable 
types/classes
   // min magic
}

This could then at *run time* work with anything that met the 
spec for "comparable" which basically came down to anything that 
offers "equ" and "gt" (and "not").

On a sidenote: It seems we are somehow trapped in between two 
worlds, the theoreticians and the pragmatics. Walter and his 
colleagues have created an astonishingly beautiful beast coming 
from pure pragmatic engineering, while e.g. Prof. Meyer has 
created a brilliant system that just happens to be factually 
unuseable for the majority of developers (and be it only because 
hardly anyone will spend some 1.000$ to get started with Eiffel).
It's as if one side acted purely as engineers while the other side 
just didn't care sh*t about their stuff being useable and useful.

My sincere apologies if I happened to offend anyone; that was 
definitely not my intention.
Aug 20 2013
next sibling parent reply "QAston" <qaston gmail.com> writes:
On Tuesday, 20 August 2013 at 16:40:21 UTC, Ramon wrote:
 Yes and no.
 While UTF-8 almost always is the most memory efficient 
 representation of anything beyond ASCII it does have a property 
 that can be troublesome a times, the difference between length 
 and size of a string, i.e. the number of "characters" vs. the 
 number of bytes used.

 ---

 As for another issue I'm getting more and more disappointed: 
 generics.

 To put (my mind) bluntly, D does *not* support generics but 
 rather went into the same ugly trap C++ went into, albeit D 
 handles the situation way more elegantly.

 Forgive me seeming harsh here but I just wrote it in the D gui 
 thread: Any really great solution needs a solid philosophy and 
 very profound thinking and consistency - and here D failed 
 bluntly (in my minds eye).

 With all due respect: Templates are an editors job, not a 
 compilers.
 Yes, templates offer some gadgets beyond simple replacement but 
 basically they are just a comfort thingy, relieving the 
 programmer from typing.

 That, however, was *not* the point about generics. They are 
 about implementing algorithms independent of data types (as far 
 as possible).

 Now, I'm looking around at mixins and interfaces in order to 
 somehow makeshift some kind of a halfway reasonable generics 
 mechanism. Yuck!

 Well, maybe it's my fault. Maybe I was foolish to hope for 
 something like Eiffel but more pragmatically useful and 
 useable, more C style and way better documented. What I seem to 
 have found with D is a *very nice* and *way better* and 
 considerably more useful kind of C++.

 Why aren't real generics there? I mean it's not that high tech 
 or hard to implement (leaving aside macho bla bla like "It'd 
 break the ranges system").

 why not something like

 generic min(T:comparable) { // works only with comparable 
 types/classes
   // min magic
 }

 This could then at *run time* work with anything that met the 
 spec for "comparable" which basically came down to anything 
 that offers "equ" and "gt" (and "not").
Interfaces offer runtime resolution:

interface Comparable
{

}
void doStuff(Comparable c)
{
}

will work with anything that meets the specs for comparable.

For compile time resolution you can do this
Aug 20 2013
next sibling parent "QAston" <qaston gmail.com> writes:
On Tuesday, 20 August 2013 at 16:49:35 UTC, QAston wrote:
 On Tuesday, 20 August 2013 at 16:40:21 UTC, Ramon wrote:
 Yes and no.
 While UTF-8 almost always is the most memory efficient 
 representation of anything beyond ASCII it does have a 
 property that can be troublesome a times, the difference 
 between length and size of a string, i.e. the number of 
 "characters" vs. the number of bytes used.

 ---

 As for another issue I'm getting more and more disappointed: 
 generics.

 To put (my mind) bluntly, D does *not* support generics but 
 rather went into the same ugly trap C++ went into, albeit D 
 handles the situation way more elegantly.

 Forgive me seeming harsh here but I just wrote it in the D gui 
 thread: Any really great solution needs a solid philosophy and 
 very profound thinking and consistency - and here D failed 
 bluntly (in my minds eye).

 With all due respect: Templates are an editors job, not a 
 compilers.
 Yes, templates offer some gadgets beyond simple replacement 
 but basically they are just a comfort thingy, relieving the 
 programmer from typing.

 That, however, was *not* the point about generics. They are 
 about implementing algorithms independent of data types (as 
 far as possible).

 Now, I'm looking around at mixins and interfaces in order to 
 somehow makeshift some kind of a halfway reasonable generics 
 mechanism. Yuck!

 Well, maybe it's my fault. Maybe I was foolish to hope for 
 something like Eiffel but more pragmatically useful and 
 useable, more C style and way better documented. What I seem 
 to have found with D is a *very nice* and *way better* and 
 considerably more useful kind of C++.

 Why aren't real generics there? I mean it's not that high tech 
 or hard to implement (leaving aside macho bla bla like "It'd 
 break the ranges system").

 why not something like

 generic min(T:comparable) { // works only with comparable 
 types/classes
  // min magic
 }

 This could then at *run time* work with anything that met the 
 spec for "comparable" which basically came down to anything 
 that offers "equ" and "gt" (and "not").
 Interfaces offer runtime resolution: interface Comparable { } void doStuff(Comparable c) { } will work with anything that meets the specs for comparable. For compile time resolution you can do this
sorry, I misclicked and then unintentionally posted this unfinished by "Send" keyboard shortcut :(
Aug 20 2013
prev sibling parent "Ramon" <spam thanks.no> writes:
On Tuesday, 20 August 2013 at 16:49:35 UTC, QAston wrote:
 Interfaces offer runtime resolution:

 interface Comparable
 {

 }
 void doStuff(Comparable c)
 {
 }
 will work with anything that meets the specs for comparable.

 For compile time resolution you can do this
Thanks QAston for your constructive and helpful suggestion. Actually this is more or less the approach that I'm following (researching for the time being). Actually I assume that Prof. Meyer was at that point at some time, too. He just happened, so it seems, to have figured out a way to do polymorphism right and painfree. Pragmatically this (your suggestion) pretty closely matches how one approaches it in Eiffel (but don't tell Prof. Meyer! He'll probably vehemently elaborate on theory *g). Whatever, that's basically what I wanted. Although I have to lament somewhat that D's doc (as far as I know) doesn't point that out clearly. Thanks.
Aug 20 2013
prev sibling next sibling parent "Dicebot" <public dicebot.lv> writes:
On Tuesday, 20 August 2013 at 16:40:21 UTC, Ramon wrote:
 ...
You are completely right - templates are not generics. They are, ironically, much more generic and are there to solve quite a simple problem - "copy-paste", in a variety of forms. If you think it is better to have an IDE generate boilerplate instead of the compiler, I can assure you, finding supporters in this community will be quite hard. Insisting on the idea that paying run-time polymorphism costs for generic data types is the True Way won't help either.

Honestly, I will never use any language that implies polymorphic design for stuff like containers. Not of my free will at least. And every time I remember that boxing stuff in Java I have nightmares.

You want a polymorphic approach - you have the tools to implement it. Interfaces, classes, suit yourself. But, please, don't try to fix what is not broken.
Aug 20 2013
prev sibling next sibling parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Tuesday, 20 August 2013 at 16:40:21 UTC, Ramon wrote:
 Yes and no.
 While UTF-8 almost always is the most memory efficient 
 representation of anything beyond ASCII it does have a property 
 that can be troublesome a times, the difference between length 
 and size of a string, i.e. the number of "characters" vs. the 
 number of bytes used.
If truly you are using UTF-16 (which is what D uses), then no. UTF-16 is *also* a variable width encoding. If you need random access, you should use UTF-32 (dstring). *THAT* uses a lot of memory, and should only be used as an "operating" format, before storing back to UTF-8/16.

"Non-variable" UTF-16 is called UCS-2 (I think). In any case, it's not what D uses. UCS-2 being a subset of UTF-16, you can always use wstrings and "assume" it is UCS-2, but:
* Most algorithms are UTF-16 aware, so they *will* decode and walk your UCS-2 stream the slow way.
* Nothing will prevent you from accidentally inserting codepoints from outside the UCS-2 valid plane.

I don't recommend doing that. Instead, you can find in std.encoding the UCSChar and UCSString data types. I haven't used these much, but it's what you should use if you are planning to store your strings in a random access wide representation.

But we digress from the original point. I'm glad you are enjoying your time with D :) One of the things I love about D is how the *language* makes stupid constructs outright illegal (for example "for( ... );" - notice that semicolon? yeah...). I work full-time using C++, and about once a week I track down a bug, and when I find it, it often turns out to be something stupid that D would not have allowed.
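
To make the "operating format" point concrete, a tiny sketch (the literal is again just an illustrative example):

import std.conv : to;
import std.stdio;

void main()
{
    string  s8  = "naïve";        // UTF-8: variable width, 6 code units
    dstring s32 = s8.to!dstring;  // UTF-32: one code point per element

    writeln(s32[2]);              // 'ï' - cheap random access by code point
    writeln(s32.length);          // 5 code points
    writeln(s8.length);           // 6 code units (bytes)

    string back = s32.to!string;  // store back as UTF-8 when done operating
    writeln(back == s8);          // true
}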
Aug 20 2013
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/20/2013 06:40 PM, Ramon wrote:
 Yes and no.
 While UTF-8 almost always is the most memory efficient representation of
 anything beyond ASCII it does have a property that can be troublesome a
 times, the difference between length and size of a string, i.e. the
 number of "characters" vs. the number of bytes used.

 ---

 As for another issue I'm getting more and more disappointed: generics.

 To put (my mind) bluntly, D does *not* support generics but rather went
 into the same ugly trap C++ went into, albeit D handles the situation
 way more elegantly.
 ...
Yup. It is a limited, but quite well-integrated macro system.
 Forgive me seeming harsh here but I just wrote it in the D gui thread:
 Any really great solution needs a solid philosophy and very profound
 thinking and consistency - and here D failed bluntly (in my minds eye).
 ...
Agreed. Well, it is fixable. The main issue is that templates manage to hide the problem well enough. Also, why stop there? The lambda cube has more dimensions, and those cannot be approximated that well by templates. :)
 With all due respect: Templates are an editors job, not a compilers.
Here I'd tend to disagree. Code duplication is the compiler's job.
 Yes, templates offer some gadgets beyond simple replacement but
 basically they are just a comfort thingy, relieving the programmer from
 typing.
 ...
Well, but by a similar line of thought I might claim that a polymorphic type system is just a comfort thingy, relieving the programmer from manually boxing and unboxing values and performing type equality/constraint checking in his head.
 That, however, was *not* the point about generics. They are about
 implementing algorithms independent of data types (as far as possible).
 ...
This is one use case for templates and they allow more performance optimizations since they can actually treat some types specially.
 Now, I'm looking around at mixins and interfaces in order to somehow
 makeshift some kind of a halfway reasonable generics mechanism. Yuck!

 Well, maybe it's my fault. Maybe I was foolish to hope for something
 like Eiffel but more pragmatically useful and useable, more C style and
 way better documented. What I seem to have found with D is a *very nice*
 and *way better* and considerably more useful kind of C++.

 Why aren't real generics there? I mean it's not that high tech  or hard
 to implement (leaving aside macho bla bla like "It'd break the ranges
 system").
 ...
It wouldn't break the ranges system. The official justification for lack of a more expressive type system is language complexity.
 why not something like

 generic min(T:comparable) { // works only with comparable types/classes
    // min magic
 }
 ...
Implicit parameters would be a more general way to deal with the 'comparable' constraint, but it's not entirely trivial to dream up a pretty scheme fitting into D. Also, how do you implement the type comparison constraint? Requiring the whole interface to be implemented within a struct/class-type's scope implies that somewhat ugly wrapper types need to be created. Also, you really want shortcut syntax for functions, structs etc. that does not clash with template syntax, so probably you'd use a separate kind of brackets:

T min[T:comparable](T a, T b) { return a<b?a:b; }
 This could then at *run time* work with anything that met the spec for
 "comparable" which basically came down to anything that offers "equ" and
 "gt" (and "not").
 ...
The D term is opCmp.
 On a sidenote: It seems we are somehow trapped in between two worlds,
 the theoreticians and the pragmatics. Walter and his colleagues have
 created an astonishingly beautiful beast coming from pure pragmatic
 engineering, while e.g. Prof. Meyer has created a brilliant system that
 just happens to be factually unuseable for the majority of developers
 (and be it only because hardly anyone will spend some 1.000$ to get
 started with Eiffel).
 ...
One thing that should be noted about Eiffel is that its type system is unsound by design.
 It's as if one side a purely as engineers while the other side just
 didn't care sh*t about their stuff being useable and useful.

 My sincere apologies if I happened to offend anyone; that was definitely
 not my intention.
You are usually free to express your opinions on here without anyone taking issue if you justify your statements and/or are open to discussion.
Aug 20 2013
parent reply "Ramon" <spam thanks.no> writes:
First, thanks all for returning (or keeping up) a constructive 
discussion.

 monarch_dodra

My error, sorry. I was talking in the context of a western view, 
ignoring China, Japan, Koreas (and probably some more Asian 
countries/languages, too). Not meaning to propagate that as 
generally sound practice, I personally happen to work in a very 
western-centric world where, say, Russian (Cyrillic alphabet) is 
already considered *very* exotic. In that context, however, 
16bits are plenty enough.


 H. S. Teoh

I remember stubbornly refusing Windows and staying with 
DOS or Unix (CLI). Well, work forced me to make compromises and 
since FreeBSD came up (and Solaris worked on X86) I made one step 
and another ... and am (by your standards) pretty rotten 
nowadays *g

I got your point and I agree. Yes, it's a major plus for D to be 
cli useable and not requiring a (probably bloated) IDE. OTOH, I'm 
quite liberal in that and never had qualms about using an IDE 
(or, in old days, E, brief and the like); after all, a computer's 
raison d'être is to make our lives easier and to take on a 
gazillion of boring little tasks, no?
But it's, of course, strongly desirable to have "direct access" 
on the commandline, which IMO is valid for other areas, too. 
Actually it always was one of my reasons to outright hate Windows 
for keeping me away from its guts.


 Timon Gehr

 Here I'd tend to disagree. Code duplication is the compiler's 
 job.
I get your point but I disagree. Sure, looking at it from D's point of view (with very powerful and elaborate "duplication" facilities) you are right. I see that, however, as a (valuable) add-on. It doesn't change the fact that the technical part of code production is an editor's job; just think code completion, intellisense and the like. Maybe it's just a perspective thing. I tend to feel that the editor is the interface between the system and myself. What is produced with it will then be the input to a compiler. Anyway, that's not important because thanks to D's facilities we can actually have it both ways ;)

As for generics: Maybe I'm not the brightest guy around but I have succeeded in noticing that there seems to be a tendency in D's community to not react warmly to critical remarks regarding generics ...

Actually, I'm far away from hitting on D. Even if, suppose, its generics were lousy, there would still be lots of solid good reasons to like D and to consider it one of the best approaches far and wide. It might as well be my fault to be stubbornly fixed on "generics done right".
Aug 20 2013
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 08/20/2013 10:24 PM, Ramon wrote:
  Timon Gehr
...

 As for generics: Maybe I'm not the brightest guy around but I have
 suceeded in noticing that there seems to be tendency in D's community to
 not react warmly to critical remarks regarding generics ...
Why would it be relevant for arguing a point whether reactions are warm or not?
Aug 20 2013
prev sibling parent reply "Jesse Phillips" <Jesse.K.Phillips+D gmail.com> writes:
On Tuesday, 20 August 2013 at 20:24:21 UTC, Ramon wrote:
  Timon Gehr

 Here I'd tend to disagree. Code duplication is the compiler's 
 job.
 I get your point but I disagree.
No, I disagree :) There have been several months, or more, of research going into reusable code. Languages include things like functions, modules, and even classes to help produce code which can be shared across many applications. Some languages research polymorphic data to provide generics. To suggest that a language isn't needed to handle code duplication because an IDE can duplicate it for you is absurd.

I don't frown on copy-paste code because "it's the thing to do" or because it causes more typing. Copy-paste is bad because your logic is now duplicated and requires twice (on a good day) the updates. To have the IDE do this still has the core problem. One could create a syntax for the IDE to expand the code before compilation, allowing for a single location of logic... but now you've just invented a non-standard macro language that could have just been dealt with in the compiler.
Aug 20 2013
parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 8/21/13, Jesse Phillips <Jesse.K.Phillips+D gmail.com> wrote:
 I don't frown on copy-paste code because "it's the thing to do"
 or because it causes more typing. copy-paste is bad because your
 logic is now duplicated and requires twice (on a good day) the
 updates.
Speaking of which someone should try and make a D de-duplication project (perhaps using Dscanner[1]), which would print out all the duplicated code segments in a D codebase. I think it would be a neat thing to have. [1] : https://github.com/Hackerpilot/Dscanner
Aug 21 2013
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/21/13 3:53 AM, Andrej Mitrovic wrote:
 On 8/21/13, Jesse Phillips <Jesse.K.Phillips+D gmail.com> wrote:
 I don't frown on copy-paste code because "it's the thing to do"
 or because it causes more typing. copy-paste is bad because your
 logic is now duplicated and requires twice (on a good day) the
 updates.
Speaking of which someone should try and make a D de-duplication project (perhaps using Dscanner[1]), which would print out all the duplicated code segments in a D codebase. I think it would be a neat thing to have. [1] : https://github.com/Hackerpilot/Dscanner
Awesome idea. One would run the deduper over a codebase and e.g. show the top 10 longest repeated subsequences. Those could be refactored into functions etc. The deduper would be insensitive to alpha renaming, e.g. "int a = 10;" and "int b = 10;" would be identical. Andrei
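
A toy sketch of the idea in D (not based on Dscanner's lexer, no insensitivity to renaming, exact k-line textual repeats only - just to show the rough shape of such a tool):

import std.algorithm : map;
import std.array : array, join;
import std.stdio;
import std.string : strip;

void main(string[] args)
{
    enum windowSize = 4;  // how many consecutive lines form a "snippet"

    // Expects a source file name as the first argument;
    // whitespace is stripped so indentation differences don't matter.
    auto lines = File(args[1]).byLineCopy.map!(l => l.strip).array;

    // Map each windowSize-line snippet to the 1-based line numbers where it starts.
    size_t[][string] seen;
    auto windows = lines.length >= windowSize ? lines.length - windowSize + 1 : 0;
    foreach (i; 0 .. windows)
        seen[lines[i .. i + windowSize].join("\n")] ~= i + 1;

    // Report snippets that occur more than once.
    foreach (snippet, starts; seen)
        if (starts.length > 1)
            writefln("duplicated %s times, starting at lines %s:\n%s\n",
                     starts.length, starts, snippet);
}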
Aug 21 2013
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/21/2013 10:12 AM, Andrei Alexandrescu wrote:
 On 8/21/13 3:53 AM, Andrej Mitrovic wrote:
 Speaking of which someone should try and make a D de-duplication
 project (perhaps using Dscanner[1]), which would print out all the
 duplicated code segments in a D codebase. I think it would be a neat
 thing to have.

 [1] : https://github.com/Hackerpilot/Dscanner
Awesome idea. One would run the deduper over a codebase and e.g. show the top 10 longest repeated subsequences. Those could be refactored into functions etc. The deduper would be insensitive to alpha renaming, e.g. "int a = 10;" and "int b = 10;" would be identical.
I've often thought of writing a pass for dmd that would coalesce functions that are semantically identical (even though they may operate on different types).
Aug 21 2013
parent reply Manfred Nowak <svv1999 hotmail.com> writes:
Walter Bright wrote:
 semantically identical
This would be equivalent to finding plagiarisms and result in a semantic compression of a software base---and seems to be computationally intractable unless severely restricted. -manfred
Aug 22 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/22/2013 7:52 PM, Manfred Nowak wrote:
 Walter Bright wrote:
 semantically identical
This would be equivalent to finding plagiarisms and result in a semantical compression of a software base---and seems to be computational intractable unless severely restricted.
I don't think it would be that hard. The structure of the ASTs would need to match, and the types would have to match depending on the operation - for example, a + gives the same result for signed and unsigned types, whereas < does not.
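
A tiny illustration of that + vs < point (just an example, obviously not dmd's actual coalescing logic):

void main()
{
    import std.stdio;

    int  a = -1;
    uint b = cast(uint) a;              // same bit pattern as a (0xFFFFFFFF)

    // '+' yields the same bit pattern for signed and unsigned operands:
    assert(cast(uint)(a + 1) == b + 1); // both are 0

    // '<' does not: the same bits compare differently.
    writeln(a < 1);    // true:  -1 < 1 as a signed comparison
    writeln(b < 1u);   // false: 0xFFFFFFFF < 1 as an unsigned comparison
}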
Aug 22 2013
parent Manfred Nowak <svv1999 hotmail.com> writes:
Walter Bright wrote:
 The structure of the ASTs would need to match
You may be right: http://www.cs.brown.edu/publications/jgaa/accepted/99/Eppstein99.3.3.pdf -manfred
Aug 22 2013
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 08/21/2013 07:12 PM, Andrei Alexandrescu wrote:
 The deduper would be insensitive to alpha renaming, e.g. "int a = 10;"
 and "int b = 10;" would be identical.
This is not alpha renaming, it is just renaming. :o) Eg. "{int a = 10; foo(a);}" and "{int b = 10; foo(b);}" would be identical.
Aug 21 2013
prev sibling next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Ramon:

 not having multiple inheritance is a major minus with me. D 
 seems to compensate quite nicely by supporting interfaces.
D is also supposed to gain multiple "alias this", currently only one is supported. Bye, bearophile
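
For context, a minimal sketch of what the single "alias this" D already has looks like (multiple "alias this" would simply allow more than one such declaration per type; the Celsius type is just an example):

struct Celsius
{
    double degrees;
    alias degrees this;   // Celsius now implicitly converts to double
}

void main()
{
    auto c = Celsius(21.5);
    double d = c;         // goes through the alias this
    assert(d == 21.5);
}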
Aug 19 2013
prev sibling next sibling parent Peter Williams <pwil3058 bigpond.net.au> writes:
On 20/08/13 06:18, Ramon wrote:
 Falling over the famous Ariane 5 article I looked at Eiffel. I have to
 confess that I almost feel in love. Eiffel felt just right and Prof.
 Meyers books convinced me again and again - Yesss, that's the way I'd
 like to work and develop software.
 Unfortunately, though, Prof Meyer and ISE (the Eiffel company) made some
 errors, too, and in a major way.
 For a starter that whole Eiffel world is pretty much a large beautiful
 castle ... inmidst a desert. Theoretically there are different
 compilers, factually, however, ISE's Eiffelstudio is the only one; the
 others are either brutally outdated or non-conforming or weird niche
 thingies or eternally in alpha, or a mixture of those. And Eiffelstudio
 costs north of 5.000 us$. Sure there is a GPL version but that is
 available only for GPL'ed programs.
 Next issue: Eiffels documentation is plain lousy. Yes, there are some 5
 or so books but those are either purely theoretical or very outdated or
 both. Yes there is lots of documentation online but most of it basically
 is luring sales driven "Look how easy it is with Eiffel" stuff. And
 there is a doxygen like API doc which is pretty worthless for learning
 how to practically use the language.
 Furthermore, while Eiffel comes with a lot on board there still is much
 missing; just as an example there are no SSL sockets which nowadays is a
 killer.
I found similar issues with Eiffel plus I was turned off by the OOP only factor. Programming needs to be more flexible than that. BTW using Eiffel was where I first realized that contracts would never be as useful as I hoped. They're still useful and I still use them but they're not as expressive as I'd hoped for (neither are D's). In reality, they're just highly targeted unit tests. I think my disappointment stems from the fact I used to write Z specifications and I would liked to have contracts that were the equivalent. Peter
Aug 19 2013
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-08-19 22:18, Ramon wrote:

 An added major plus is D's bluntly straight interface to C. And a vital
 one, too, because let's face it, not being one of the major players in
 languages basically means to either not have a whole lot of major and
 important libraries or else to (usually painfully) bind them. D offers
 an excellent solution and gives me the peace of mind to not paranoically
 care about *every* important library coming with it.
You can use this tool to automatically generate bindings to C libraries: https://github.com/jacob-carlborg/dstep
 Criticism:

 OK, I'm biased and spoiled by Eiffel but not having multiple inheritance
 is a major minus with me. D seems to compensate quite nicely by
 supporting interfaces. But: I'd like more documentation on that. "Go and
 read at wikipedia" just doesn't cut it. Please, kindly, work on some
 extensive documentation on that.
You can get quite close with interfaces and templates:

import std.stdio;

interface Foo
{
    void foo ();
}

mixin template FooTrait ()
{
    void foo ()
    {
        writeln("foo");
    }
}

class A : Foo
{
    mixin FooTrait;
}

class B
{
    void b () { }
}

class Bar : B, Foo
{
    mixin FooTrait;
}

-- 
/Jacob Carlborg
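
(A short usage sketch of the above, assuming the snippet compiles as shown:)

void main()
{
    Foo a = new A;
    Foo b = new Bar;
    a.foo();   // prints "foo" via the mixed-in implementation
    b.foo();   // ditto, even though Bar also inherits from B
}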
Aug 20 2013
parent reply "Chris" <wendlec tcd.ie> writes:
On Tuesday, 20 August 2013 at 07:08:08 UTC, Jacob Carlborg wrote:
 You can use this tool to automatically generate bindings to C 
 libraries:

 https://github.com/jacob-carlborg/dstep
Great stuff, Jacob! Congratulations. One thing that is usually not mentioned in articles about D is that you don't need an IDE to develop in D. This was, if I remember it correctly, one of the design goals.
Aug 20 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Aug 20, 2013 at 11:19:27AM +0200, Chris wrote:
 On Tuesday, 20 August 2013 at 07:08:08 UTC, Jacob Carlborg wrote:
You can use this tool to automatically generate bindings to C
libraries:

https://github.com/jacob-carlborg/dstep
Great stuff, Jacob! Congratulations. One thing that is usually not mentioned in articles about D is that you don't need an IDE to develop in D. This was, if I remember it correctly, one of the design goals.
Was it a design goal? If so, kudos to Walter. :) Because one of my criteria for a better programming language when I decided that I was fed up with C++ and needed something better, was that it must not have undue reliance on an IDE or some other external tool to be usable. Thus, Java was disqualified (too much boilerplate that can't be adequately handled without an IDE -- of course, there were other factors, but this was a big one). It must be usable with just a text editor and a compiler. D fit that criterion rather nicely. :) T -- If creativity is stifled by rigid discipline, then it is not true creativity.
Aug 20 2013
next sibling parent reply "Chris" <wendlec tcd.ie> writes:
On Tuesday, 20 August 2013 at 14:35:19 UTC, H. S. Teoh wrote:
 On Tue, Aug 20, 2013 at 11:19:27AM +0200, Chris wrote:
 On Tuesday, 20 August 2013 at 07:08:08 UTC, Jacob Carlborg 
 wrote:
You can use this tool to automatically generate bindings to C
libraries:

https://github.com/jacob-carlborg/dstep
Great stuff, Jacob! Congratulations. One thing that is usually not mentioned in articles about D is that you don't need an IDE to develop in D. This was, if I remember it correctly, one of the design goals.
Was it a design goal? If so, kudos to Walter. :) Because one of my criteria for a better programming language when I decided that I was fed up with C++ and needed something better, was that it must not have undue reliance on an IDE or some other external tool to be usable. Thus, Java was disqualified (too much boilerplate that can't be adequately handled without an IDE -- of course, there were other factors, but this was a big one). It must be usable with just a text editor and a compiler. D fit that criterion rather nicely. :) T
I don't know if it's still on the website here somewhere. But I remember reading (2 years or so ago) that D shouldn't require a big IDE but should be manageable using a text editor and a compiler. And it is true. So far, I haven't used an IDE for my D programming.
Aug 20 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/20/2013 8:31 AM, Chris wrote:
 I don't know if it's still on the website here somewhere. But I remember
reading
 (2 years or so ago) that D shouldn't require a big IDE but should be manageable
 using a text editor and a compiler. And it is true. So far, I haven't used an
 IDE for my D programming.
The idea came about when a colleague of mine said that Java IDEs were great because with "one button click" one could insert "a hundred lines of boilerplate". That struck me as the IDE making up for a severe expressive deficit in the language, and that D shouldn't have such expressive deficits. However, something like intellisense would be quite nice.
Aug 20 2013
parent "Chris" <wendlec tcd.ie> writes:
On Tuesday, 20 August 2013 at 18:48:57 UTC, Walter Bright wrote:
 On 8/20/2013 8:31 AM, Chris wrote:
 I don't know if it's still on the website here somewhere. But 
 I remember reading
 (2 years or so ago) that D shouldn't require a big IDE but 
 should be manageable
 using a text editor and a compiler. And it is true. So far, I 
 haven't used an
 IDE for my D programming.
The idea was when a colleague of mine said that Java IDEs were great because with "one button click" one could insert "a hundred lines of boilerplate". That struck me as the IDE making up for a severe expressive deficit in the language, and that D shouldn't have such expressive deficits.
And this is a big big plus. In Java you cannot just write a tiny little main-function or in fact any function just to test something. You have to write an epic first. These little things do matter.
 However, something like intellisense would be quite nice.
Yes, I agree. But with D it's a nice-to-have not a must-have.
Aug 20 2013
prev sibling parent reply "pjmp" <pjmp progtools.org> writes:
On Tuesday, 20 August 2013 at 14:35:19 UTC, H. S. Teoh wrote:
 On Tue, Aug 20, 2013 at 11:19:27AM +0200, Chris wrote:
 On Tuesday, 20 August 2013 at 07:08:08 UTC, Jacob Carlborg 
 wrote:
You can use this tool to automatically generate bindings to C
libraries:

https://github.com/jacob-carlborg/dstep
Great stuff, Jacob! Congratulations. One thing that is usually not mentioned in articles about D is that you don't need an IDE to develop in D. This was, if I remember it correctly, one of the design goals.
Was it a design goal? If so, kudos to Walter. :) Because one of my criteria for a better programming language when I decided that I was fed up with C++ and needed something better, was that it must not have undue reliance on an IDE or some other external tool to be usable. Thus, Java was disqualified (too much boilerplate that can't be adequately handled without an IDE -- of course, there were other factors, but this was a big one). It must be usable with just a text editor and a compiler. D fit that criterion rather nicely. :) T
Programming like the 70's, yo! :) -- Paulo
Aug 20 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Aug 20, 2013 at 08:57:35PM +0200, pjmp wrote:
 On Tuesday, 20 August 2013 at 14:35:19 UTC, H. S. Teoh wrote:
On Tue, Aug 20, 2013 at 11:19:27AM +0200, Chris wrote:
[...]
One thing that is usually not mentioned in articles about D is that
you don't need an IDE to develop in D. This was, if I remember it
correctly, one of the design goals.
Was it a design goal? If so, kudos to Walter. :) Because one of my criteria for a better programming language when I decided that I was fed up with C++ and needed something better, was that it must not have undue reliance on an IDE or some other external tool to be usable. Thus, Java was disqualified (too much boilerplate that can't be adequately handled without an IDE -- of course, there were other factors, but this was a big one). It must be usable with just a text editor and a compiler. D fit that criterion rather nicely. :) T
Programming like the 70's, yo! :)
[...] LOL... to be honest, my PC "desktop" is more like a glorified terminal shell than anything else, in spite of the fact that I'm running under X11. My window manager is ratpoison, which is completely keyboard-based (hence the name), maximizes all windows by default (no tiling / overlapping), and has no window decorations. I don't even use the mouse except when using the browser or selecting text for cut/paste. (And if I had my way, I'd write a keyboard-only graphical browser that didn't depend on the mouse. I'd use Elinks instead, except that viewing images in a text terminal is rather a hassle, and there *is* a place for graphics when you need to present non-textual information -- I just don't think it's necessary when I'm dealing mostly with text anyway.)

I experimented with various ratpoison setups, and found that the most comfortable way was to increase my terminal font size so that it's approximately 80 columns wide (70's style ftw :-P), and however tall it is to fill the screen. I found that I'm most productive this way -- thanks to Vim's split-screen features and bash's backgrounding features, I find that I can do most of my work in a single terminal or two, and another background window for the browser. Since I don't even need to move my right hand to/from the mouse, I can get things done *fast* without needing a 6GHz CPU with 16GB of RAM -- a Pentium would suffice if I hadn't needed to work with CPU-intensive processes like povray renders or brute-force state space search algorithms. :)

OTOH, I find that my productivity drops dramatically when I'm confronted with a GUI. I honestly cannot stand working on Windows because of this. *Everything* depends on the mouse and traversing endless layers of nested menus just to do something simple, and almost nothing is scriptable unless specifically designed for it (which usually suffers from many limitations in how you can use it between different applications). Give me the Unix command-line any day, thank you very much.

So yes, I'm truly a relic from the 70's. ;-) T -- This sentence is false.
Aug 20 2013
next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Tuesday, 20 August 2013 at 19:41:13 UTC, H. S. Teoh wrote:
 On Tue, Aug 20, 2013 at 08:57:35PM +0200, pjmp wrote:
 On Tuesday, 20 August 2013 at 14:35:19 UTC, H. S. Teoh wrote:
On Tue, Aug 20, 2013 at 11:19:27AM +0200, Chris wrote:
[...]
One thing that is usually not mentioned in articles about D 
is that
you don't need an IDE to develop in D. This was, if I 
remember it
correctly, one of the design goals.
Was it a design goal? If so, kudos to Walter. :) Because one of my criteria for a better programming language when I decided that I was fed up with C++ and needed something better, was that it must not have undue reliance on an IDE or some other external tool to be usable. Thus, Java was disqualified (too much boilerplate that can't be adequately handled without an IDE -- of course, there were other factors, but this was a big one). It must be usable with just a text editor and a compiler. D fit that criterion rather nicely. :) T
Programming like the 70's, yo! :)
[...] LOL... to be honest, my PC "desktop" is more like a glorified terminal shell than anything else, in spite of the fact that I'm running under X11. My window manager is ratpoison, which is completely keyboard-based (hence the name), maximizes all windows by default (no tiling / overlapping), and has no window decorations. I don't even use the mouse except when using the browser or selecting text for cut/paste. (And if I had my way, I'd write a keyboard-only graphical browser that didn't depend on the mouse. I'd use Elinks instead, except that viewing images in a text terminal is rather a hassle, and there *is* a place for graphics when you need to present non-textual information -- I just don't think it's necessary when I'm dealing mostly with text anyway.) I experimented with various ratpoison setups, and found that the most comfortable way was to increase my terminal font size so that it's approximately 80 columns wide (70's style ftw :-P), and however tall it is to fill the screen. I found that I'm most productive this way -- thanks to Vim's split-screen features and bash's backgrounding features, I find that I can do most of my work in a single terminal or two, and another background window for the browser. Since I don't even need to move my right hand to/from the mouse, I can get things done *fast* without needing a 6GHz CPU with 16GB of RAM -- a Pentium would suffice if I hadn't needed to work with CPU-intensive processes like povray renders or brute-force state space search algorithms. :) OTOH, I find that my productivity drops dramatically when I'm confronted with a GUI. I honestly cannot stand working on Windows because of this. *Everything* depends on the mouse and traversing endless layers of nested menus just to do something simple, and almost nothing is scriptable unless specifically designed for it (which usually suffers from many limitations in how you can use it between different applications). Give me the Unix command-line any day, thank you very much. So yes, I'm truly a relic from the 70's. ;-) T
I think you might enjoy https://github.com/conformal/xombrero/

Snapshots here: https://opensource.conformal.com/snapshots/xombrero/ although they're a month or so old and I had to edit the URL to get there... They clearly don't want any noobs, haha.

I just built from source and it works very nicely, very minimalist.
Aug 20 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Aug 21, 2013 at 01:33:34AM +0200, John Colvin wrote:
 On Tuesday, 20 August 2013 at 19:41:13 UTC, H. S. Teoh wrote:
On Tue, Aug 20, 2013 at 08:57:35PM +0200, pjmp wrote:
[...]
Programming like the 70's, yo!  :)
[...] LOL... to be honest, my PC "desktop" is more like a glorified terminal shell than anything else, in spite of the fact that I'm running under X11. My window manager is ratpoison, which is completely keyboard-based (hence the name), maximizes all windows by default (no tiling / overlapping), and has no window decorations. I don't even use the mouse except when using the browser or selecting text for cut/paste. (And if I had my way, I'd write a keyboard-only graphical browser that didn't depend on the mouse. I'd use Elinks instead, except that viewing images in a text terminal is rather a hassle, and there *is* a place for graphics when you need to present non-textual information -- I just don't think it's necessary when I'm dealing mostly with text anyway.)
[...]
 I think you might enjoy https://github.com/conformal/xombrero/
 
 snapshots here: https://opensource.conformal.com/snapshots/xombrero/
 although they're a month or so old and I had to edit the URL to get
 there... They clearly don't want any noobs haha
 
 I just built from source and it works very nicely, very minimalist.
Interesting, I'll have to take a look at this sometime when I have some free time. Thanks! In any case, I hope the keyboard interface isn't just something tacked on, as it's not as simple as it may look to design a keyboard interface that's efficient on a webpage, where you need to balance between navigating logical structure to find links, and providing visual navigation keys (ala Opera <=12). This is one aspect where I find Elinks lacking -- in a multicolumn layout the keys for navigating links leave a lot to be desired. T -- What doesn't kill me makes me stranger.
Aug 20 2013
parent "Ramon" <spam thanks.no> writes:
On Tuesday, 20 August 2013 at 23:44:37 UTC, H. S. Teoh wrote:
 On Wed, Aug 21, 2013 at 01:33:34AM +0200, John Colvin wrote:
 I think you might enjoy https://github.com/conformal/xombrero/
 
 snapshots here: 
 https://opensource.conformal.com/snapshots/xombrero/
 although they're a month or so old and I had to edit the URL 
 to get
 there... They clearly don't want any noobs haha
 
 I just built from source and it works very nicely, very 
 minimalist.
Interesting, I'll have to take a look at this sometime when I have some free time. Thanks! In any case, I hope the keyboard interface isn't just something tacked on, as it's not as simple as it may look to design a keyboard interface that's efficient on a webpage, where you need to balance between navigating logical structure to find links, and providing visual navigation keys (ala Opera <=12). This is one aspect where I find Elinks lacking -- in a multicolumn layout the keys for navigating links leave a lot to be desired.
Nope. Don't worry. xombrero (formerly named "xxxterm") is seriously keyboard driven. I happen to use it because I don't want the whole web bloat. It also offers quite interesting control through a simple keyboard interface. From what I have read from you so far, you're gonna love it.
Aug 20 2013
prev sibling next sibling parent "Tyler Jameson Little" <beatgammit gmail.com> writes:
On Tuesday, 20 August 2013 at 19:41:13 UTC, H. S. Teoh wrote:
 On Tue, Aug 20, 2013 at 08:57:35PM +0200, pjmp wrote:
 On Tuesday, 20 August 2013 at 14:35:19 UTC, H. S. Teoh wrote:
On Tue, Aug 20, 2013 at 11:19:27AM +0200, Chris wrote:
[...]
One thing that is usually not mentioned in articles about D 
is that
you don't need an IDE to develop in D. This was, if I 
remember it
correctly, one of the design goals.
Was it a design goal? If so, kudos to Walter. :) Because one of my criteria for a better programming language when I decided that I was fed up with C++ and needed something better, was that it must not have undue reliance on an IDE or some other external tool to be usable. Thus, Java was disqualified (too much boilerplate that can't be adequately handled without an IDE -- of course, there were other factors, but this was a big one). It must be usable with just a text editor and a compiler. D fit that criterion rather nicely. :) T
Programming like the 70's, yo! :)
[...] LOL... to be honest, my PC "desktop" is more like a glorified terminal shell than anything else, in spite of the fact that I'm running under X11. My window manager is ratpoison, which is completely keyboard-based (hence the name), maximizes all windows by default (no tiling / overlapping), and has no window decorations. I don't even use the mouse except when using the browser or selecting text for cut/paste. (And if I had my way, I'd write a keyboard-only graphical browser that didn't depend on the mouse. I'd use Elinks instead, except that viewing images in a text terminal is rather a hassle, and there *is* a place for graphics when you need to present non-textual information -- I just don't think it's necessary when I'm dealing mostly with text anyway.) I experimented with various ratpoison setups, and found that the most comfortable way was to increase my terminal font size so that it's approximately 80 columns wide (70's style ftw :-P), and however tall it is to fill the screen. I found that I'm most productive this way -- thanks to Vim's split-screen features and bash's backgrounding features, I find that I can do most of my work in a single terminal or two, and another background window for the browser. Since I don't even need to move my right hand to/from the mouse, I can get things done *fast* without needing a 6GHz CPU with 16GB of RAM -- a Pentium would suffice if I hadn't needed to work with CPU-intensive processes like povray renders or brute-force state space search algorithms. :) OTOH, I find that my productivity drops dramatically when I'm confronted with a GUI. I honestly cannot stand working on Windows because of this. *Everything* depends on the mouse and traversing endless layers of nested menus just to do something simple, and almost nothing is scriptable unless specifically designed for it (which usually suffers from many limitations in how you can use it between different applications). Give me the Unix command-line any day, thank you very much. So yes, I'm truly a relic from the 70's. ;-) T
Haha, I'm the same, except with XMonad. I use tmux to separate projects, and I have xmobar running so I can watch CPU and RAM explode when I run DMD =D I've tried using Java, but it just doesn't work in this config. Eclipse happens to suck in my WM and javac is terrible to use on the CLI for big projects (and I refuse to use Ant...). I stick to simple languages that don't need an IDE: D, Go, C, Python, Javascript, Rust, and sometimes Haskell. I would say I'm a relic from the 70's, but I wasn't born until the 80s...
Aug 21 2013
prev sibling parent "Wyatt" <wyatt.epp gmail.com> writes:
On Tuesday, 20 August 2013 at 19:41:13 UTC, H. S. Teoh wrote:
 I'd use Elinks instead, except that viewing images
 in a text terminal is rather a hassle
I don't really want to meddle too much in domestic matters involving a guy and his lawfully wedded config (:P), but have you tried w3m? When I had images enabled before, it just sort of...rendered them. Like over top of the xterm. It was pretty cool; I think it even flowed text around them properly. -Wyatt
Aug 21 2013
prev sibling next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 8/19/13, Ramon <spam thanks.no> wrote:
    Plus UTF, too. Even UTF-8, 16 (a very practical compromise in
 my minds eye because with 16 bits one can deal with *every*
 language while still not wasting memory).
UTF-8 can deal with every language as well. But perhaps you meant something else here. Anyway welcome aboard!
Aug 20 2013
parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Tuesday, 20 August 2013 at 12:59:13 UTC, Andrej Mitrovic wrote:
 On 8/19/13, Ramon <spam thanks.no> wrote:
    Plus UTF, too. Even UTF-8, 16 (a very practical compromise 
 in
 my minds eye because with 16 bits one can deal with *every*
 language while still not wasting memory).
UTF-8 can deal with every language as well. But perhaps you meant something else here. Anyway welcome aboard!
I think he meant that every "modern spoken/written" language fits in the "Basic Multilingual Plane", for which each codepoint fits in a single UTF-16 code unit (2 bytes). Multiple-code-unit encodings in UTF-16 are *very* rare. On the other hand, if you encode Japanese into UTF-8, then you'll spend *3* bytes per codepoint, ergo, "wasted memory". Ramon: I think that is a fallacy: http://en.wikipedia.org/wiki/UTF-8#Compared_to_UTF-16 Real-world usage is *dominated* by ASCII chars. Unless you have a very specific use case, UTF-8 will occupy *less* room than UTF-16, even if it contains a lot of foreign characters. Furthermore, UTF-8 is pretty much the "standard". If you keep UTF-16, you will probably end up regularly transcoding to UTF-8 to interface with char* functions. Arguably, the "only" (IMO) use case for UTF-16 is interfacing with Windows' UCS-2 API. But even then, there'll still be some overhead, to make sure you don't have any dual-encoded code points in your streams.
Aug 20 2013
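As a rough illustration of the size trade-off discussed above, the following D sketch (illustrative only, not code from the thread) compares code-unit counts for ASCII and for BMP Japanese text; the byte counts in the comments apply only to the sample strings shown.

import std.stdio;

void main()
{
    // ASCII text: 1 byte per character in UTF-8, 2 bytes in UTF-16.
    string  a8  = "hello world";   // UTF-8: code units are 1-byte chars
    wstring a16 = "hello world";   // UTF-16: code units are 2-byte wchars
    writeln(a8.length * char.sizeof, " vs ", a16.length * wchar.sizeof);  // 11 vs 22 bytes

    // Japanese text (all in the Basic Multilingual Plane):
    // 3 bytes per code point in UTF-8, 2 bytes in UTF-16.
    string  j8  = "こんにちは";
    wstring j16 = "こんにちは";
    writeln(j8.length * char.sizeof, " vs ", j16.length * wchar.sizeof);  // 15 vs 10 bytes
}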
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Tuesday, August 20, 2013 14:43:09 H. S. Teoh wrote:
 And now I know who to blame^W I mean, praise, for the names of .front
 and .back:
Well, those _are_ the names that the STL uses for the same thing on containers. As such, they're likely the first names that I would have come up with. - Jonathan M Davis
Aug 20 2013
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Aug 22, 2013 at 07:16:09PM +0200, John Colvin wrote:
 On Thursday, 22 August 2013 at 16:46:46 UTC, H. S. Teoh wrote:
On Thu, Aug 22, 2013 at 05:50:49PM +0200, John Colvin wrote:
[...]
If I was managing a D based team, I would definitely make use of
@safe/@system for code reviews. Any commit that touches @system code*
would have to go through an extra stage or something to that effect.
Are you sure about that?

import std.stdio;
void main() @safe {
    writeln("abc");
}

DMD says:

/tmp/test.d(3): Error: safe function 'D main' cannot call system function 'std.stdio.writeln!(string).writeln'

SafeD is a nice concept, I agree, but we have a ways to go before it's usable.
[...]
 Fair point. Why is it that writeln can't be @trusted?
On Thu, Aug 22, 2013 at 07:16:48PM +0200, John Colvin wrote: [...]
 In the case of a string, that is.
That's a very good question. :) As an experiment, I just tried putting @safe on std.stdio.File.writeln, which led to needing @safe on write(), then lockingTextWriter, and ultimately to std.range.put. Now AFAIK, the compiler should be inferring attributes like @safe for std.range.put if it is actually safe, but I didn't look deeper for the underlying cause. In any case, if this isn't already in bugzilla it should be. This isn't the only instance of issues with SafeD, though. Currently, there are many things that *should* be safe, but aren't. We could, in theory, just slap @trusted on them and call it a day, but I'd much rather we be careful with that and only use @trusted where we can actually prove the code's trustworthiness (i.e., not in template functions that call an arbitrary type's popFront method, which, in theory, could do *anything*). T -- There are three kinds of people in the world: those who can count, and those who can't.
Aug 22 2013
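For readers following the @safe discussion, here is a minimal sketch of the escape hatch being talked about (my own illustration, not code from the thread; safePuts is a made-up name): @safe code may not call a @system function directly, but it may call a @trusted wrapper whose safety a human reviewer vouches for.

import core.stdc.stdio : puts;
import std.string : toStringz;

// Hypothetical wrapper: the reviewer, not the compiler, guarantees its safety.
void safePuts(const(char)[] s) @trusted
{
    puts(s.toStringz);  // @system C call, manually checked to be used correctly
}

void main() @safe
{
    safePuts("hello");           // ok: @safe code may call @trusted functions
    // puts("hello".toStringz);  // rejected: @safe code may not call the @system puts directly
}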
prev sibling next sibling parent reply Gour <gour atmarama.net> writes:
On Mon, 19 Aug 2013 22:18:04 +0200
"Ramon" <spam thanks.no> wrote:

 Sorry, this is a long and big post. But then, so too is my way
 that led me here; long, big, troublesome. And I thought that my
 (probably not everyday) set of needs and experiences might be
 interesting or useful for some others, too.
Thank you very much for this post. I had been considering using D for quite some time for a multi-platform GUI project, but was not satisfied with the state of its GUI bindings (only gtkd, although someone was working on wx bindings, but, afaik, nothing happened) as well as the non-stability of the language itself. That led me to research and try some other languages, starting with (even Cobra). On the other end of the spectrum I've tried some obscure ones like Nimrod, and finally considered Ada as the most robust/safe option with decent options for GUI (GTK & Qt). Your post and another thread, 'DQuick a GUI Library (prototype)', make me optimistic that it would be possible to use D as the 'general programming language' suitable for writing GUI apps as well. Sincerely, Gour -- A person who is not disturbed by the incessant flow of desires - that enter like rivers into the ocean, which is ever being filled but is always still - can alone achieve peace, and not the man who strives to satisfy such desires. http://www.atmarama.net | Hlapicina (Croatia) | GPG: 52B5C810
Aug 23 2013
parent reply "Ramon" <spam thanks.no> writes:
On Friday, 23 August 2013 at 07:15:49 UTC, Gour wrote:
 On Mon, 19 Aug 2013 22:18:04 +0200
 "Ramon" <spam thanks.no> wrote:

 Sorry, this is a long and big post. But then, so too is my way 
 that led me here; long, big, troublesome. And I thought that 
 my (probably not everyday) set of needs and experiences might 
 be interesting or useful for some others, too.
Thank you very much for this post. I had been considering using D for quite some time for a multi-platform GUI project, but was not satisfied with the state of its GUI bindings (only gtkd, although someone was working on wx bindings, but, afaik, nothing happened) as well as the non-stability of the language itself. That led me to research and try some other languages, starting with (even Cobra). On the other end of the spectrum I've tried some obscure ones like Nimrod, and finally considered Ada as the most robust/safe option with decent options for GUI (GTK & Qt). Your post and another thread, 'DQuick a GUI Library (prototype)', make me optimistic that it would be possible to use D as the 'general programming language' suitable for writing GUI apps as well.
Now, careful, Gour. What I wrote was written with close to zero D experience and largely based on the spec, gut feeling (W. Bright has been implementing languages for 2+ decades and says outright that he approached it from a pragmatic view, which in my book counts as a big +) and some logical verification. And it was said by someone who wants at least a large part of Eiffel's goodness in a more C/C++-like way, look and feel, and practical usability. So, this might be pretty far away from what you consider reasonable, important, etc. Yes, I think that D lends itself very well to GUI programming and, more importantly (to me), it's one of the *very* few languages in which a useful, professional and soundly designed GUI lib could be implemented with adequate and reasonable effort. A warning though (and one I tried to get written in red permanent-marker-like letters in my own head) when studying D somewhat more (along the D book): Don't underestimate D! One might be led to look at it as some kind of "easier C++ and better done"; it is not. Or, more correctly, it is way more than that and offers a lot of freedom paradigm-wise. If GUI is very important to you it might also be useful to look at a small GUI (like Lua's IUP) and tinker along the lines of how this would, could, and should be done in D, and at how it was actually done e.g. with the gtk binding. I hope you'll enjoy D as much as I'm beginning to do ;)
Aug 23 2013
parent Gour <gour atmarama.net> writes:
On Fri, 23 Aug 2013 15:35:19 +0200
"Ramon" <spam thanks.no> wrote:

 If GUI is very important to you it might also be useful to look
 at a small GUI (like Lua's IUP) and tinker along the lines of how
 this would, could, and should be done in D and at how it was
 actually done e.g. with the gtk binding.
I explored that path - see e.g.: http://article.gmane.org/gmane.comp.lib.iup.user/368 so it's a no-go for a multi-platform app with required i18n support.
 I hope you'll enjoy D as much as I'm beginning to do ;)
Let's see... Sincerely, Gour -- For him who has conquered the mind, the mind is the best of friends; but for one who has failed to do so, his mind will remain the greatest enemy. http://www.atmarama.net | Hlapicina (Croatia) | GPG: 52B5C810
Aug 23 2013
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Aug 23, 2013 at 08:33:55AM +0200, PauloPinto wrote:
 On Thursday, 22 August 2013 at 23:59:59 UTC, H. S. Teoh wrote:
On Thu, Aug 22, 2013 at 10:10:36PM +0200, Ramon wrote:
[...]
Probably making myself new enemies I dare to say that gui, colourful
and generally graphics is the area of lowest quality code.
All areas are bad, given the way software projects are managed. The consulting projects I work on, are for Fortune 500 companies, always with at least three development sites and some extent of off-shoring work. GUI, embedded, server, database, it doesn't matter. All code is crap given the amount of time, money and developer quality assigned to the projects.
You're right. All (enterprise) code is bad, 'cos the deadlines force you to use a hack solution and do a slipshod job, because otherwise you'd be fired for continually failing to meet the deadline. And it's a vicious cycle. The first wave of programmers are forced to write bad code because of the unreasonable timeline, then the unfortunates who inherit that code are put on the same unreasonable timeline and now they have no time to understand the already-badly-written code, and so they can only write worse code on top of that. A few more waves of programmers after that, and the code is in such a sorry state that nobody even cares anymore, 'cos any effort at improvement is futile. Unless you're one of the brave ones who dare to junk the whole thing (or more realistically, a particularly bad module) and rewrite it from scratch. Of course, the new, better code only remains good for so long, before it starts to deteriorate too. But IME, the worst offender is still the GUI-related component. Why this is so, I can't really say. But there's a definite pattern that anything to do with the GUI component, or anything to do with JavaScript, tends to devolve into a horrid mess faster than, say, database-related backend code.
 Usually the top developers in the teams try to save the code, but
 there is only so much one can do, when the ratio between both classes
 of developers is so big, as a way to make the projects profitable.
 
 So the few heroes that at the beginning of each project try to fix the
 situation, eventually give up around the middle of the project.
Yeah, and it doesn't help when upper management shuffles people around with no regard as to which projects they are most familiar with (and therefore most productive in). We're just expendable pawns on their chessboard, and if they see fit to sacrifice us to a sinking project for the sake of winning one last sale before they ditch the project outright, who are we to object? Or, you get the reputation that you're one of the exceptional coders, and suddenly there's the expectation that they can dump any project on you, no matter how badly written, and you'll be able to fix it up in 2 days and deliver a shining, working product in 3. Which, of course, necessitates last-ditch hacks and workarounds and lots of untested code, 'cos there's no way anyone can physically accomplish such feats in the amount of time given.
 The customers don't care as long as the software works as intended.
Yeah, and that's where it really sucks. People have come to expect that software needs to reboot every other day, and that it's "normal" for a large GUI app to crash every now and then. I wouldn't sit in a car that randomly fails in the middle of the road every other day, much less buy one, yet people would shell out gobs of cash for badly-written software. I mean, yeah they will complain when it does crash, but they'll still willingly shell out more money to pay for fixing what shouldn't have happened in the first place. [...]
LOL... totally sums up my sentiments w.r.t. GUI-dependent apps. :)

I saw through this façade decades ago when Windows 3.1 first came
out, and I've hated GUI-based OSes ever since. I stuck to DOS as long
as I could through win95 and win98, and then I learned about Linux
and I jumped ship and never looked back. But X11 isn't that much
better...  there are some pretty bloated X11 apps that crash twice a
day, too.
Funny, I have a different experience. Before replacing my ZX Spectrum with a PC, I already knew the Amiga and Atari ST systems, and the IDEs on those environments as well. So I always favored GUI environments over the CLI. For me, personally, the CLI is good when doing system administration, or programming-related tasks that can benefit from the usual set of tricks with commands and pipes. For everything else, there's nothing like keyboard+mouse and a nice GUI environment. Personal opinion, to each his own.
[...] Yes, to each his own. :) I still heavily prefer CLI-based apps, and I still think many tasks aren't *inherently* graphical and therefore don't need to be handled in a graphical way. There are tasks for which GUIs are more appropriate, of course -- image/video editing, data visualization, etc. But for me, it's CLI by default, and GUI only when necessary, whereas for most people, it's the other way round. *shrug* On Fri, Aug 23, 2013 at 12:34:07PM +0200, Chris wrote:
 On Thursday, 22 August 2013 at 20:10:37 UTC, Ramon wrote:
[...]
 It's the "bolt-on" mentality that ruins things, which is partly due
 to deadlines. As one guy once said to me (after I had complained
 about a quick and dirty implementation) "We have the choice between
 doing it right and doing it right NOW!" Ain't no more to say.
Yeah, it's a number of factors that add up essentially to "we don't have enough time to write this ourselves, but library X already kinda does what we want, so let's use it! Besides, it looks cooler, so the customer will like it better! Never mind the fact that it's not really compatible with the other libraries we're currently using, but who cares? As long as the customer gets to see that we're on top of the latest hype, they'll be more willing to forgive us for inherent bugs we don't know how to fix. We can always just write workarounds to hide the problem anyway -- it's faster than trying to fix the root cause. Just stick those @'s in front of every line in the PHP code, and the users won't even see the errors! We'll just insert a while(true) loop in this thread somewhere so it looks like the software is doing something useful afterwards, so they won't even know it crashed!" [...]
 And last but not least, a programmer can work for hours on end
 implementing a clever algorithm, using unit tests, component
 programming etc etc. Nobody will ever notice. If users see a button
 that when they press it the screen says "Hello User!", they are
 forever happy. What goes on under the hood is "boys and their toys".
Yeah, doing it "right" is rarely ever appreciated. I've had people tell me, why do it the right way? I've already written it the wrong way and it *appears* to work and nobody can tell the difference anyway, so who cares? Only once in a rare while, doing things right actually has visible effects... like that one time when I rewrote a shell script that does data analysis (a *shell script* that does data analysis!) in Perl, resulting in a performance improvement from 2+ days to 2 minutes. (It wasn't the language per se that made the huge difference, though it helped -- it was the fact that the shell script used grep and awk in a way that resulting in an O(n^2) algorithm, whereas the Perl script uses an O(n) algorithm.) But such occasions are rare. T -- The volume of a pizza of thickness a and radius z can be described by the following formula: pi zz a. -- Wouter Verhelst
Aug 23 2013
prev sibling next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Ramon:

 One obious (or seemingly obvious) solution was Ada. Well, no, 
 it wasn't. Maybe, even probably, if I had to develop low level 
 stuff for embedded stuff but not for a large application. And, 
 that was a killer for me, Ada does not really support easily 
 resizable arrays. To make things worse, while there nowadays is 
 Gnat, a nice modern IDE, there is a major lack of libraries.

 Falling over the famous Ariane 5 article
The Ariane 5 failure shows that Ada programs are not perfect/infallible. But that's true for most languages :-) Even though Ada has several problems and flaws, it still has some advantages over D. This is one of the hundreds of RosettaCode tasks written in Ada: http://rosettacode.org/wiki/Universal_Turing_machine#Ada If you compare that Ada version with my laboriously written D entry (and you can perform a similar comparison with several other tasks on that site) you see how the D entry replaces tons of tests done statically by the Ada program with run-time tests, like:

this(const ref TuringMachine tm_) {
    immutable errMsg = "Invalid input.";
    enforce(!tm_.runningStates.empty, errMsg);
    enforce(!tm_.haltStates.empty, errMsg);
    enforce(!tm_.symbols.empty, errMsg);
    enforce(tm_.rules.length, errMsg);
    enforce(tm_.runningStates.canFind(tm_.initialState), errMsg);
    enforce(tm_.symbols.canFind(tm_.blank), errMsg);
    const allStates = tm_.runningStates ~ tm_.haltStates;

    foreach (const s; tm_.rules.keys.to!(dchar[])().sort())
        enforce(tm_.runningStates.canFind(s), errMsg);

    foreach (const aa; tm_.rules.byValue)
        foreach (/*const*/ s, const rule; aa) {
            enforce(tm_.symbols.canFind(s), errMsg);
            enforce(tm_.symbols.canFind(rule.toWrite), errMsg);
            enforce(allStates.canFind(rule.nextState), errMsg);
        }

Although the tests done by the Ada entry are semantically simple, they are numerous and good, and I have used a lot of energy to write the most statically safe D entry. With probably even more work you could make the D entry a bit more statically safe (eventually you could reach the level of the Ada code), but the amount of work and code becomes excessive, and the resulting D code becomes unnatural and rather unidiomatic. On this point Ada is still a winner :-) Bye, bearophile
Aug 25 2013
next sibling parent reply "Paulo Pinto" <pjmp progtools.org> writes:
On Sunday, 25 August 2013 at 15:06:28 UTC, bearophile wrote:
 Ramon:

 One obious (or seemingly obvious) solution was Ada. Well, no, 
 it wasn't. Maybe, even probably, if I had to develop low level 
 stuff for embedded stuff but not for a large application. And, 
 that was a killer for me, Ada does not really support easily 
 resizable arrays. To make things worse, while there nowadays 
 is Gnat, a nice modern IDE, there is a major lack of libraries.

 Falling over the famous Ariane 5 article
The Ariane 5 failure shows that Ada programs are not perfect/infallible. But that's true for most languages :-)
Ariane 5 just showed what happens when engineers reuse code without proper testing. It could happen in any language. -- Paulo
Aug 25 2013
parent reply "Ramon" <spam thanks.no> writes:
Well, I had good reason not to mention Ariane 5. Looking at that
particular problem, D would have helped, too, and roughly in the
same way as Eiffel, that is, by doing some debug runs with the
current (Ariane 5) values; then DbC could have helped spot the
problem.

I did, btw, not at all intend to bash Ada. From what I can see,
Ada is well alive and has considerably more users than Eiffel
and, at least for the time being, D. Looking at the type of user,
usually not newbies but experienced software engineers, tells me
that Ada is alive and well for a reason.

As for language comparisons or shoot outs, I don't care that
much. I tend to look at whether a language has a sound
implementation of some of the major concepts and paradigms,
whether it's consistent(ly implemented) and whether it has
dropped out of academia or rather had an evolutionary birth,
preferably with lots of experience behind it.

And yes, one point I keep in mind is what (in my mind's eye) is
the point to be learned from Ariane 5: humans aren't good at
micro-bookkeeping large numbers of details - computers are.
Aug 25 2013
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Ramon:

 As for language comparisons or shoot outs, I don't care that
 much.
In your first post of this thread you have listed some things you like and some things you don't like, organized by points (Arrays, Strings, DBC, modern concurrency, GUI, "defer mechanism", Genericity), and you have even assigned scores. So while you have not written a direct language comparison, you have analysed specific points. In my answer I've given an example that shows why Ada still has something good to offer that's lacking in D. Language comparisons can be sterile, or they can be a chance to learn. Bye, bearophile
Aug 25 2013
parent "Ramon" <spam thanks.no> writes:
On Sunday, 25 August 2013 at 23:00:21 UTC, bearophile wrote:
 Ramon:

 As for language comparisons or shoot outs, I don't care that
 much.
In your first post of this thread you have listed some things you like and some things you don't like, organized by points (Arrays, Strings, DBC, modern concurrency, GUI, "defer mechanism", Genericity), and you have even assigned scores. So while you have not written a direct language comparison, you have analysed specific points. In my answer I've given an example that shows why Ada still has something good to offer that's lacking in D. Language comparisons can be sterile, or they can be a chance to learn.
I think we have a misunderstanding here. I'm *not* against Ada and I think that, of course, comparing languages to a degree is a good thing and sometimes a necessity. I just don't think that SLOCs or "language A does it in x ms while language B needs y ms" tell me a lot about a language. Referring to Walter Bright in part, I see that the major problem with server software usually is *not* performance but reliability and safety. Furthermore, the game is usually influenced by quite different factors, such as the concurrency model chosen. You might well end up having some Lua (interpreted!) server beating the sh*t out of Apache (C/C++) simply because the former uses, say, libevent-based AIO while the latter uses processes. *That* actually was a major criterion for me (I happen to work a lot in the server world): I wanted a language that offers a sound and elegant "multitask" implementation (à la Ada) as well as *easy* and comfortable access to AIO (which e.g. in Eiffel carries a considerable performance penalty by using the agent model). When I discovered vibe.d I smiled and thought "Of course! It comes almost naturally for a D guy to implement that". What a nice proof of D's capabilities.
Aug 25 2013
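For context on the vibe.d remark, this is roughly the hello-world example from vibe.d's own documentation at the time (reproduced from memory, so treat the details as approximate; it assumes the standard dub setup where vibe.d supplies main and the event loop): request handlers run in fibers on top of asynchronous I/O, yet read like ordinary blocking code.

import vibe.d;

// vibe.d's default main starts the event loop; this module constructor
// only registers the HTTP listener.
shared static this()
{
    auto settings = new HTTPServerSettings;
    settings.port = 8080;
    listenHTTP(settings, &handleRequest);
}

void handleRequest(HTTPServerRequest req, HTTPServerResponse res)
{
    res.writeBody("Hello from a fiber!");
}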
prev sibling parent reply Gour <gour atmarama.net> writes:
On Sun, 25 Aug 2013 17:06:27 +0200
"bearophile" <bearophileHUGS lycos.com> wrote:

 Probably working even more you can make the D entry a bit more
 statically safe (eventually you could reach the level of Ada code) but
 the amount of work and code becomes excessive, and the resulting D
 code becomes unnatural, and rather not idiomatic.
Still considering whether to focus on Ada or D for my project, I wonder if D can do stuff like this (from the Wikipedia page):

type Day_type   is range 1 .. 31;
type Month_type is range 1 .. 12;
type Year_type  is range 1800 .. 2100;
type Hours is mod 24;
type Weekday is (Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday);

type Date is record
   Day   : Day_type;
   Month : Month_type;
   Year  : Year_type;
end record;

subtype Working_Hours is Hours range 0 .. 12;
subtype Working_Day   is Weekday range Monday .. Friday;

Work_Load : constant array(Working_Day) of Working_Hours :=
   (Friday => 6, Monday => 4, others => 10);

and ensure type-safety for such custom types? Sincerely, Gour -- You have a right to perform your prescribed duty, but you are not entitled to the fruits of action. Never consider yourself the cause of the results of your activities, and never be attached to not doing your duty. http://www.atmarama.net | Hlapicina (Croatia) | GPG: 52B5C810
Aug 29 2013
next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 29 August 2013 at 13:33:52 UTC, Gour wrote:
 On Sun, 25 Aug 2013 17:06:27 +0200
 "bearophile" <bearophileHUGS lycos.com> wrote:

 Probably working even more you can make the D entry a bit more
 statically safe (eventually you could reach the level of Ada 
 code) but
 the amount of work and code becomes excessive, and the 
 resulting D
 code becomes unnatural, and rather not idiomatic.
Still considering whether to focus on Ada or D for my project, I wonder if D can do stuff like (from wikipedia page): type Day_type is range 1 .. 31; type Month_type is range 1 .. 12; type Year_type is range 1800 .. 2100; type Hours is mod 24; type Weekday is (Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday); type Date is record Day : Day_type; Month : Month_type; Year : Year_type; end record; subtype Working_Hours is Hours range 0 .. 12; subtype Working_Day is Weekday range Monday .. Friday; Work_Load: constant array(Working_Day) of Working_Hours := (Friday => 6, Monday => 4, others => 10); and ensure type-safety for such custom types? Sincerely, Gour
just something I whipped up in a few mins:

import std.typecons;
import std.exception;

struct Limited(T, T lower, T upper)
{
    T _t;
    mixin Proxy!_t; //Limited acts as T (almost)
    invariant()
    {
        enforce(_t >= lower && _t <= upper);
    }
    this(T t)
    {
        _t = t;
    }
}

auto limited(T, T lower, T upper)(T init = T.init)
{
    return Limited!(T, lower, upper)(init);
}

unittest
{
    enum l = [-4,9];
    auto a = limited!(int, l[0], l[1])();
    foreach(i; l[0] .. l[1]+1)
    {
        a = i;
    }
    assertThrown({a = -5;}());
    assertThrown({a = 10;}());
}

This could be a lot more generic than it is. Redesigning Restricted to hold a pointer to a function that does the check would be one way.
Aug 29 2013
next sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 29 August 2013 at 14:13:07 UTC, John Colvin wrote:
 This could be a lot more generic than it is. Redesigning 
 Restricted to hold a pointer to a function that does the check 
 would be one way.
Sorry, should read "Limited" not "Restricted" there
Aug 29 2013
prev sibling next sibling parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 29/08/13 16:13, John Colvin wrote:
 struct Limited(T, T lower, T upper)
 {
      T _t;
      mixin Proxy!_t; //Limited acts as T (almost)
      invariant()
      {
          enforce(_t >= lower && _t <= upper);
      }
      this(T t)
      {
          _t = t;
      }
 }
Is the invariant() not going to be stripped out at compile time if you use -release ?
Aug 29 2013
next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Joseph Rushton Wakeling:

 Is the invariant() not going to be stripped out at compile time 
 if you use -release ?
Right. assert is enough there. Use enforce() only in special cases, when you need it. Better to minimize the usage of enforce() in library code that has to be called many times. Bye, bearophile
Aug 29 2013
parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 29 August 2013 at 14:37:10 UTC, bearophile wrote:
 Joseph Rushton Wakeling:

 Is the invariant() not going to be stripped out at compile 
 time if you use -release ?
Right. assert is enough there. Use enforce() only in special cases, when you need it. Better to minimize the usage of enforce() in library code that has to be called many times. Bye, bearophile
fair point. I had forgotten that invariant would be stripped out anyway.
Aug 29 2013
prev sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 29 August 2013 at 14:34:37 UTC, Joseph Rushton 
Wakeling wrote:
 On 29/08/13 16:13, John Colvin wrote:
 struct Limited(T, T lower, T upper)
 {
     T _t;
     mixin Proxy!_t; //Limited acts as T (almost)
     invariant()
     {
         enforce(_t >= lower && _t <= upper);
     }
     this(T t)
     {
         _t = t;
     }
 }
Is the invariant() not going to be stripped out at compile time if you use -release ?
sadly, yes. We need a release version of them, just like we have enforce and assert. Unfortunately in this case it won't be a library solution and will need compiler support.
Aug 29 2013
parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 29/08/13 16:41, John Colvin wrote:
 sadly, yes. We need a release version of them, just like we have enforce and
 assert. Unfortunately in this case it won't be a library solution and will need
 compiler support.
You missed my recent thread here, then, and the responses ... :-) I was going to add earlier: you could probably handle this with a rewrite of what Proxy does, but adding the constraint check inside the opDispatch code. Still, I think Gour has a point about Ada's attractiveness if those kinds of value safety checks are a first-class part of the language.
Aug 29 2013
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 29 August 2013 at 14:50:58 UTC, Joseph Rushton 
Wakeling wrote:
 On 29/08/13 16:41, John Colvin wrote:
 sadly, yes. We need a release version of them, just like we 
 have enforce and
 assert. Unfortunately in this case it won't be a library 
 solution and will need
 compiler support.
You missed my recent thread here, then, and the responses ... :-) I was going to add earlier: you could probably handle this with a rewrite of what Proxy does, but adding the constraint check inside the opDispatch code. Still, I think Gour has a point about Ada's attractiveness if those kinds of value safety checks are a first-class part of the language.
opDispatch isn't enough, you need to add to all the operators too. Shouldn't be too hard. I think there's actually quite a lot more D can do in this regard; it's something I've been playing around with for a while. When I have some free time I might look into it again.
Aug 29 2013
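To make that direction concrete, here is a rough sketch (hypothetical code, not John's; Checked and get are made-up names) that moves the range check into the assignment operators themselves and uses enforce, so the check keeps firing even under -release, unlike the invariant() version above.

import std.exception : enforce;

struct Checked(T, T lower, T upper)
{
    private T _t;

    this(T t) { opAssign(t); }

    // Every write goes through here, so the check survives -release.
    void opAssign(T t)
    {
        enforce(t >= lower && t <= upper, "value out of range");
        _t = t;
    }

    void opOpAssign(string op)(T rhs)
    {
        opAssign(mixin("cast(T)(_t " ~ op ~ " rhs)"));
    }

    T get() const { return _t; }
    alias get this;  // reads behave like a plain T
}

unittest
{
    auto h = Checked!(int, 0, 23)(22);
    h += 1;                                        // fine: 23
    bool threw = false;
    try h += 1; catch (Exception e) threw = true;  // 24 is rejected
    assert(threw);
}

Every further operator (opUnary, opBinary, ...) would still need similar forwarding, which is exactly the manual work lamented in the posts that follow.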
parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 29/08/13 17:03, John Colvin wrote:
 opDispatch isn't enough, you need to add to all the operators too. Shouldn't be
 too hard.
Ahh, you mean all the other op*'s? :-) I guess, as you say, it's not hard, but I find it a shame that it seems quite finicky and that there is quite a lot of manual work involved.
 I think there's actually quite a lot more D can do in this regard, it's
 something I've been playing around with for a while. When I have some free time
 I might look in to it again.
That would be great to see, especially if it can be made properly generic -- that is, that the constraints can be arbitrary or even functional (e.g. ensuring initialization of a struct before it is used).
Aug 29 2013
prev sibling parent reply Gour <gour atmarama.net> writes:
On Thu, 29 Aug 2013 16:13:06 +0200
"John Colvin" <john.loughran.colvin gmail.com> wrote:

 just something I whipped up in a few mins:
[...] Thanks. So, it's possible, but (maybe) it's not as elegant. Sincerely, Gour -- A person is said to be established in self-realization and is called a yogī [or mystic] when he is fully satisfied by virtue of acquired knowledge and realization. Such a person is situated in transcendence and is self-controlled. He sees everything - whether it be pebbles, stones or gold - as the same. http://www.atmarama.net | Hlapicina (Croatia) | GPG: 52B5C810
Aug 29 2013
parent "Ramon" <spam thanks.no> writes:
On Thursday, 29 August 2013 at 14:50:18 UTC, Gour wrote:
 On Thu, 29 Aug 2013 16:13:06 +0200
 "John Colvin" <john.loughran.colvin gmail.com> wrote:

 just something I whipped up in a few mins:
[...] Thanks. So, it's possible, but (maybe) it's not as elegant.
Now, let's be fair. While the point you brought up is one that I, too, very much like about Ada, what Ada does is not magic. Agreed, Ada has it wrapped in nice syntactic sugar, but in the end such a subtype is just the basic type with range checking done behind the curtain, while D does it publicly visible and in the open (and, yes, less nicely sugared). In the end it's about the concept, so the relevant question is not "Does D offer the same sugar and wardrobe?" but "Does D offer a way to implement that (important) concept other than (C-like) hand-inserting range-checking code everywhere?". The answer is "yes, it does". A far bigger concern in my mind's eye is D's somewhat unlucky DbC mechanism or, more precisely, the somewhat stepchild treatment of disabling DbC in release code. I strongly feel that something like "@DbC" (and "@noDbC") would be far more satisfying. A+ - R
Aug 29 2013
prev sibling parent reply Piotr Szturmaj <bncrbme jadamspam.pl> writes:
W dniu 29.08.2013 15:33, Gour pisze:
 On Sun, 25 Aug 2013 17:06:27 +0200
 "bearophile" <bearophileHUGS lycos.com> wrote:

 Probably working even more you can make the D entry a bit more
 statically safe (eventually you could reach the level of Ada code) but
 the amount of work and code becomes excessive, and the resulting D
 code becomes unnatural, and rather not idiomatic.
Still considering whether to focus on Ada or D for my project, I wonder if D can do stuff like (from wikipedia page): type Day_type is range 1 .. 31; type Month_type is range 1 .. 12; type Year_type is range 1800 .. 2100; type Hours is mod 24;
These are refinement types (I call them 'views') and I have a half-written DIP for this. However, I doubt that it will be accepted. I would rather enable the 'View pattern' by allowing contracts and invariants in release mode. They could still be prepended with the debug directive to establish the old behaviour:

struct S
{
    debug invariant() { ... }
}

void fn()
debug in { ... }
debug out(result) { ... }
body { ... }
Aug 29 2013
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Piotr Szturmaj:

 These are refinement types (I call them 'views') and I have 
 half-written DIP for this. However, I doubt that it will be 
 accepted.
I'd be quite interested in such a DIP. Even if your DIP is refused, it could still produce several useful consequences. Bye, bearophile
Aug 29 2013
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/29/2013 09:17 PM, bearophile wrote:
 Piotr Szturmaj:

 These are refinement types (I call them 'views') and I have
 half-written DIP for this. However, I doubt that it will be accepted.
I'd be quite interested in such a DIP. Even if your DIP is refused, it could still produce several useful consequences.
Why not build something rather general?

struct Hour{
    int hour;
    IsTrue[0 <= hour && hour <= 23] proof;
}

terminating correct pure nothrow @safe{ // additional attributes
    // allow the compiler to erase proofs at run time in a modular fashion
    // to maintain efficiency.
    alias IsTrue[bool a] = Prop[a === true];

    // (selection of intrinsic facts about built-in operators)
    IsTrue[a && b] conj[bool a,bool b](IsTrue[a] x, IsTrue[b] y);
    IsTrue[a] projA[bool a,bool b](IsTrue[a && b] ab);
    IsTrue[b] projB[bool a,bool b](IsTrue[a && b] ab);
    IsTrue[a <= c] letrans[int a,int b,int c]
        (IsTrue[a <= b] aleb, IsTrue[b <= c] blec);
}

struct WorkingHour{
    int hour;
    IsTrue[0 <= hour && hour <= 12] proof;

    correct // meaning does not throw an error or segfault
    Hour toHour()out(r){assert(r.hour==hour);}body{ // compile-time check
        return Hour(hour,
            // an explicit proof that the hour is actually in range
            conj(projA(proof), letrans(projB(proof), IsTrue[12 <= 23].init))
        );
    }
    alias toHour this;
}

(Maybe there is a better choice for the syntax.) The basic idea is to extend the type system slightly such that the compiler becomes able to type check proofs talking about program behaviour. A flow analysis ensures that proofs are properly updated. It would then become possible to build arbitrary refinement types:

bool isPrime(int x){ return iota(3,x).all!(a=>!!(x%a)); }

struct Prime{
    int prime;
    IsTrue[isPrime(prime)] proof;
}

Prime seven = Prime(7,IsTrue[isPrime(7)].init); // proof by CTFE

assert(isPrime(x));
auto foo = Prime(x,IsTrue[isPrime(x)].init); // proof by runtime check and flow analysis

auto bar = Prime(y,IsTrue[isPrime(y)].init); // error, disabled this

This could also be used without runtime overhead eg. inside a correctness proof for a prime sieve, though this necessitates a slightly larger apparatus than presented here.
Aug 29 2013
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/30/2013 01:35 AM, Timon Gehr wrote:
 bool isPrime(int x){ return iota(3,x).all!(a=>!!(x%a)); }
bool isPrime(int x){ return 1<x && iota(3,x).all!(a=>!!(x%a)); }
Aug 30 2013
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Friday, 30 August 2013 at 08:10:24 UTC, Timon Gehr wrote:
 On 08/30/2013 01:35 AM, Timon Gehr wrote:
 bool isPrime(int x){ return iota(3,x).all!(a=>!!(x%a)); }
bool isPrime(int x){ return 1<x && iota(3,x).all!(a=>!!(x%a)); }
iota(3, to!int(sqrt(x)))
Aug 30 2013
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 08/30/2013 11:14 AM, deadalnix wrote:
 On Friday, 30 August 2013 at 08:10:24 UTC, Timon Gehr wrote:
 On 08/30/2013 01:35 AM, Timon Gehr wrote:
 bool isPrime(int x){ return iota(3,x).all!(a=>!!(x%a)); }
bool isPrime(int x){ return 1<x && iota(3,x).all!(a=>!!(x%a)); }
iota(3, to!int(sqrt(x)))
http://en.wikipedia.org/wiki/AKS_primality_test
Aug 30 2013
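An editorial aside on the snippets above: the isPrime one-liners begin trial division at 3, so an even composite such as 4 is still reported prime, and the suggested square-root bound would also need to be inclusive, otherwise 9 slips through. A plain trial-division sketch (illustrative only) that handles both:

bool isPrime(int x)
{
    if (x < 2) return false;
    for (int d = 2; cast(long)d * d <= x; ++d)  // start at 2, inclusive sqrt bound
        if (x % d == 0) return false;
    return true;
}

unittest
{
    assert(!isPrime(1) && !isPrime(4) && !isPrime(9));
    assert(isPrime(2) && isPrime(3) && isPrime(97));
}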
prev sibling parent "Stephan Schiffels" <stephan_schiffels mac.com> writes:
Nice! I can no longer go through all of the 100+ replies to
this, so sorry if someone else has already suggested this:

You should write this article (tidied up a bit) in a blog or 
somewhere more public on the web! Here in this forum, things are 
not as public as they could be!
But thanks for sharing anyway!

Stephan


On Monday, 19 August 2013 at 20:18:06 UTC, Ramon wrote:

 Sorry, this is a long and big post. But then, so too is my way 
 that led me here; long, big, troublesome. And I thought that my 
 (probably not everyday) set of needs and experiences might be 
 interesting or useful for some others, too.
 And, of course, I confess it, I just feel like throwing a very 
 big "THANK YOU" at D's creators and makers. Thank you!
Aug 25 2013