
digitalmars.D - New slides about Go

reply bearophile <bearophileHUGS lycos.com> writes:
Found through Reddit, talk slides by Rob Pike, "The Expressiveness of Go":
http://go.googlecode.com/hg/doc/ExpressivenessOfGo.pdf

http://www.reddit.com/r/programming/comments/dr6r4/talk_by_rob_pike_the_expressiveness_of_go_pdf/

This time I think I have understood most of the contents of the slides :-)


A few interesting quotations:

From Page 18:

There are pointers but no pointer arithmetic
  - pointers are important to performance, pointer arithmetic not.
  - although it's OK to point inside a struct.
    - important to control layout of memory, avoid allocation
Increment/decrement (p++) are statements, not expressions.
  - no confusion about order of evaluation
Addresses last as long as they are needed.
  - take the address of a local variable, the implementation
    guarantees the memory survives while it's referenced.
No implicit numerical conversions (float to int, etc.).
  - C's "usual arithmetic conversions" are a minefield.


From page 19 and 20:

Constants are "ideal numbers": no size or sign, hence no L
or U or UL endings.

Arithmetic with constants is high precision.  Only when 
assigned to a variable are they rounded or truncated to fit.

A typed element in the expression sets the true type of the constant.


From page 40:

Goroutines have "segmented stacks":
   go f()
starts f() executing concurrently on a new (small) stack.
Stack grows and shrinks as needed.
No programmer concern about stack size.
No possibility for stack overflow.
A couple of instructions of overhead on each function call, a 
huge improvement in simplicity and expressiveness.


From page 46:

The surprises you discover will be pleasant ones.

--------------------

Some comments:

- In my D programs I sometimes use pointers, but pointer arithmetic is indeed
uncommon.
- Turning x++; into a statement seems harsh, but it does solve some problems.
In practice, in my D programs ++ is often used as a statement anyway, to avoid bugs.
- I think that "take the address of a local variable, the implementation
guarantees the memory survives while it's referenced." means that the variable
gets copied to the heap.
- Constant management in Go seems cute.
- Segmented stacks avoid some stack overflows at the price of a small
overhead on each function call.
- The comment from page 46 refers to a language that is orthogonal, and I think
it is probably right. That's one of the main advantages of an orthogonal
design: you are free to create many combinations.

--------------------

On the Go site there is a "playground", similar to what the Ideone and Codepad
sites offer for D2/D1. It contains some small programs; you may modify them
and compile almost arbitrary Go code.


A little Go example in the playground shows closures and a comma/tuple
assignment syntax similar to Python's:


package main

// fib returns a function that returns
// successive Fibonacci numbers.
func fib() func() int {
	a, b := 0, 1
	return func() int {
		a, b = b, a+b
		return b
	}
}

func main() {
	f := fib()
	// Function calls are evaluated left-to-right.
	println(f(), f(), f(), f(), f())
}


Something similar in D2. D lacks a handy unpacking syntax, and I think it
currently doesn't guarantee that function arguments are evaluated left-to-right:


import std.stdio: writeln;

int delegate() fib() {
    int a = 0;
    int b = 1;
    return {
        auto a_old = a;
        a = b;
        b = a_old + b;
        return b;
    };
}

void main() {
    auto f = fib();
    // function calls are not guaranteed to be evaluated left-to-right
    writeln(f(), " ", f(), " ", f(), " ", f(), " ", f());
}



Another example from the Go site, which shows the segmented stacks at work:


// Peano integers are represented by a linked list
// whose nodes contain no data (the nodes are the data).
// See: http://en.wikipedia.org/wiki/Peano_axioms

// This program demonstrates the power of Go's
// segmented stacks when doing massively recursive
// computations.

package main

// Number is a pointer to a Number
type Number *Number

// The arithmetic value of a Number is the count of
// the nodes comprising the list.
// (See the count function below.)

// -------------------------------------
// Peano primitives

func zero() *Number {
	return nil
}

func isZero(x *Number) bool {
	return x == nil
}

func add1(x *Number) *Number {
	e := new(Number)
	*e = x
	return e
}

func sub1(x *Number) *Number {
	return *x
}

func add(x, y *Number) *Number {
	if isZero(y) {
		return x
	}
	return add(add1(x), sub1(y))
}

func mul(x, y *Number) *Number {
	if isZero(x) || isZero(y) {
		return zero()
	}
	return add(mul(x, sub1(y)), x)
}

func fact(n *Number) *Number {
	if isZero(n) {
		return add1(zero())
	}
	return mul(fact(sub1(n)), n)
}

// -------------------------------------
// Helpers to generate/count Peano integers

func gen(n int) *Number {
	if n > 0 {
		return add1(gen(n - 1))
	}
	return zero()
}

func count(x *Number) int {
	if isZero(x) {
		return 0
	}
	return count(sub1(x)) + 1
}

// -------------------------------------
// Print i! for i in [0,9]

func main() {
	for i := 0; i <= 9; i++ {
		f := count(fact(gen(i)))
		println(i, "! =", f)
	}
}




It's easy to translate it to D:

import std.stdio: writeln;

struct Number {
    Number* next;
    this(Number* ptr) { next = ptr; }
}

// -------------------------------------
// Peano primitives

Number* zero() {
    return null;
}

bool isZero(Number* x) {
    return x == null;
}

Number* add1(Number* x) {
    return new Number(x);
}

Number* sub1(Number* x) {
    return x.next;
}

Number* add(Number* x, Number* y) {
    if (isZero(y))
        return x;
    return add(add1(x), sub1(y));
}

Number* mul(Number* x, Number* y) {
    if (isZero(x) || isZero(y))
        return zero();
    return add(mul(x, sub1(y)), x);
}

Number* fact(Number* n) {
    if (isZero(n))
        return add1(zero());
    return mul(fact(sub1(n)), n);
}

// -------------------------------------
// Helpers to generate/count Peano integers

Number* gen(int n) {
    if (n <= 0)
        return zero();
    return add1(gen(n - 1));
}

int count(Number* x) {
    if (isZero(x)) {
        return 0;
    }
    return count(sub1(x)) + 1;
}

// -------------------------------------

void main() {
    foreach (i; 0 .. 11) {
        int f = count(fact(gen(i)));
        writeln(i, "! = ", f);
    }
}


But compiled normally on Windows it leads to a stack overflow; you need to add:
-L/STACK:10000000

Bye,
bearophile
Oct 14 2010
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 From page 40:
 
 Goroutines have "segmented stacks":
    go f()
 starts f() executing concurrently on a new (small) stack.
 Stack grows and shrinks as needed.
 No programmer concern about stack size.
 No possibility for stack overflow.
 A couple of instructions of overhead on each function call, a 
 huge improvement in simplicity and expressiveness.
There's a huge cost to it, however. You can't call C code directly anymore.

Anyhow, this problem simply goes away with 64 bits. You can allocate each thread gigabytes of address space, faulting it in as required, and still be able to have billions of threads.

Segmented stacks would have been a great idea 10 years ago.
Oct 14 2010
next sibling parent "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:i97v5c$d1u$1 digitalmars.com...
 bearophile wrote:
 From page 40:

 Goroutines have "segmented stacks":
    go f()
 starts f() executing concurrently on a new (small) stack.
 Stack grows and shrinks as needed.
 No programmer concern about stack size.
 No possibility for stack overflow.
 A couple of instructions of overhead on each function call, a huge 
 improvement in simplicity and expressiveness.
There's a huge cost to it, however. You can't call C code directly anymore. Anyhow, this problem simply goes away with 64 bits.
Which is incredibly helpful on 32-bit systems.
Oct 14 2010
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter:

 You can't call C code directly anymore.
I don't know/understand why. Probably I just don't know how a segmented stack is implemented/structured. If the stack is growable, I presume there's some free space anyway at the end of it, so you are not forced to test the stack length; you may call a C function and hope it will not blow through the free stack that's left. Probably I am wrong, but I don't know why :-)

Bye,
bearophile
Oct 14 2010
next sibling parent "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Fri, 15 Oct 2010 02:04:59 +0300, bearophile <bearophileHUGS lycos.com>  
wrote:

 Walter:

 You can't call C code directly anymore.
I don't know/understand why. Probably I just don't know how a segmented stack is implemented/structured. If the stack is growable, I presume there's some free space anyway at the end of it. So you are not forced to test for the stack length, so you may call a C function and hope it will not blow the free stack left. Probably I am wrong, but I don't know why :-)
Calling C code should still be possible, but it will involve checking for free space on the stack and preallocating as required. A possible optimization is to use one "C stack" per thread (with the assumption that there's no re-entry into our code from C code).

--
Best regards,
Vladimir                          mailto:vladimir thecybershadow.net
Oct 14 2010
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 Walter:
 
 You can't call C code directly anymore.
I don't know/understand why. Probably I just don't know how a segmented stack is implemented/structured. If the stack is growable, I presume there's some free space anyway at the end of it. So you are not forced to test for the stack length, so you may call a C function and hope it will not blow the free stack left. Probably I am wrong, but I don't know why :-)
Segmented stacks add a test in *every* function's prolog to see if the stack is exhausted, and if so, switch to a new stack. Exactly zero C code does this. So if you're near the end of your stack segment and you call a C function, boom.
Oct 14 2010
parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Fri, 15 Oct 2010 03:41:50 +0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 So if you're near the end of your stack segment, and you call a C  
 function, boom.
I've heard that happens in D, too. You can still call C functions at your peril, and nobody has complained so far.
Oct 14 2010
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Denis Koroskin:

 I've heard that happens in D, too. You can still call C functions at your  
 peril, and no people complained so far.
You have stack overflows with DMD too, but I think in a segmented stack the segments are smaller than an average D stack, so it's more probable to go past one of them (I presume segmented stacks are like a deque data structure: a dynamic array of pointers to fixed-size memory blocks).

Currently the main D compiler has nearly nothing to help against stack overflows: no stack guards, no static tools to compute the max stack used by a function/program, etc. I think LDC has some optional stack guards.

Bye,
bearophile
Oct 14 2010
prev sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Fri, 15 Oct 2010 02:45:33 +0300, Denis Koroskin <2korden gmail.com>  
wrote:

 I've heard that happens in D, too. You can still call C functions at  
 your peril, and no people complained so far.
I believe D (DMD, at least) is in the exact same situation as C as far as the stack goes. There is some code in the garbage collector to get the bounds of each thread's stack, but that's it, I think.

--
Best regards,
Vladimir                          mailto:vladimir thecybershadow.net
Oct 14 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
Vladimir Panteleev wrote:
 On Fri, 15 Oct 2010 02:45:33 +0300, Denis Koroskin <2korden gmail.com> 
 wrote:
 
 I've heard that happens in D, too. You can still call C functions at 
 your peril, and no people complained so far.
I believe D (DMD, at least) is in the exact same situation as C is as far as the stack goes.
The point of a segmented stack is to allocate stack in small bits, meaning you'll be highly likely to run out of stack calling functions that do not check for stack overflow. The usual C method, which is followed by D, is to estimate the max stack used beforehand.
Oct 14 2010
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"bearophile" <bearophileHUGS lycos.com> wrote in message 
news:i97utq$d7e$1 digitalmars.com...
 - In my D programs I sometimes use pointers, but pointer arithmetic is 
 indeed uncommon.
If you're actually doing systems-level or high-performance work, it can be essential in certain cases, depending on how good the optimizer is. Loops like this are fairly typical (using 'for' instead of 'foreach'/'map'/etc for clarity):

T[] myArray = ...;
for(int i=0; i<max; i++)
{
    myArray[i] // <- do something with that
}

If the compiler isn't smart enough to turn that into this:

T[] myArray = ...;
auto end = myArray.ptr + max;
for(auto ptr = myArray.ptr; ptr<end; ptr++)
{
    *ptr // <- do something with that
}

Then you're wasting cycles every iteration (by doing an extra addition and maybe an extra shift or even multiplication, depending on T: ie, (cast(ubyte*)myArray.ptr) + i * T.sizeof). That was a pretty common inner-loop optimization back in my C days.

And keep in mind, of course, real-world examples can be much more complex than that, so even if the compiler can handle trivial cases like this (I have no idea if it can, although using 'foreach' would probably make it easier - in some cases), it might not work for other cases. So unless the optimizer was known to be that good even in complex cases, I wouldn't want to be without pointer arithmetic. It's not needed often, but when it is needed it's indispensable (and still results in much more readable/portable code than delving down to asm).

Plus, I've never once done pointer arithmetic accidentally in D, so I don't see any safety to be gained from not allowing it.
 - Turning x++; into statements seems harsh, but indeed it solves some 
 problems. In practice in my D programs the ++ is often used as a 
 statement, to avoid bugs.
I've long been of the opinion that ++ should just be a statement. All it ever does as an expression, if anything, is obfuscate code. I've never once seen a case where it clarified anything.
 - Segmented stack: allows to avoid some stack overflows at the price of a 
 bit of delay at calling functions.
Seems a bad idea to force the overhead of that, but it should definitely be available as an option.

Contrary to what Walter and Andrei seem to think, 32-bit systems are still very much alive and will be for quite a while longer, especially when you remember that there are more computers out there than just desktops and servers. (Ex: When is a phone ever going to need 64-bit? Eventually maybe, but certainly not anytime soon.)
Oct 14 2010
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Nick Sabalausky wrote:
 "bearophile" <bearophileHUGS lycos.com> wrote in message 
 news:i97utq$d7e$1 digitalmars.com...
 - In my D programs I sometimes use pointers, but pointer arithmetic is 
 indeed uncommon.
If you're actually doing systems-level or high-performance work, it can be essential in certain cases depending on how good the optimizer is.
It's hard to see how to implement, say, a storage allocator with no pointer arithmetic.
Oct 14 2010
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Walter Bright wrote:
 It's hard to see how to implement, say, a storage allocator with no 
 pointer arithmetic.
Here's another one. Try implementing va_arg without pointer arithmetic.
Oct 14 2010
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
Easy, just implement a small assembly function.

Not everything has to be in the language.

"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:i984lr$odj$3 digitalmars.com...
 Walter Bright wrote:
 It's hard to see how to implement, say, a storage allocator with no 
 pointer arithmetic.
Here's another one. Try implementing va_arg without pointer arithmetic.
Oct 14 2010
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Yeah, and I've done that. It doesn't work out as well as you say, nor is it
that easy. Problems:

1. You have to reimplement it for every platform and every memory model.
2. For some systems, like Windows, there are a wide variety of assemblers.
They all use slightly different syntax. Distributing an asm file means an
*unending* stream of complaints from people who don't have an assembler or
have a different one than yours.
3. Getting all the boilerplate segment declarations right is a nuisance.
4. Name mangling.
5. Next your asm code all breaks when you want to recompile your app as a
shared library.
6. Asm files are a nightmare on OSX.

A language should be there to solve problems, not create them :-)

Paulo Pinto wrote:
 Easy, just implement a small assembly funtion.
 
 Not everything has to be in the language.
 
 "Walter Bright" <newshound2 digitalmars.com> wrote in message 
 news:i984lr$odj$3 digitalmars.com...
 Walter Bright wrote:
 It's hard to see how to implement, say, a storage allocator with no 
 pointer arithmetic.
Here's another one. Try implementing va_arg without pointer arithmetic.
Oct 15 2010
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
Still, most modern languages are moving away from inline assembly.

Even Microsoft has dropped inline assembly support for the 64-bit version of
Visual C++, pointing developers to MASM.

People will always complain no matter what. Just use the official assembler
for the target platform.

Personally, the last time I used inline assembly I was still targeting MS-DOS,
a long time ago, and actually it is one of the features I don't like in D.

--
Paulo

"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:i98ub5$2bk7$1 digitalmars.com...
 Yeah, and I've done that. It doesn't work out as well as you say, nor is 
 it that easy. Problems:

 1. You have to reimplement it for every platform and every memory model.
 2. For some systems, like Windows, there are a wide variety of assemblers. 
 They all use slightly different syntax. Distributing an asm file means an 
 *unending* stream of complaints from people who don't have an assembler or 
 have a different one than yours.
 3. Getting all the boilerplate segment declarations right is a nuisance.
 4. Name mangling.
 5. Next your asm code all breaks when you want to recompile your app as a 
 shared library.
 6. Asm files are a nightmare on OSX.

 A language should be there to solve problems, not create them :-)

 Paulo Pinto wrote:
 Easy, just implement a small assembly funtion.

 Not everything has to be in the language.

 "Walter Bright" <newshound2 digitalmars.com> wrote in message 
 news:i984lr$odj$3 digitalmars.com...
 Walter Bright wrote:
 It's hard to see how to implement, say, a storage allocator with no 
 pointer arithmetic.
Here's another one. Try implementing va_arg without pointer arithmetic.
Oct 15 2010
next sibling parent so <so so.do> writes:
Then don't use that feature; what is wrong with having a feature you don't
use?

 Personally the last time I used inline assembly I was still target  
 MS-DOS,
 long time ago and actually
 it is one of the features I don't like in D.

 --
 Paulo

 "Walter Bright" <newshound2 digitalmars.com> wrote in message
 news:i98ub5$2bk7$1 digitalmars.com...
 Yeah, and I've done that. It doesn't work out as well as you say, nor is
 it that easy. Problems:

 1. You have to reimplement it for every platform and every memory model.
 2. For some systems, like Windows, there are a wide variety of  
 assemblers.
 They all use slightly different syntax. Distributing an asm file means  
 an
 *unending* stream of complaints from people who don't have an assembler  
 or
 have a different one than yours.
 3. Getting all the boilerplate segment declarations right is a nuisance.
 4. Name mangling.
 5. Next your asm code all breaks when you want to recompile your app as  
 a
 shared library.
 6. Asm files are a nightmare on OSX.

 A language should be there to solve problems, not create them :-)

 Paulo Pinto wrote:
 Easy, just implement a small assembly funtion.

 Not everything has to be in the language.

 "Walter Bright" <newshound2 digitalmars.com> wrote in message
 news:i984lr$odj$3 digitalmars.com...
 Walter Bright wrote:
 It's hard to see how to implement, say, a storage allocator with no
 pointer arithmetic.
Here's another one. Try implementing va_arg without pointer arithmetic.
-- Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
Oct 15 2010
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Paulo Pinto:

 Still most modern languages are moving away from inline assembly.
Inline assembly is good for learning and teaching assembly programming too :-) Today a good system language needs to be designed to minimize the need for inline asm (see D vector ops), but it's a good thing to have as a fall-back. I'd like the asm expressions & the pragma(allow_inline) of LDC.

Bye,
bearophile
Oct 15 2010
parent "Paulo Pinto" <pjmlp progtools.org> writes:
And to be abused as well.

I still remember having seen a C++ program in the MS-DOS days where the only
C++ features were main() and the other function names. All function bodies
were inline assembly.

The developer had used the C++ compiler as a poor man's assembler.

I would rather not see that type of code in D.

Not to mention that it makes portability even worse: besides having several
#ifdefs for different OSes, you also need them for different processor
architectures.

Does D's inline assembly support the whole x86 instruction set? What
processors besides x86 are supported?

If I have to drop out to a real assembler for certain opcodes, then the gain
of inline assembly is lost anyway.

--
Paulo

"bearophile" <bearophileHUGS lycos.com> wrote in message 
news:i99d48$9mj$1 digitalmars.com...
 Paulo Pinto:

 Still most modern languages are moving away from inline assembly.
Inline assembly is good to learn and teach assembly programming too :-) Today a good system language needs to be designed to minimize the need of inline asm (see D vector ops), but it's a good thing to have as fall-back. I'd like the asm expressions & the pragma(allow_inline), of ldc. Bye, bearophile
Oct 15 2010
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Sounds like violent agreement to me.

Andrei

On 10/15/10 4:17 CDT, Paulo Pinto wrote:
 Still most modern languages are moving away from inline assembly.

 Even Microsoft has dropped inline assembly support for the 64bit version of
 Visual C++, pointing
 developers to MASM.

 People will always complain no matter what. Just use the official assembler
 for the target platform.

 Personally the last time I used inline assembly I was still target MS-DOS,
 long time ago and actually
 it is one of the features I don't like in D.

 --
 Paulo

 "Walter Bright"<newshound2 digitalmars.com>  wrote in message
 news:i98ub5$2bk7$1 digitalmars.com...
 Yeah, and I've done that. It doesn't work out as well as you say, nor is
 it that easy. Problems:

 1. You have to reimplement it for every platform and every memory model.
 2. For some systems, like Windows, there are a wide variety of assemblers.
 They all use slightly different syntax. Distributing an asm file means an
 *unending* stream of complaints from people who don't have an assembler or
 have a different one than yours.
 3. Getting all the boilerplate segment declarations right is a nuisance.
 4. Name mangling.
 5. Next your asm code all breaks when you want to recompile your app as a
 shared library.
 6. Asm files are a nightmare on OSX.

 A language should be there to solve problems, not create them :-)

 Paulo Pinto wrote:
 Easy, just implement a small assembly funtion.

 Not everything has to be in the language.

 "Walter Bright"<newshound2 digitalmars.com>  wrote in message
 news:i984lr$odj$3 digitalmars.com...
 Walter Bright wrote:
 It's hard to see how to implement, say, a storage allocator with no
 pointer arithmetic.
Here's another one. Try implementing va_arg without pointer arithmetic.
Oct 15 2010
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Paulo Pinto wrote:
 Still most modern languages are moving away from inline assembly.
It's a pain to write an inline assembler and figure out how to integrate it in with the rest of the compiler. I can see why compiler writers don't want to do it, and look for reasons not to. Most modern languages do not even generate code - they target the JVM or CLI.
 Even Microsoft has dropped inline assembly support for the 64bit version of 
 Visual C++, pointing
 developers to MASM.
I'd be curious as to their rationale.
 People will always complain no matter what. Just use the official assembler 
 for the target platform.
Microsoft MASM has about 30 different incarnations, all accepting different syntax. It's a *constant* source of grief for customer support.
 Personally the last time I used inline assembly I was still target MS-DOS, 
 long time ago and actually
 it is one of the features I don't like in D.
I'd be forced to write a standalone assembler if D didn't have inline assembler. In any case, inline assembler in D is a substantial productivity booster for me for anything that needs assembler. The inline assembler is also quite ignorable, if you don't like it.
Oct 15 2010
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
Not sure about their rationale, but here is a Visual C++ team blog entry 
about it:

http://blogs.msdn.com/b/vcblog/archive/2007/10/18/new-intrinsic-support-in-visual-studio-2008.aspx


"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:i9a2t3$26pm$1 digitalmars.com...
 Paulo Pinto wrote:
 Still most modern languages are moving away from inline assembly.
It's a pain to write an inline assembler and figure out how to integrate it in with the rest of the compiler. I can see why compiler writers don't want to do it, and look for reasons not to. Most modern languages do not even generate code - they target the JVM or CLI.
 Even Microsoft has dropped inline assembly support for the 64bit version 
 of Visual C++, pointing
 developers to MASM.
I'd be curious as to their rationale.
 People will always complain no matter what. Just use the official 
 assembler for the target platform.
Microsoft MASM has about 30 different incarnations, all accepting different syntax. It's a *constant* source of grief for customer support.
 Personally the last time I used inline assembly I was still target 
 MS-DOS, long time ago and actually
 it is one of the features I don't like in D.
I'd be forced to write a standalone assembler if D didn't have inline assembler. In any case, inline assembler in D is a substantial productivity booster for me for anything that needs assembler. The inline assembler is also quite ignorable, if you don't like it.
Oct 16 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
Paulo Pinto wrote:
 Not sure about their rationale, but here is a Visual C++ team blog entry 
 about it:
 
 http://blogs.msdn.com/b/vcblog/archive/2007/10/18/new-intrinsic-support-in-visual-studio-2008.aspx
Thanks for the link. The user comments on it are less than kind about VC++ dropping inline asm.
Oct 16 2010
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Nick Sabalausky:

 Then you're wasting cycles every iteration (by doing an extra addition and 
 maybe an extra shift or even multiplication depending on T: Ie, 
 (cast(ubyte*)myArray.ptr) + i * T.sizeof). That was a pretty common 
 inner-loop optimization back in my C days.
With D sometimes array-based code is faster than pointer-based. With LDC they are usually equally efficient.
 Plus, I've never once done pointer arithmetic accidentally in D, so I don't 
 see any safety to be gained from not allowing it.
I think they are trying to design a safer language. Pointer arithmetic is well known to be error-prone. (I have never asked to remove pointer arithmetic from D).
 - Turning x++; into statements seems harsh, but indeed it solves some 
 problems. In practice in my D programs the ++ is often used as a 
 statement, to avoid bugs.
I've long been of the opinion that should just be a statement. All it ever does as an expression, if anything, is obfuscate code. I've never once seen a case where it clarified anything.
In some cases it shortens the code a bit, but the price to pay for such shortening is some possible bugs. I think/hope D will make the evaluation order of sub-expressions deterministic, so some expressions that contain ++ and function calls will be defined in D.

Bye,
bearophile
Oct 14 2010
next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, October 14, 2010 16:49:58 bearophile wrote:
 Nick Sabalausky:
 Then you're wasting cycles every iteration (by doing an extra addition
 and maybe an extra shift or even multiplication depending on T: Ie,
 (cast(ubyte*)myArray.ptr) + i * T.sizeof). That was a pretty common
 inner-loop optimization back in my C days.
With D sometimes array-based code is faster than pointer-based. With LDC they are usually equally efficient.
 Plus, I've never once done pointer arithmetic accidentally in D, so I
 don't see any safety to be gained from not allowing it.
I think they are trying to design a safer language. Pointer arithmetic is well known to be error-prone. (I have never asked to remove pointer arithmetic from D).
There's nothing wrong with a language not having pointer arithmetic. It is an error-prone feature (hence why it's banned in SafeD) and many languages don't need it and don't have it. However, it's hard to see how a language can claim to be a systems programming language and not allow pointer arithmetic.

I really need to sit down and take a good look at Go one of these days, but the more I hear about it, the less it looks like a systems programming language. They also clearly have a _very_ different approach from D, and I'd expect that the types of people who like Go wouldn't like D and vice versa. I still need to take a good look at it one of these days though.

- Jonathan M Davis
Oct 14 2010
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 Nick Sabalausky:
 
 Then you're wasting cycles every iteration (by doing an extra addition and 
 maybe an extra shift or even multiplication depending on T: Ie, 
 (cast(ubyte*)myArray.ptr) + i * T.sizeof). That was a pretty common 
 inner-loop optimization back in my C days.
With D sometimes array-based code is faster than pointer-based. With LDC they are usually equally efficient.
??? This makes no sense. The (ptr+i*T.sizeof) is an addressing mode on the x86, and comes at ZERO cost.
 Plus, I've never once done pointer arithmetic accidentally in D, so I don't 
 see any safety to be gained from not allowing it.
I think they are trying to design a safer language. Pointer arithmetic is well known to be error-prone. (I have never asked to remove pointer arithmetic from D).
D has pointers that you cannot do arithmetic on - called references. The semantics are carefully designed so a function cannot return a reference to a local; this is so that such locals will not have to be put onto the garbage-collected heap. Hence, references are usable in safe mode. Class references are also "pointers" that cannot have arithmetic done on them.
Oct 14 2010
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:i98frv$1dm2$1 digitalmars.com...
 bearophile wrote:
 Nick Sabalausky:

 Then you're wasting cycles every iteration (by doing an extra addition 
 and maybe an extra shift or even multiplication depending on T: Ie, 
 (cast(ubyte*)myArray.ptr) + i * T.sizeof). That was a pretty common 
 inner-loop optimization back in my C days.
With D sometimes array-based code is faster than pointer-based. With LDC they are usually equally efficient.
??? This makes no sense. The (ptr+i*T.sizeof) is an addressing mode on the x86, and comes at ZERO cost.
Guess it's been way too long since I've touched x86 asm and my memory's warped :/ OTOH, not all platforms are x86 (but maybe that's still a common thing on other architectures).
Oct 14 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
Nick Sabalausky wrote:
 "Walter Bright" <newshound2 digitalmars.com> wrote in message 
 news:i98frv$1dm2$1 digitalmars.com...
 bearophile wrote:
 Nick Sabalausky:

 Then you're wasting cycles every iteration (by doing an extra addition 
 and maybe an extra shift or even multiplication depending on T: Ie, 
 (cast(ubyte*)myArray.ptr) + i * T.sizeof). That was a pretty common 
 inner-loop optimization back in my C days.
With D sometimes array-based code is faster than pointer-based. With LDC they are usually equally efficient.
??? This makes no sense. The (ptr+i*T.sizeof) is an addressing mode on the x86, and comes at ZERO cost.
Guess it's been way too long since I've touched x86 asm and my memory's warped :/ OTOH, not all platforms are x86 (but maybe that's still a common thing on other architectures).
Those hardware addressing modes are not there for the 16 bit x86, and dmd's optimizer has a lot of code to rewrite loops to avoid needing them (called loop induction variables). These rewrites speed things up on 16 bit code, but slow things down for 32 bit code, and so are disabled for 32 bit code. Write a simple loop, try it and see.
Oct 14 2010
prev sibling next sibling parent "JimBob" <jim bob.com> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:i98frv$1dm2$1 digitalmars.com...
 bearophile wrote:
 Nick Sabalausky:

 Then you're wasting cycles every iteration (by doing an extra addition 
 and maybe an extra shift or even multiplication depending on T: Ie, 
 (cast(ubyte*)myArray.ptr) + i * T.sizeof). That was a pretty common 
 inner-loop optimization back in my C days.
With D sometimes array-based code is faster than pointer-based. With LDC they are usually equally efficient.
??? This makes no sense. The (ptr+i*T.sizeof) is an addressing mode on the x86, and comes at ZERO cost.
As long as T.sizeof is either 1, 2, 4, or 8 bytes.
Oct 15 2010
prev sibling parent reply Max Samukha <spambox d-coding.com> writes:
On 10/15/2010 05:55 AM, Walter Bright wrote:

 D has pointers that you cannot do arithmetic on - called references. The
 semantics are carefully designed so a function cannot return a reference
 to a local, this is so that such locals will not have to be put onto the
 garbage collected heap. Hence, references are usable in safe mode.
I think the above statement needs clarification. Honestly, I don't understand how references to non-class objects are supposed to work in SafeD. Consider:

struct S
{
    int x;
}

static S s;

ref S foo()
{
    return s;
}

void bar()
{
    foo().x = 1;
    assert(s.x == 1); // ok, s updated

    auto s2 = foo();
    s2.x = 2;
    assert(s.x == 2); // not ok, we need to use a pointer as below

    auto s3 = &foo();
    s3.x = 3;
    assert(s.x == 3); // ok, s updated
}

Since pointers are not allowed in SafeD, any non-trivial operations on a referenced object are extremely awkward because you have to pile all of them around the call returning the reference. For example, if I want to update s and then pass it by reference to another function:

void baz(ref S s) { }

void bar()
{
    baz(foo(s).x = 1); // awkward
}

Of course, we can use tricks like a trusted Ref struct wrapping a pointer to the referenced object. But I don't know how such a struct can prevent one from returning locals:

struct Ref(T)
{
    T* p;
    this(ref T v) { p = &v; }
    ref T getRef() { return *p; }
    alias getRef this;
}

ref Ref!T byref(T)(ref T v)
{
    return Ref!T(v);
}

ref S foo()
{
    S s;
    return byref(s); // local successfully escaped
}

Please comment.
Oct 15 2010
next sibling parent Max Samukha <spambox d-coding.com> writes:
On 10/15/2010 11:49 AM, Max Samukha wrote:

 ref Ref!T byref(T)(ref T v)
 {
      return Ref!T(v);
 }
should be

Ref!T byref(T)(ref T v)
{
    return Ref!T(v);
}
Oct 15 2010
prev sibling next sibling parent Max Samukha <spambox d-coding.com> writes:
On 10/15/2010 11:49 AM, Max Samukha wrote:

...and

 void bar()
 {
     baz(foo(s).x = 1); // awkward
 }
should be

void bar()
{
    baz(foo().x = 1); // awkward
}
Oct 15 2010
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Max Samukha wrote:
 Please comment.
The example relies on taking the address of a ref in a safe function. To close this hole, it appears that should be disallowed.
Oct 15 2010
parent reply Max Samukha <spambox d-coding.com> writes:
On 10/15/2010 12:32 PM, Walter Bright wrote:
 The example relies on taking the address of a ref in a safe function. To
 close this hole, it appears that should be disallowed.
And disallowing it makes references not so useful.

What I like about Go's solution is that it is consistent with closures. When a group of locals escapes with a closure (that is, when the address of the local function using that group is taken), they are copied to the heap. When a local escapes by ref (that is, when the address of the local is taken), it is also copied to the heap.

What I don't like about Go's closures/addresses-to-locals and D's delegates is that stuff is heap-allocated implicitly and by default. Go has even gone (sorry) as far as allocating copies *every* time the address of a local is taken.

That reminds me of the argument about "new" being necessary for classes because it makes the heap allocation explicit. It is difficult to say good-bye to "new", but at the same time we are somehow happy with implicitly allocated closures.
Oct 15 2010
next sibling parent reply Max Samukha <spambox d-coding.com> writes:
On 10/15/2010 01:49 PM, Max Samukha wrote:
 Go has even gone (sorry) as far as allocating copies *every* time the
 address of a local is taken.
I may be mistaken on that.
Oct 15 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
Max Samukha wrote:
 On 10/15/2010 01:49 PM, Max Samukha wrote:
 Go has even gone (sorry) as far as allocating copies *every* time the
 address of a local is taken.
I may be mistaken on that.
It would require a rather sophisticated compiler to be able to not do that.
Oct 15 2010
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Max Samukha wrote:
 On 10/15/2010 12:32 PM, Walter Bright wrote:
 The example relies on taking the address of a ref in a safe function. To
 close this hole, it appears that should be disallowed.
And disallowing it makes references not so useful.
I don't see why. They're useful enough.
 What I like about Go's solution is that it is consistent with closures. 
 When a group of locals escape with a closure (that is when the address 
 of the local function using that group is taken) they are copied to 
 heap. When a local escape by ref (that is when the address of the local 
 is taken), it is also copied to heap.
I understand how it works. There is a downside to it, though. In D2, closures get copied to the GC heap if there is a possibility of an escaping reference. A lot of people complain about this being unexpected hidden overhead.

The trouble with "copy any ref'd local to the heap" happening automatically is that the biggest advantage of passing by ref (efficiency) is automatically lost. Even if it does not escape, it is copied to the heap anyway, as you point out below.
 What I don't like about Go's closures/addresses-to-locals and D's 
 delegates is that stuff is heap-allocated implicitly and by default. Go 
 has even gone (sorry) as far as allocating copies *every* time the 
 address of a local is taken.
Exactly. (Though D won't do the copy if it can prove that the delegate does not escape.)
 That reminds me of the argument about "new" being necessary for classes 
 because it makes the heap allocation explicit. It is difficult to say 
 good-bye to "new" but at the same time we are somehow happy with 
 implicitly allocated closures.
I think that implicitly allocated closures are a lot less common than passing a local by reference.
Oct 15 2010
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Fri, 15 Oct 2010 21:34:39 +0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Max Samukha wrote:
 On 10/15/2010 12:32 PM, Walter Bright wrote:
 The example relies on taking the address of a ref in a safe function.  
 To
 close this hole, it appears that should be disallowed.
And disallowing it makes references not so useful.
I don't see why. They're useful enough.
 What I like about Go's solution is that it is consistent with closures.  
 When a group of locals escape with a closure (that is when the address  
 of the local function using that group is taken) they are copied to  
 heap. When a local escape by ref (that is when the address of the local  
 is taken), it is also copied to heap.
I understand how it works. There is a downside to it, though. In D2, closures get copied to the GC heap if there is a possibility of an escaping reference. A lot of people complain about this being unexpected hidden overhead. The trouble with "copy any ref'd local to the heap" automatically happening is the biggest advantage of passing by ref (efficiency) is automatically lost. Even if it does not escape, it is copied to the heap anyway, as you point out below.
 What I don't like about Go's closures/addresses-to-locals and D's  
 delegates is that stuff is heap-allocated implicitly and by default. Go  
 has even gone (sorry) as far as allocating copies *every* time the  
 address of a local is taken.
Exactly. (Though D won't do the copy if it can prove that the delegate does not escape.)
 That reminds me of the argument about "new" being necessary for classes  
 because it makes the heap allocation explicit. It is difficult to say  
 good-bye to "new" but at the same time we are somehow happy with  
 implicitly allocated closures.
I think that implicitly allocated closures are a lot less common than passing a local by reference.
IIRC there was some keyword (is that static?) that forces a closure NOT to allocate on the heap. I think I'll add an optional switch to ddmd that lists all the heap-allocated closures (similar to how -vtls works).
Oct 15 2010
parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Denis Koroskin (2korden gmail.com)'s article
 IIRC there was some keyword (is that static?) that forces a closure NOT to
 allocate on heap.
You're thinking of scope, and it works, but it's a huge hack. When &someNestedFunction is evaluated in the context of a function call and the parameter is scope, there is no heap allocation. I've been meaning to dump a function into std.typecons, called noHeap or something, that just takes a scope delegate and returns it, as a way to bypass heap allocations for other cases of taking the address of a nested function.

In general, I think this is a decent strategy: the implicit default behavior should be safe, easy to understand and easy to use even if it hurts performance. Optimizations should be carried out by the compiler when it can prove they won't affect code semantics, or by the programmer when he/she can prove they're necessary. I wouldn't mind having &someLocal heap allocate like closures do (and be allowed in SafeD), as long as there's an easy way to explicitly prevent this. For example, we could use the scope trick like with closures, and have a scopedAddress function in std.typecons that lets you unsafely take the address of a stack variable.
 I think I'll add an optional parameter that lists all the heap-allocated
 closures to ddmd (similar to how -vtls works).
Vote++. This would be useful after you've identified some code as the bottleneck, to figure out why it's so much slower than it should be.
Oct 15 2010
prev sibling parent Max Samukha <spambox d-coding.com> writes:
On 10/15/2010 08:34 PM, Walter Bright wrote:
 Max Samukha wrote:
 I don't see why. They're useful enough.
I might have exaggerated. But, for example, this use case:

struct S
{
    int x, y, z;
}

ref S foo();
void bar(ref S s);

void baz()
{
    auto s = &foo();
    s.x = 1;
    s.y = 2;
    s.z = 3;
    bar(*s);
}

will not be easy. One will have to use tricks like that unsafe Ref struct or to move the code accessing the referenced object to another function. Pretty awkward.
 What I like about Go's solution is that it is consistent with
 closures. When a group of locals escape with a closure (that is when
 the address of the local function using that group is taken) they are
 copied to heap. When a local escape by ref (that is when the address
 of the local is taken), it is also copied to heap.
I understand how it works.
Of course, you do!
 There is a downside to it, though. In D2,
 closures get copied to the GC heap if there is a possibility of an
 escaping reference. A lot of people complain about this being unexpected
 hidden overhead.
Yeah, I don't like these hidden allocations either.
 The trouble with "copy any ref'd local to the heap" automatically
 happening is the biggest advantage of passing by ref (efficiency) is
 automatically lost. Even if it does not escape, it is copied to the heap
 anyway, as you point out below.
Indeed. Why create a stack-allocated local if it is going to be copied to the heap anyway?
 What I don't like about Go's closures/addresses-to-locals and D's
 delegates is that stuff is heap-allocated implicitly and by default.
 Go has even gone (sorry) as far as allocating copies *every* time the
 address of a local is taken.
Exactly. (Though D won't do the copy if it can prove that the delegate does not escape.)
 That reminds me of the argument about "new" being necessary for
 classes because it makes the heap allocation explicit. It is difficult
 to say good-bye to "new" but at the same time we are somehow happy
 with implicitly allocated closures.
I think that implicitly allocated closures are a lot less common than passing a local by reference.
I have no idea how often closures are used. I use them rarely but some people do crazy things with them.
Oct 16 2010
prev sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2010-10-15 04:49:19 -0400, Max Samukha <spambox d-coding.com> said:

 static S s;
 ref S foo()
 {
      return s;
 }
Pointers are allowed in SafeD; pointer arithmetic is not. Taking the address of a static or global variable should be allowed. If this doesn't compile, you should report it as a bug:

static S s;

S* foo()
{
    return &s;
}

There's nothing unsafe about taking the address of a static or global variable since the pointer can never outlive the variable.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Oct 15 2010
parent Max Samukha <spambox d-coding.com> writes:
On 10/15/2010 02:14 PM, Michel Fortin wrote:
 On 2010-10-15 04:49:19 -0400, Max Samukha <spambox d-coding.com> said:

 static S s;
 ref S foo()
 {
 return s;
 }
Pointers are allowed in SafeD; pointer arithmetic is not. Taking the address of a static or global variable should be allowed. If this doesn't compile, you should report it as a bug:

static S s;

S* foo()
{
    return &s;
}

There's nothing unsafe about taking the address of a static or global variable since the pointer can never outlive the variable.
Ok, makes sense.
Oct 15 2010
prev sibling next sibling parent "Denis Koroskin" <2korden gmail.com> writes:
On Fri, 15 Oct 2010 03:23:00 +0400, Nick Sabalausky <a a.a> wrote:

 "bearophile" <bearophileHUGS lycos.com> wrote in message
 news:i97utq$d7e$1 digitalmars.com...
 - In my D programs I sometimes use pointers, but pointer arithmetic is
 indeed uncommon.
If you're actually doing systems-level or high-performance work, it can be essential in certain cases depending on how good the optimizer is. Loops like this are fairly typical (using 'for' instead of 'foreach'/'map'/etc for clarity):

T[] myArray = ...;
for(int i=0; i<max; i++)
{
    myArray[i] // <- do something with that
}

If the compiler isn't smart enough to turn that into this:

T[] myArray = ...;
auto end = myArray.ptr + max;
for(auto ptr = myArray.ptr; ptr < end; ptr++)
{
    *ptr // <- do something with that
}

Then you're wasting cycles every iteration (by doing an extra addition and maybe an extra shift or even multiplication depending on T: ie, (cast(ubyte*)myArray.ptr) + i * T.sizeof). That was a pretty common inner-loop optimization back in my C days.

And keep in mind, of course, real-world examples can be much more complex than that, so even if the compiler can handle trivial cases like this (I have no idea if it can, although using 'foreach' would probably make it easier - in some cases), it might not work for other cases. So unless the optimizer was known to be that good even in complex cases, I wouldn't want to be without pointer arithmetic. It's not needed often, but when it is needed it's indispensable (and still results in much more readable/portable code than delving down to asm).

Plus, I've never once done pointer arithmetic accidentally in D, so I don't see any safety to be gained from not allowing it.
First, the compiler doing pointer arithmetic != the user doing pointer arithmetic.

Second, I believe it's not about danger or accidental pointer arithmetic usage, it's more about syntax (and the ambiguities it introduces). For example, I once suggested using pointer syntax for classes too, and provided tons of arguments for that (ranging from solving the tail-const issue to solving many language inconsistencies that are in D between struct/class syntax, and a lot more), plus a ton of additional functionality it could optionally provide if implemented. There was only one problem with that - pointer arithmetic syntax got in the way. E.g.

Foo* foo = new Foo();
foo += 1; // is that operator overloading or pointer arithmetic?

Foo foo = new Foo();
foo += 1; // compare to current version

I still hope we deprecate pointer arithmetic and introduce another syntax for it in a next major D revision (i.e. D3).
 - Turning x++; into statements seems harsh, but indeed it solves some
 problems. In practice in my D programs the ++ is often used as a
 statement, to avoid bugs.
I've long been of the opinion that should just be a statement. All it ever does as an expression, if anything, is obfuscate code. I've never once seen a case where it clarified anything.
 - Segmented stack: allows to avoid some stack overflows at the price of  
 a
 bit of delay at calling functions.
Seems a bad idea to force the overhead of that, but it should definitely be available as an option. Contrary to what Walter and Andrei seem to think, 32-bit systems are still very much alive and will be for quite awhile longer. Especially when you remember that there are more computers out there than just desktops and servers. (Ex: When is a phone ever going to need 64-bit? Eventually maybe, but certainly not anytime soon.)
Oct 14 2010
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Nick Sabalausky wrote:
 Seems a bad idea to force the overhead of that, but it should definitely be 
 available as an option. Contrary to what Walter and Andrei seem to think, 
 32-bit systems are still very much alive and will be for quite awhile 
 longer. Especially when you remember that there are more computers out there 
 than just desktops and servers. (Ex: When is a phone ever going to need 
 64-bit? Eventually maybe, but certainly not anytime soon.)
16 bit processors died around 15 years after the introduction of 32 bit ones, even for embedded systems. If history repeats itself, figure 32 bit ones have about 5 years to go! As for what phones need, 15 years ago, who'd a thunk we'd be using phones today for internet browsing and playing feature movies? We were all simply enthralled by a phone that didn't have a cord attached to it and fit in your pocket. I came up with a really good virtual memory system for 16 bit code. The only problem was, by the time I figured it out, the people that needed it had moved on to protected mode with hardware vm. I feel D will be better off preparing for the coming 64 bit tsunami.
Oct 14 2010
parent reply "Jérôme M. Berger" <jeberger free.fr> writes:
Walter Bright wrote:
 Nick Sabalausky wrote:
 Seems a bad idea to force the overhead of that, but it should
 definitely be available as an option. Contrary to what Walter and
 Andrei seem to think, 32-bit systems are still very much alive and
 will be for quite awhile longer. Especially when you remember that
 there are more computers out there than just desktops and servers.
 (Ex: When is a phone ever going to need 64-bit? Eventually maybe, but
 certainly not anytime soon.)
 16 bit processors died around 15 years after the introduction of 32 bit
 ones, even for embedded systems. If history repeats itself, figure 32
 bit ones have about 5 years to go!
Funny thing is we still use some 8-bit microcontrollers in some situations :) But you're right, as soon as we need something more we go directly to 32 bits without stopping in the 16 bits square.

Jerome

-- 
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Oct 16 2010
parent Walter Bright <newshound2 digitalmars.com> writes:
Jérôme M. Berger wrote:
 Walter Bright wrote:
 Nick Sabalausky wrote:
 Seems a bad idea to force the overhead of that, but it should
 definitely be available as an option. Contrary to what Walter and
 Andrei seem to think, 32-bit systems are still very much alive and
 will be for quite awhile longer. Especially when you remember that
 there are more computers out there than just desktops and servers.
 (Ex: When is a phone ever going to need 64-bit? Eventually maybe, but
 certainly not anytime soon.)
16 bit processors died around 15 years after the introduction of 32 bit ones, even for embedded systems. If history repeats itself, figure 32 bit ones have about 5 years to go!
Funny thing is we still use some 8-bit microcontrollers in some situations :) But you're right, as soon as we need something more we go directly to 32 bits without stopping in the 16 bits square.
I can tell 16 bits is dead as a doornail because the 16 bit tools biz has dried up to nothing.
Oct 16 2010
prev sibling next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

 bearophile wrote:
 With D sometimes array-based code is faster than pointer-based. With LDC they
are usually equally efficient.
??? This makes no sense. The (ptr+i*T.sizeof) is an addressing mode on the x86, and comes at ZERO cost.
A little test program:

import std.c.stdio: printf;

size_t add1(size_t[] arr) {
    size_t sum = 0;
    for (size_t i = 0; i < arr.length; i++)
        sum += arr[i];
    return sum;
}

size_t add2(size_t[] arr) {
    size_t sum = 0;
    size_t* p = arr.ptr;
    for (size_t i = 0; i < arr.length; i++)
        sum += p[i];
    return sum;
}

size_t add3(size_t[] arr) {
    size_t sum = 0;
    size_t* p = arr.ptr;
    for (size_t i = 0; i < arr.length; i++)
        sum += *p++;
    return sum;
}

void main() {
    auto arr = new size_t[10_000_000];
    foreach (size_t i, ref el; arr)
        el = i;
    printf("%u\n", add1(arr));
    printf("%u\n", add2(arr));
    printf("%u\n", add3(arr));
}

---------------------

dmd 2.049, compiled with: dmd -O -release -inline

_D4test4add1FAkZk comdat
        push EAX
        xor ECX,ECX
        xor EDX,EDX
        push EBX
        cmp 0Ch[ESP],ECX
        je L28
        mov 4[ESP],EDX
        mov EDX,010h[ESP]
        mov EBX,EDX
        mov EAX,0Ch[ESP]
        mov EDX,4[ESP]
L1E:    add ECX,[EDX*4][EBX]
        inc EDX
        cmp EDX,0Ch[ESP]
        jb L1E
L28:    pop EBX
        mov EAX,ECX
        pop ECX
        ret 8

_D4test4add2FAkZk comdat
        xor ECX,ECX
        xor EDX,EDX
        cmp 4[ESP],ECX
        je L18
LA:     mov EAX,8[ESP]
        add ECX,[EDX*4][EAX]
        inc EDX
        cmp EDX,4[ESP]
        jb LA
L18:    mov EAX,ECX
        ret 8

_D4test4add3FAkZk comdat
        push EBX
        xor EDX,EDX
        xor ECX,ECX
        cmp 8[ESP],ECX
        mov EBX,0Ch[ESP]
        je L1D
LF:     mov EAX,EBX
        add EBX,4
        inc ECX
        add EDX,[EAX]
        cmp ECX,8[ESP]
        jb LF
L1D:    pop EBX
        mov EAX,EDX
        ret 8

This has an influence on running time too.

Bye,
bearophile
Oct 15 2010
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/14/10 17:06 CDT, bearophile wrote:
 Found through Reddit, talk slides by Rob Pike, "The Expressiveness of Go":
 http://go.googlecode.com/hg/doc/ExpressivenessOfGo.pdf

 http://www.reddit.com/r/programming/comments/dr6r4/talk_by_rob_pike_the_expressiveness_of_go_pdf/
It's a good deck I think. I made a comment on reddit: http://www.reddit.com/r/programming/comments/dr6r4/talk_by_rob_pike_the_expressiveness_of_go_pdf/c12dkde Andrei
Oct 15 2010
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"bearophile" <bearophileHUGS lycos.com> wrote in message 
news:i97utq$d7e$1 digitalmars.com...
 Found through Reddit, talk slides by Rob Pike, "The Expressiveness of Go":
 http://go.googlecode.com/hg/doc/ExpressivenessOfGo.pdf

 http://www.reddit.com/r/programming/comments/dr6r4/talk_by_rob_pike_the_expressiveness_of_go_pdf/

 This time I think I have understood most of the contents of the slides :-)


 Few interesting quotations:

 From Page 18:

 There are pointers but no pointer arithmetic
  - pointers are important to performance, pointer arithmetic not.
  - although it's OK to point inside a struct.
    - important to control layout of memory, avoid allocation
 Increment/decrement (p++) are statements, not expressions.
  - no confusion about order of evaluation
 Addresses last as long as they are needed.
  - take the address of a local variable, the implementation
    guarantees the memory survives while it's referenced.
 No implicit numerical conversions (float to int, etc.).
  - C's "usual arithmetic conversions" are a minefield.


 From page 19 and 20:

 Constants are "ideal numbers": no size or sign, hence no L
 or U or UL endings.

 Arithmetic with constants is high precision.  Only when
 assigned to a variable are they rounded or truncated to fit.

 A typed element in the expression sets the true type of the constant.


 From page 40:

 Goroutines have "segmented stacks":
   go f()
 starts f() executing concurrently on a new (small) stack.
 Stack grows and shrinks as needed.
 No programmer concern about stack size.
 No possibility for stack overflow.
 A couple of instructions of overhead on each function call, a
 huge improvement in simplicity and expressiveness.


 From page 46:

 The surprises you discover will be pleasant ones.
I just hope they get serious enough about functional programming to gain some monads to go along with their "goroutines".
Oct 15 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/15/10 16:25 CDT, Nick Sabalausky wrote:
 I just hope they get serious enough about functional programming to gain
 some monads to go along with their "goroutines".
They should call them "gonads". Andrei
Oct 15 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/15/10 17:34 CDT, Andrei Alexandrescu wrote:
 On 10/15/10 16:25 CDT, Nick Sabalausky wrote:
 I just hope they get serious enough about functional programming to gain
 some monads to go along with their "goroutines".
They should call them "gonads". Andrei
Wait, that was your actual joke. Sighhhh... Andrei
Oct 15 2010
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 On 10/15/10 17:34 CDT, Andrei Alexandrescu wrote:
 On 10/15/10 16:25 CDT, Nick Sabalausky wrote:
 I just hope they get serious enough about functional programming to gain
 some monads to go along with their "goroutines".
They should call them "gonads". Andrei
Wait, that was your actual joke. Sighhhh...
I see we should invite JokeExplainer to the forums!
Oct 15 2010
parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 16/10/2010 00:15, Walter Bright wrote:
 Andrei Alexandrescu wrote:
 On 10/15/10 17:34 CDT, Andrei Alexandrescu wrote:
 On 10/15/10 16:25 CDT, Nick Sabalausky wrote:
 I just hope they get serious enough about functional programming to
 gain
 some monads to go along with their "goroutines".
They should call them "gonads". Andrei
Wait, that was your actual joke. Sighhhh...
I see we should invite JokeExplainer to the forums!
I didn't get it... :/ (Nick's joke that is) -- Bruno Medeiros - Software Engineer
Nov 11 2010
parent reply Justin Johansson <no spam.com> writes:
On 11/11/10 22:56, Bruno Medeiros wrote:
 On 16/10/2010 00:15, Walter Bright wrote:
 Andrei Alexandrescu wrote:
 On 10/15/10 17:34 CDT, Andrei Alexandrescu wrote:
 On 10/15/10 16:25 CDT, Nick Sabalausky wrote:
 I just hope they get serious enough about functional programming to
 gain
 some monads to go along with their "goroutines".
They should call them "gonads". Andrei
Wait, that was your actual joke. Sighhhh...
I see we should invite JokeExplainer to the forums!
I didn't get it... :/ (Nick's joke that is)
Hi Bruno, It is an English language word play on sound-alike words. Google on: "define: gonads" I think Nick was suggesting that someone/something gets some "balls" though "ovaries" might not be out of the question also. :-) Trusting this explains well in your native language. Regards, Justin
Nov 11 2010
next sibling parent Justin Johansson <no spam.com> writes:
Addendum: To be sure, I think I forgot to say "monads" sounds like 
"gonads".


 Hi Bruno,

 It is an English language word play on sound-alike words.

 Google on: "define: gonads"

 I think Nick was suggesting that someone/something gets some "balls"
 though "ovaries" might not be out of the question also. :-)

 Trusting this explains well in your native language.

 Regards,
 Justin
Nov 11 2010
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 11/11/2010 12:10, Justin Johansson wrote:
 On 11/11/10 22:56, Bruno Medeiros wrote:
 On 16/10/2010 00:15, Walter Bright wrote:
 Andrei Alexandrescu wrote:
 On 10/15/10 17:34 CDT, Andrei Alexandrescu wrote:
 On 10/15/10 16:25 CDT, Nick Sabalausky wrote:
 I just hope they get serious enough about functional programming to
 gain
 some monads to go along with their "goroutines".
They should call them "gonads". Andrei
Wait, that was your actual joke. Sighhhh...
I see we should invite JokeExplainer to the forums!
I didn't get it... :/ (Nick's joke that is)
Hi Bruno, It is an English language word play on sound-alike words. Google on: "define: gonads" I think Nick was suggesting that someone/something gets some "balls" though "ovaries" might not be out of the question also. :-) Trusting this explains well in your native language. Regards, Justin
So Nick already had "gonads" in mind on that post, is that the case? -- Bruno Medeiros - Software Engineer
Nov 12 2010
parent reply "Nick Sabalausky" <a a.a> writes:
"Bruno Medeiros" <brunodomedeiros+spam com.gmail> wrote in message 
news:ibjd5l$2pv$1 digitalmars.com...
 On 11/11/2010 12:10, Justin Johansson wrote:
 On 11/11/10 22:56, Bruno Medeiros wrote:
 On 16/10/2010 00:15, Walter Bright wrote:
 Andrei Alexandrescu wrote:
 On 10/15/10 17:34 CDT, Andrei Alexandrescu wrote:
 On 10/15/10 16:25 CDT, Nick Sabalausky wrote:
 I just hope they get serious enough about functional programming to
 gain
 some monads to go along with their "goroutines".
They should call them "gonads". Andrei
Wait, that was your actual joke. Sighhhh...
I see we should invite JokeExplainer to the forums!
I didn't get it... :/ (Nick's joke that is)
Hi Bruno, It is an English language word play on sound-alike words. Google on: "define: gonads" I think Nick was suggesting that someone/something gets some "balls" though "ovaries" might not be out of the question also. :-) Trusting this explains well in your native language. Regards, Justin
So Nick already had "gonads" in mind on that post, is that the case?
My intended joke: Google Go has "coroutines" that it calls "goroutines" (because "go" + "coroutines" == "goroutines"). So I applied the same cutesy naming to "monads": "go" + "monads" == "gonads". And like Justin said, "gonads" also means "testicles" (and sometimes "ovaries"), so it's a pun and a rather odd name for a programming language feature. And somewhat ironically, it *would* take some serious gonads to name a language feature "gonads". (In English, saying that something requires balls/gonads/nuts/etc is a common slang way of saying it requires courage.)
Nov 23 2010
parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 24/11/2010 01:37, Nick Sabalausky wrote:
 "Bruno Medeiros"<brunodomedeiros+spam com.gmail>  wrote in message
 news:ibjd5l$2pv$1 digitalmars.com...
 On 11/11/2010 12:10, Justin Johansson wrote:
 On 11/11/10 22:56, Bruno Medeiros wrote:
 On 16/10/2010 00:15, Walter Bright wrote:
 Andrei Alexandrescu wrote:
 On 10/15/10 17:34 CDT, Andrei Alexandrescu wrote:
 On 10/15/10 16:25 CDT, Nick Sabalausky wrote:
 I just hope they get serious enough about functional programming to
 gain
 some monads to go along with their "goroutines".
They should call them "gonads". Andrei
Wait, that was your actual joke. Sighhhh...
I see we should invite JokeExplainer to the forums!
I didn't get it... :/ (Nick's joke that is)
Hi Bruno, It is an English language word play on sound-alike words. Google on: "define: gonads" I think Nick was suggesting that someone/something gets some "balls" though "ovaries" might not be out of the question also. :-) Trusting this explains well in your native language. Regards, Justin
So Nick already had "gonads" in mind on that post, is that the case?
My intended joke: Google Go has "coroutines" that it calls "goroutines" ( Because "go" + "coroutines" == "goroutines"). So I applied the same cutesy naming to "monads": "go" + "monads" == "gonads". And like Justin said, "gonads" also means "testicles" (and sometimes "ovaries"), so it's a pun and a rather odd name for a programming language feature.
Ok, just checking, thanks for the clarification. (I'm sometimes a bit obtuse with things like this)
 (In English, saying that something requires
 balls/gonads/nuts/etc is a common slang way of saying it requires courage.)
Yeah, that I know already. :) -- Bruno Medeiros - Software Engineer
Nov 24 2010
prev sibling next sibling parent reply Justin Johansson <no spam.com> writes:
On 16/10/2010 9:34 AM, Andrei Alexandrescu wrote:
 On 10/15/10 17:34 CDT, Andrei Alexandrescu wrote:
 On 10/15/10 16:25 CDT, Nick Sabalausky wrote:
 I just hope they get serious enough about functional programming to gain
 some monads to go along with their "goroutines".
They should call them "gonads". Andrei
Wait, that was your actual joke. Sighhhh... Andrei
Coincidentally, the official mailing list for the Go PL is known as go-nuts! :-)
Oct 15 2010
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/15/10 19:18 CDT, Justin Johansson wrote:
 On 16/10/2010 9:34 AM, Andrei Alexandrescu wrote:
 On 10/15/10 17:34 CDT, Andrei Alexandrescu wrote:
 On 10/15/10 16:25 CDT, Nick Sabalausky wrote:
 I just hope they get serious enough about functional programming to
 gain
 some monads to go along with their "goroutines".
They should call them "gonads". Andrei
Wait, that was your actual joke. Sighhhh... Andrei
Coincidentally, the official mailing list for Go PL is known as go-nuts! :-)
Speaking of which, I gave one more read this morning to the reddit discussion (http://www.reddit.com/r/programming/comments/dr6r4/talk_by_rob_pike_the_expressiveness_of_go_pdf/). Boy, that didn't Go well. Andrei
Oct 16 2010
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:i9akvf$1jlq$3 digitalmars.com...
 On 10/15/10 17:34 CDT, Andrei Alexandrescu wrote:
 On 10/15/10 16:25 CDT, Nick Sabalausky wrote:
 I just hope they get serious enough about functional programming to gain
 some monads to go along with their "goroutines".
They should call them "gonads". Andrei
Wait, that was your actual joke. Sighhhh...
Well it was a bit opaque. I was actually wondering if anyone would make the connection at all. :)
Oct 15 2010
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/15/2010 11:26 PM, Nick Sabalausky wrote:
 "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org>  wrote in message
 news:i9akvf$1jlq$3 digitalmars.com...
 On 10/15/10 17:34 CDT, Andrei Alexandrescu wrote:
 On 10/15/10 16:25 CDT, Nick Sabalausky wrote:
 I just hope they get serious enough about functional programming to gain
 some monads to go along with their "goroutines".
They should call them "gonads". Andrei
Wait, that was your actual joke. Sighhhh...
Well it was a bit opaque. I was actually wondering if anyone would make the connection at all. :)
Much obliged to play Captain Obvious' role. Andrei
Oct 16 2010
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
(Catching some older posts, I was busy)

Walter:

 In any case, inline assembler in D is a substantial productivity booster for
me 
 for anything that needs assembler. The inline assembler is also quite
ignorable, 
 if you don't like it.
I like the inline assembly feature of D. But language features aren't ignorable; in the real world you often need to modify or fix code written by other people. This means that a programmer who doesn't know assembly may be forced to fix bugs in modules that contain functions with asm. So no language feature is free: each has a cost. Bye, bearophile
Oct 17 2010
next sibling parent Clark Gaebel <cg.wowus.cg gmail.com> writes:
Assembly is vital for almost all CPU-bound applications. Making it
inline just makes people's lives easier.

On 10/17/10 20:23, bearophile wrote:
 (Catching some older posts, I was busy)
 
 Walter:
 
 In any case, inline assembler in D is a substantial productivity booster for
me 
 for anything that needs assembler. The inline assembler is also quite
ignorable, 
 if you don't like it.
I like the inline assembly feature of D. But language features aren't ignorable; in the real world you often need to modify or fix code written by other people. This means that a programmer who doesn't know assembly may be forced to fix bugs in modules that contain functions with asm. So no language feature is free: each has a cost. Bye, bearophile
-- Regards, -- Clark
Oct 17 2010
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 Walter:
 
 In any case, inline assembler in D is a substantial productivity booster
 for me for anything that needs assembler. The inline assembler is also
 quite ignorable, if you don't like it.
I like the inline assembly feature of D. But language features aren't ignorable; in the real world you often need to modify or fix code written by other people. This means that a programmer who doesn't know assembly may be forced to fix bugs in modules that contain functions with asm. So no language feature is free: each has a cost.
Its cost is a lot lower than writing the assembler in a separate asm file.
Oct 17 2010
prev sibling parent reply so <so so.do> writes:
Sorry, maybe it's just me, but that is not really an argument: if you
want to build a rocket, you hire capable people.

On Mon, 18 Oct 2010 03:23:21 +0300, bearophile <bearophileHUGS lycos.com>  
wrote:

 (Catching some older posts, I was busy)

 Walter:

 In any case, inline assembler in D is a substantial productivity  
 booster for me
 for anything that needs assembler. The inline assembler is also quite  
 ignorable,
 if you don't like it.
I like the inline assembly feature of D. But language features aren't ignorable; in the real world you often need to modify or fix code written by other people. This means that a programmer who doesn't know assembly may be forced to fix bugs in modules that contain functions with asm. So no language feature is free: each has a cost. Bye, bearophile
-- Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
Oct 18 2010
parent reply "Nick Sabalausky" <a a.a> writes:
"so" <so so.do> wrote in message news:op.vkrh77fb7dtt59 so-pc...
 Sorry, maybe it's just me, but that is not really an argument: if you 
 want to build a rocket, you hire capable people.
It's amazing how many software houses/departments don't do that. But of course, if they don't, it's their own damn problem.
Oct 18 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
Nick Sabalausky:

 It's amazing how many software houses/departments don't do that. But of 
 course, if they don't it's their own damn problem.
They want low-salary programmers, so they will avoid languages that may lead to higher salaries. This means uncommon languages (where programmers are more rare) or languages that may need the ability to read (or even write) "harder code" (like inline assembly). Bye, bearophile
Oct 18 2010
parent reply Paulo Pinto <pjmlp progtools.org> writes:
Am 18.10.2010 22:49, schrieb bearophile:
 Nick Sabalausky:

 It's amazing how many software houses/departments don't do that. But of
 course, if they don't it's their own damn problem.
They want low-salary programmers, so they will avoid languages that may lead to higher salaries. This means uncommon languages (where programmers are more rare) or languages that may need the ability to read (or even write) "harder code" (like inline assembly). Bye, bearophile
This is one of the reasons why Java has become such a huge language in the IT world.
Oct 19 2010
parent reply div0 <div0 sourceforge.net> writes:
On 19/10/2010 21:24, Paulo Pinto wrote:
 Am 18.10.2010 22:49, schrieb bearophile:
 Nick Sabalausky:

 It's amazing how many software houses/departments don't do that. But of
 course, if they don't it's their own damn problem.
They want low-salary programmers, so they will avoid languages that may lead to higher salaries. This means uncommon languages (where programmers are more rare) or languages that may need the ability to read (or even write) "harder code" (like inline assembly). Bye, bearophile
This is one of the reasons why Java has become such a huge language in the IT world.
Yeah, but to be fair, I work in a fully C++ shop and only 3 (maybe 4) of us out of 18 will *ever* write template code, even for really trivial stuff. In my experience, most C++ programmers just don't/can't get templates, and I very much doubt that awkward syntax is the root cause. If you are one of those people, why would you choose a language with templates? They are of no damn use to you. -- My enormous talent is exceeded only by my outrageous laziness. http://www.ssTk.co.uk
Oct 19 2010
parent reply retard <re tard.com.invalid> writes:
Tue, 19 Oct 2010 21:30:44 +0100, div0 wrote:

 On 19/10/2010 21:24, Paulo Pinto wrote:
 Am 18.10.2010 22:49, schrieb bearophile:
 Nick Sabalausky:

 It's amazing how many software houses/departments don't do that. But
 of course, if they don't it's their own damn problem.
They want low-salary programmers, so they will avoid languages that may lead to higher salaries. This means uncommon languages (where programmers are more rare) or languages that may need the ability to read (or even write) "harder code" (like inline assembly). Bye, bearophile
This is one of the reasons why Java has become such a huge language in the IT world.
Yeah, but to be fair, I work in a fully C++ shop and only 3 (maybe 4) of us out of 18 will *ever* write template code, even for really trivial stuff. In my experience, most C++ programmers just don't/can't get templates, and I very much doubt that awkward syntax is the root cause. If you are one of those people, why would you choose a language with templates? They are of no damn use to you.
Templates are used for at least two different purposes: to provide 1) (generic) parametric polymorphism and 2) (generative) metaprogramming. Often the parametric version is enough (e.g. simple uses of collections). The first case is "optimized" in many modern languages. For instance, in Scala polymorphic collections are rather simple to use (C++ equivalents in the comments):

  // using namespace std;
  val l = List(1,2,3)                // list<int> l(1,2,3);
  println("The contents are: ")      // cout << "The contents are: ";
  println(l.mkString(" "))           // for (list<int>::iterator it = l.begin(); it != l.end(); it++)
                                     //   cout << *it << " ";
                                     // cout << endl;
  println("Squared: ")               // cout << "Squared: ";
  println(l.map(2 *).mkString(" "))  // for (list<int>::iterator it = l.begin(); it != l.end(); it++)
                                     //   cout << (*it)*(*it) << " ";
                                     // cout << endl;

Typical use cases don't require type annotations anywhere. The only problem with high-level languages is that they may in some cases put more pressure on the compiler's optimizer. What's funny is that the Scala developer here "implicitly" used terribly complex templates behind the scenes, and it's as simple as writing in some toy language. Overall, even novice developers are so expensive that you can often make up for the lost efficiency with bigger hardware, which is cheaper than the extra development time would have been. This is often the situation *now*; it might change when the large cloud servers run out of resources.
Oct 19 2010
next sibling parent retard <re tard.com.invalid> writes:
Tue, 19 Oct 2010 22:55:31 +0000, retard wrote:

 println(l.map(2 *).mkString(" "))
Made a mistake here, the correct code should be: println(l.map(a => a*a).mkString(" "))
Oct 19 2010
prev sibling parent type<erasure> <xx xx.xx> writes:
retard Wrote:

 Tue, 19 Oct 2010 21:30:44 +0100, div0 wrote:
 
 On 19/10/2010 21:24, Paulo Pinto wrote:
 Am 18.10.2010 22:49, schrieb bearophile:
 Nick Sabalausky:

 It's amazing how many software houses/departments don't do that. But
 of course, if they don't it's their own damn problem.
They want low-salary programmers, so they will avoid languages that may lead to higher salaries. This means uncommon languages (where programmers are more rare) or languages that may need the ability to read (or even write) "harder code" (like inline assembly). Bye, bearophile
This is one of the reasons why Java has become such a huge language in the IT world.
Yeah, but to be fair, I work in a fully C++ shop and only 3 (maybe 4) of us out of 18 will *ever* write template code, even for really trivial stuff. In my experience, most C++ programmers just don't/can't get templates, and I very much doubt that awkward syntax is the root cause. If you are one of those people, why would you choose a language with templates? They are of no damn use to you.
Templates are used for at least two different purposes - to provide 1) (generic) parametric polymorphism and 2) (generative) metaprogramming code. Often the parametric version is enough (e.g. simple uses of collections).
Complex C++/D collections are not simple generics. They have custom allocators and so forth. Do your homework, kid.
 
 The first case is "optimized" in many modern language. For instance in 
 Scala polymorphic collections are rather simple to use:
Ha, you don't know anything about the Java VM, now do you? Type erasure removes all efficiency and makes your stupid code run at least twice as slow as real generics. On top of that come VM startup time and other garbage collection costs. Your solution is screwed when put against real native C++/D metaprogramming. [snip ugly Scala & C++]
 Typical use cases don't require type annotations anywhere. The only 
 problem with high level languages is that they may in some cases put more 
 pressure to the optimizations in the compiler.
We want overly complex compilers with 10+ second run times? Hell no.
 What's funny is that the Scala developer here "implicitly" used terribly 
 complex templates behind the scenes. And it's as simple as writing in 
 some toy language.
Scala is just an academic toy.
 Overall, even the novice developers are so expensive that you can often 
 replace the lost effiency with bigger hardware, which is cheaper than the 
 extra development time would have been. This is many times the situation 
 *now*, it might change when the large cloud servers run out of resources.
Slow code costs more in cloud services even today. You want cheap? You write native code.
Oct 19 2010