digitalmars.D - New slides about Go
- bearophile (204/204) Oct 14 2010 Found through Reddit, talk slides by Rob Pike, "The Expressiveness of Go...
- Walter Bright (6/16) Oct 14 2010 There's a huge cost to it, however. You can't call C code directly anymo...
- Nick Sabalausky (3/17) Oct 14 2010 Which is incredibly helpful on 32-bit systems.
- bearophile (5/6) Oct 14 2010 I don't know/understand why. Probably I just don't know how a segmented ...
- Vladimir Panteleev (9/17) Oct 14 2010 Calling C code should still be possible, but it will involve checking fo...
- Walter Bright (5/16) Oct 14 2010 Segmented stacks add a test in *every* function's prolog to test and see...
- Denis Koroskin (4/6) Oct 14 2010 I've heard that happens in D, too. You can still call C functions at you...
- bearophile (5/7) Oct 14 2010 You have stack overflows with DMD too, but I think in a segmented stack ...
- Vladimir Panteleev (8/10) Oct 14 2010 I believe D (DMD, at least) is in the exact same situation as C is as fa...
- Walter Bright (5/13) Oct 14 2010 The point of a segmented stack is to allocate stack in small bits, meani...
- Nick Sabalausky (42/49) Oct 14 2010 If you're actually doing systems-level or high-performance work, it can ...
- Walter Bright (3/10) Oct 14 2010 It's hard to see how to implement, say, a storage allocator with no poin...
- Walter Bright (2/4) Oct 14 2010 Here's another one. Try implementing va_arg without pointer arithmetic.
- Paulo Pinto (4/8) Oct 14 2010 Easy, just implement a small assembly funtion.
- Walter Bright (14/26) Oct 15 2010 Yeah, and I've done that. It doesn't work out as well as you say, nor is...
- Paulo Pinto (13/38) Oct 15 2010 Still most modern languages are moving away from inline assembly.
- so (4/45) Oct 15 2010 --
- bearophile (5/6) Oct 15 2010 Inline assembly is good to learn and teach assembly programming too :-)
- Paulo Pinto (19/27) Oct 15 2010 And to be abused as well.
- Andrei Alexandrescu (3/44) Oct 15 2010 Sounds like violent agreement to me.
- Walter Bright (12/21) Oct 15 2010 It's a pain to write an inline assembler and figure out how to integrate...
- Paulo Pinto (5/29) Oct 16 2010 Not sure about their rationale, but here is a Visual C++ team blog entry...
- Walter Bright (3/7) Oct 16 2010 Thanks for the link. The user comments on it are less than kind about VC...
- bearophile (8/21) Oct 14 2010 I think they are trying to design a safer language. Pointer arithmetic i...
- Jonathan M Davis (11/26) Oct 14 2010 There's nothing wrong with a language not having pointer arithmetic. It ...
- Walter Bright (8/21) Oct 14 2010 ??? This makes no sense.
- Nick Sabalausky (6/19) Oct 14 2010 Guess it's been way too long since I've touched x86 asm and my memory's
- Walter Bright (6/28) Oct 14 2010 Those hardware addressing modes are not there for the 16 bit x86, and dm...
- JimBob (3/16) Oct 15 2010 As long as T.sizeof is either 1, 2, 4, or 8 bytes.
- Max Samukha (55/59) Oct 15 2010 I think the above statement needs clarification. Honestly, I don't
- Max Samukha (6/10) Oct 15 2010 should be
- Max Samukha (7/11) Oct 15 2010 should be
- Walter Bright (3/4) Oct 15 2010 The example relies on taking the address of a ref in a safe function. To...
- Max Samukha (15/17) Oct 15 2010 And disallowing it makes references not so useful.
- Max Samukha (2/4) Oct 15 2010 I may be mistaken on that.
- Walter Bright (2/7) Oct 15 2010 It would require a rather sophisticated compiler to be able to not do th...
- Walter Bright (12/31) Oct 15 2010 I understand how it works. There is a downside to it, though. In D2, clo...
- Denis Koroskin (6/39) Oct 15 2010 IIRC there was some keyword (is that static?) that forces a closure NOT ...
- dsimcha (18/22) Oct 15 2010 You're thinking of scope, and it works but it's a huge hack. When
- Max Samukha (25/53) Oct 16 2010 I might have exaggerated. But, for example, this use case:
- Michel Fortin (15/20) Oct 15 2010 Pointers are allowed in SafeD; pointer arithmetic is not. Taking the
- Max Samukha (2/18) Oct 15 2010 Ok, makes sense.
- Denis Koroskin (15/75) Oct 14 2010 First, compiler doing pointer arithmetics != user doing pointer arithmet...
- Walter Bright (11/17) Oct 14 2010 16 bit processors died around 15 years after the introduction of 32 bit ...
- "Jérôme M. Berger" (9/22) Oct 16 2010 Funny thing is we still use some 8-bit microcontrollers in some
- Walter Bright (3/19) Oct 16 2010 I can tell 16 bits is dead as a doornail because the 16 bit tools biz ha...
- bearophile (85/91) Oct 15 2010 A little test program:
- Andrei Alexandrescu (4/7) Oct 15 2010 It's a good deck I think. I made a comment on reddit:
- Nick Sabalausky (4/38) Oct 15 2010 I just hope they get serious enough about functional programming to gain...
- Andrei Alexandrescu (3/5) Oct 15 2010 They should call them "gonads".
- Andrei Alexandrescu (3/8) Oct 15 2010 Wait, that was your actual joke. Sighhhh...
- Walter Bright (2/12) Oct 15 2010 I see we should invite JokeExplainer to the forums!
- Bruno Medeiros (5/18) Nov 11 2010 I didn't get it... :/
- Justin Johansson (9/26) Nov 11 2010 Hi Bruno,
- Justin Johansson (2/10) Nov 11 2010
- Bruno Medeiros (4/32) Nov 12 2010 So Nick already had "gonads" in mind on that post, is that the case?
- Nick Sabalausky (11/47) Nov 23 2010 My intended joke:
- Bruno Medeiros (6/54) Nov 24 2010 Ok, just checking, thanks for the clarification. (I'm sometimes a bit
- Justin Johansson (3/13) Oct 15 2010 Coincidentally, the official mailing list for Go PL
- Andrei Alexandrescu (6/22) Oct 16 2010 Speaking of which, I gave one more read this morning to the reddit
- Nick Sabalausky (4/13) Oct 15 2010 Well it was a bit opaque. I was actually wondering if anyone would make ...
- Andrei Alexandrescu (3/18) Oct 16 2010 Much obliged to play Captain Obvious' role.
- bearophile (5/8) Oct 17 2010 I like the inline assembly feature of D. But language features aren't ig...
- Clark Gaebel (6/18) Oct 17 2010 Assembly is vital for almost all CPU-bound applications. Making it
- Walter Bright (2/13) Oct 17 2010 It's cost is a lot lower than writing the assembler in a separate asm fi...
- so (6/20) Oct 18 2010 Sorry maybe that is just me but that is not really an argument, if you
- Nick Sabalausky (3/5) Oct 18 2010 It's amazing how many software houses/departments don't do that. But of
- bearophile (4/6) Oct 18 2010 They want low-salary programmers, so they will avoid languages that may ...
- Paulo Pinto (3/9) Oct 19 2010 This is one of the reasons why Java has become such a huge language
- div0 (11/26) Oct 19 2010 yeah but to be fair, I work in a fully C++ shop and only 3 (maybe 4) of
Found through Reddit, talk slides by Rob Pike, "The Expressiveness of Go":
http://go.googlecode.com/hg/doc/ExpressivenessOfGo.pdf
http://www.reddit.com/r/programming/comments/dr6r4/talk_by_rob_pike_the_expressiveness_of_go_pdf/

This time I think I have understood most of the contents of the slides :-) A few interesting quotations:

From page 18:

There are pointers but no pointer arithmetic
- pointers are important to performance, pointer arithmetic not.
- although it's OK to point inside a struct.
- important to control layout of memory, avoid allocation

Increment/decrement (p++) are statements, not expressions.
- no confusion about order of evaluation

Addresses last as long as they are needed.
- take the address of a local variable, the implementation guarantees the memory survives while it's referenced.

No implicit numerical conversions (float to int, etc.).
- C's "usual arithmetic conversions" are a minefield.

From pages 19 and 20:

Constants are "ideal numbers": no size or sign, hence no L or U or UL endings. Arithmetic with constants is high precision. Only when assigned to a variable are they rounded or truncated to fit. A typed element in the expression sets the true type of the constant.

From page 40:

Goroutines have "segmented stacks": go f() starts f() executing concurrently on a new (small) stack. The stack grows and shrinks as needed. No programmer concern about stack size. No possibility of stack overflow. A couple of instructions of overhead on each function call, a huge improvement in simplicity and expressiveness.

From page 46:

The surprises you discover will be pleasant ones.

--------------------

Some comments:

- In my D programs I sometimes use pointers, but pointer arithmetic is indeed uncommon.

- Turning x++; into a statement seems harsh, but it does solve some problems. In practice in my D programs ++ is often used as a statement, to avoid bugs.
- I think "take the address of a local variable, the implementation guarantees the memory survives while it's referenced" means that the variable gets copied to the heap.

- Constant management in Go: seems cute.

- Segmented stacks: they avoid some stack overflows at the price of a small delay when calling functions.

- The comment from page 46 refers to a language that is orthogonal, and I think it is probably correct. That's one of the main advantages of an orthogonal design: you are free to create many combinations.

--------------------

On the Go site there is a "playground", similar to what the Ideone and Codepad sites offer for D2/D1. It contains some small programs, and you may modify them and compile almost arbitrary Go code.

A little Go example in the playground shows closures and the comma/tuple syntax, similar to Python's:

package main

// fib returns a function that returns
// successive Fibonacci numbers.
func fib() func() int {
    a, b := 0, 1
    return func() int {
        a, b = b, a+b
        return b
    }
}

func main() {
    f := fib()
    // Function calls are evaluated left-to-right.
    println(f(), f(), f(), f(), f())
}

Something similar in D2 (D lacks a handy unpacking syntax, and I think it currently doesn't guarantee that function calls are evaluated left-to-right):

import std.stdio: writeln;

int delegate() fib() {
    int a = 0;
    int b = 1;
    return {
        auto a_old = a;
        a = b;
        b = a_old + b;
        return b;
    };
}

void main() {
    auto f = fib();
    // function calls are not necessarily evaluated left-to-right
    writeln(f(), " ", f(), " ", f(), " ", f(), " ", f());
}

Another example on the Go site, which shows the segmented stacks at work:

// Peano integers are represented by a linked list
// whose nodes contain no data (the nodes are the data).
// See: http://en.wikipedia.org/wiki/Peano_axioms

// This program demonstrates the power of Go's
// segmented stacks when doing massively recursive
// computations.
package main

// Number is a pointer to a Number
type Number *Number

// The arithmetic value of a Number is the count of
// the nodes comprising the list.
// (See the count function below.)

// -------------------------------------
// Peano primitives

func zero() *Number {
    return nil
}

func isZero(x *Number) bool {
    return x == nil
}

func add1(x *Number) *Number {
    e := new(Number)
    *e = x
    return e
}

func sub1(x *Number) *Number {
    return *x
}

func add(x, y *Number) *Number {
    if isZero(y) {
        return x
    }
    return add(add1(x), sub1(y))
}

func mul(x, y *Number) *Number {
    if isZero(x) || isZero(y) {
        return zero()
    }
    return add(mul(x, sub1(y)), x)
}

func fact(n *Number) *Number {
    if isZero(n) {
        return add1(zero())
    }
    return mul(fact(sub1(n)), n)
}

// -------------------------------------
// Helpers to generate/count Peano integers

func gen(n int) *Number {
    if n > 0 {
        return add1(gen(n - 1))
    }
    return zero()
}

func count(x *Number) int {
    if isZero(x) {
        return 0
    }
    return count(sub1(x)) + 1
}

// -------------------------------------
// Print i! for i in [0,9]

func main() {
    for i := 0; i <= 9; i++ {
        f := count(fact(gen(i)))
        println(i, "! =", f)
    }
}

It's easy to translate it to D:

import std.stdio: writeln;

struct Number {
    Number* next;
    this(Number* ptr) {
        next = ptr;
    }
}

// -------------------------------------
// Peano primitives

Number* zero() {
    return null;
}

bool isZero(Number* x) {
    return x == null;
}

Number* add1(Number* x) {
    return new Number(x);
}

Number* sub1(Number* x) {
    return x.next;
}

Number* add(Number* x, Number* y) {
    if (isZero(y))
        return x;
    return add(add1(x), sub1(y));
}

Number* mul(Number* x, Number* y) {
    if (isZero(x) || isZero(y))
        return zero();
    return add(mul(x, sub1(y)), x);
}

Number* fact(Number* n) {
    if (isZero(n))
        return add1(zero());
    return mul(fact(sub1(n)), n);
}

// -------------------------------------
// Helpers to generate/count Peano integers

Number* gen(int n) {
    if (n <= 0)
        return zero();
    return add1(gen(n - 1));
}

int count(Number* x) {
    if (isZero(x))
        return 0;
    return count(sub1(x)) + 1;
}

// -------------------------------------

void main() {
    foreach (i; 0 .. 11) {
        int f = count(fact(gen(i)));
        writeln(i, "! = ", f);
    }
}

But compiled normally on Windows it leads to a stack overflow; you need to add -L/STACK:10000000.

Bye,
bearophile
Oct 14 2010
bearophile wrote:
> From page 40:
> Goroutines have "segmented stacks": go f() starts f() executing concurrently on a new (small) stack. Stack grows and shrinks as needed. No programmer concern about stack size. No possibility for stack overflow. A couple of instructions of overhead on each function call, a huge improvement in simplicity and expressiveness.

There's a huge cost to it, however. You can't call C code directly anymore.

Anyhow, this problem simply goes away with 64 bits. You can allocate each thread gigabytes of address space, faulting it in as required, and still be able to have billions of threads. Segmented stacks would have been a great idea 10 years ago.
Oct 14 2010
"Walter Bright" <newshound2 digitalmars.com> wrote in message news:i97v5c$d1u$1 digitalmars.com...bearophile wrote:Which is incredibly helpful on 32-bit systems.From page 40: Goroutines have "segmented stacks": go f() starts f() executing concurrently on a new (small) stack. Stack grows and shrinks as needed. No programmer concern about stack size. No possibility for stack overflow. A couple of instructions of overhead on each function call, a huge improvement in simplicity and expressiveness.There's a huge cost to it, however. You can't call C code directly anymore. Anyhow, this problem simply goes away with 64 bits.
Oct 14 2010
Walter:
> You can't call C code directly anymore.

I don't know/understand why. Probably I just don't know how a segmented stack is implemented/structured. If the stack is growable, I presume there's some free space anyway at the end of it. So you are not forced to test for the stack length, so you may call a C function and hope it will not blow the free stack left. Probably I am wrong, but I don't know why :-)

Bye,
bearophile
Oct 14 2010
On Fri, 15 Oct 2010 02:04:59 +0300, bearophile <bearophileHUGS lycos.com> wrote:
> Walter:
>> You can't call C code directly anymore.
> I don't know/understand why. Probably I just don't know how a segmented stack is implemented/structured. If the stack is growable, I presume there's some free space anyway at the end of it. So you are not forced to test for the stack length, so you may call a C function and hope it will not blow the free stack left. Probably I am wrong, but I don't know why :-)

Calling C code should still be possible, but it will involve checking for free space on the stack and preallocating as required. A possible optimization is to use one "C stack" per thread (with the assumption that there's no re-entry into our code from C code).

--
Best regards,
Vladimir                            mailto:vladimir thecybershadow.net
Oct 14 2010
bearophile wrote:
> Walter:
>> You can't call C code directly anymore.
> I don't know/understand why. Probably I just don't know how a segmented stack is implemented/structured. If the stack is growable, I presume there's some free space anyway at the end of it. So you are not forced to test for the stack length, so you may call a C function and hope it will not blow the free stack left. Probably I am wrong, but I don't know why :-)

Segmented stacks add a test in *every* function's prolog to test and see if the stack is exhausted, and if so, switch to a new stack. Exactly zero C code does this. So if you're near the end of your stack segment, and you call a C function, boom.
Oct 14 2010
On Fri, 15 Oct 2010 03:41:50 +0400, Walter Bright <newshound2 digitalmars.com> wrote:
> So if you're near the end of your stack segment, and you call a C function, boom.

I've heard that happens in D, too. You can still call C functions at your peril, and no one has complained so far.
Oct 14 2010
Denis Koroskin:
> I've heard that happens in D, too. You can still call C functions at your peril, and no one has complained so far.

You have stack overflows with DMD too, but I think in a segmented stack the segments are smaller than an average D stack, so it's more probable to go past one of them (I presume segmented stacks are like a deque data structure, that is, a dynamic array of pointers to fixed-size memory blocks).

Currently the main D compiler has nearly nothing to help against stack overflows: no stack guards, no static tools to compute the max stack used by a function/program, etc. I think LDC has a bit of optional stack guards.

Bye,
bearophile
Oct 14 2010
On Fri, 15 Oct 2010 02:45:33 +0300, Denis Koroskin <2korden gmail.com> wrote:
> I've heard that happens in D, too. You can still call C functions at your peril, and no one has complained so far.

I believe D (DMD, at least) is in the exact same situation as C as far as the stack goes. There is some code in the garbage collector to get the bounds of each thread's stack, but that's it, I think.

--
Best regards,
Vladimir                            mailto:vladimir thecybershadow.net
Oct 14 2010
Vladimir Panteleev wrote:
> On Fri, 15 Oct 2010 02:45:33 +0300, Denis Koroskin <2korden gmail.com> wrote:
>> I've heard that happens in D, too. You can still call C functions at your peril, and no one has complained so far.
> I believe D (DMD, at least) is in the exact same situation as C as far as the stack goes.

The point of a segmented stack is to allocate stack in small bits, meaning you'll be highly likely to run out of stack calling functions that do not check for stack overflow. The usual C method, which is followed by D, is to estimate the max stack used beforehand.
Oct 14 2010
"bearophile" <bearophileHUGS lycos.com> wrote in message news:i97utq$d7e$1 digitalmars.com...- In my D programs I sometimes use pointers, but pointer arithmetic is indeed uncommon.If you're actually doing systems-level or high-performance work, it can be essential in certain cases depending on how good the optimizer is. Loops like this are fairly typical (using 'for' instead of 'foreach'/'map'/etc for clarity): T[] myArray = ...; for(int i=0; i<max; i++) { myArray[i] // <- do something with that } If the compiler isn't smart enough to turn that into this: T[] myArray = ...; auto ptr = myArray.ptr; auto end = myArray.ptr + max; for(auto ptr = myArray.ptr; ptr<end; ptr++) { *myArray // <- do something with that } Then you're wasting cycles every iteration (by doing an extra addition and maybe an extra shift or even multiplication depending on T: Ie, (cast(ubyte*)myArray.ptr) + i * T.sizeof). That was a pretty common inner-loop optimization back in my C days. And keep in mind, of course, real-world examples can be much more complex than that, so even if the compiler can handle trivial cases like this (I have no idea if it can, although using 'foreach' would probably make it easier - in some cases), it might not work for other cases. So unless the optimizer was known to be that good even in complex cases, I wouldn't want to be without pointer arithmetic. It's not needed often, but when it is needed it's indispensable (and still results in much more readable/portable code then delving down to asm). Plus, I've never once done pointer arithmetic accidentally in D, so I don't see any safety to be gained from not allowing it.- Turning x++; into statements seems harsh, but indeed it solves some problems. In practice in my D programs the ++ is often used as a statement, to avoid bugs.I've long been of the opinion that should just be a statement. All it ever does as an expression, if anything, is obfuscate code. 
I've never once seen a case where it clarified anything.- Segmented stack: allows to avoid some stack overflows at the price of a bit of delay at calling functions.Seems a bad idea to force the overhead of that, but it should definitely be available as an option. Contrary to what Walter and Andrei seem to think, 32-bit systems are still very much alive and will be for quite awhile longer. Especially when you remember that there are more computers out there than just desktops and servers. (Ex: When is a phone ever going to need 64-bit? Eventually maybe, but certainly not anytime soon.)
Oct 14 2010
Nick Sabalausky wrote:"bearophile" <bearophileHUGS lycos.com> wrote in message news:i97utq$d7e$1 digitalmars.com...It's hard to see how to implement, say, a storage allocator with no pointer arithmetic.- In my D programs I sometimes use pointers, but pointer arithmetic is indeed uncommon.If you're actually doing systems-level or high-performance work, it can be essential in certain cases depending on how good the optimizer is.
Oct 14 2010
Walter Bright wrote:
> It's hard to see how to implement, say, a storage allocator with no pointer arithmetic.

Here's another one. Try implementing va_arg without pointer arithmetic.
Oct 14 2010
Easy, just implement a small assembly function. Not everything has to be in the language.

"Walter Bright" <newshound2 digitalmars.com> wrote in message news:i984lr$odj$3 digitalmars.com...
> Walter Bright wrote:
>> It's hard to see how to implement, say, a storage allocator with no pointer arithmetic.
> Here's another one. Try implementing va_arg without pointer arithmetic.
Oct 14 2010
Paulo Pinto wrote:
> Easy, just implement a small assembly function. Not everything has to be in the language.
> "Walter Bright" <newshound2 digitalmars.com> wrote in message news:i984lr$odj$3 digitalmars.com...
>> Walter Bright wrote:
>>> It's hard to see how to implement, say, a storage allocator with no pointer arithmetic.
>> Here's another one. Try implementing va_arg without pointer arithmetic.

Yeah, and I've done that. It doesn't work out as well as you say, nor is it that easy. Problems:

1. You have to reimplement it for every platform and every memory model.

2. For some systems, like Windows, there are a wide variety of assemblers. They all use slightly different syntax. Distributing an asm file means an *unending* stream of complaints from people who don't have an assembler or have a different one than yours.

3. Getting all the boilerplate segment declarations right is a nuisance.

4. Name mangling.

5. Next your asm code all breaks when you want to recompile your app as a shared library.

6. Asm files are a nightmare on OSX.

A language should be there to solve problems, not create them :-)
Oct 15 2010
Still, most modern languages are moving away from inline assembly. Even Microsoft has dropped inline assembly support in the 64-bit version of Visual C++, pointing developers to MASM.

People will always complain no matter what. Just use the official assembler for the target platform.

Personally, the last time I used inline assembly I was still targeting MS-DOS, a long time ago, and it is actually one of the features I don't like in D.

--
Paulo

"Walter Bright" <newshound2 digitalmars.com> wrote in message news:i98ub5$2bk7$1 digitalmars.com...
> Yeah, and I've done that. It doesn't work out as well as you say, nor is it that easy. Problems:
> 1. You have to reimplement it for every platform and every memory model.
> 2. For some systems, like Windows, there are a wide variety of assemblers. They all use slightly different syntax. Distributing an asm file means an *unending* stream of complaints from people who don't have an assembler or have a different one than yours.
> 3. Getting all the boilerplate segment declarations right is a nuisance.
> 4. Name mangling.
> 5. Next your asm code all breaks when you want to recompile your app as a shared library.
> 6. Asm files are a nightmare on OSX.
> A language should be there to solve problems, not create them :-)
> Paulo Pinto wrote:
>> Easy, just implement a small assembly function. Not everything has to be in the language.
>> "Walter Bright" <newshound2 digitalmars.com> wrote in message news:i984lr$odj$3 digitalmars.com...
>>> Walter Bright wrote:
>>>> It's hard to see how to implement, say, a storage allocator with no pointer arithmetic.
>>> Here's another one. Try implementing va_arg without pointer arithmetic.
Oct 15 2010
> Personally, the last time I used inline assembly I was still targeting MS-DOS, a long time ago, and it is actually one of the features I don't like in D.

Then don't use that feature; what is wrong with having a feature you don't use?

> --
> Paulo
> "Walter Bright" <newshound2 digitalmars.com> wrote in message news:i98ub5$2bk7$1 digitalmars.com...
>> Yeah, and I've done that. It doesn't work out as well as you say, nor is it that easy. Problems:
>> 1. You have to reimplement it for every platform and every memory model.
>> 2. For some systems, like Windows, there are a wide variety of assemblers. They all use slightly different syntax. Distributing an asm file means an *unending* stream of complaints from people who don't have an assembler or have a different one than yours.
>> 3. Getting all the boilerplate segment declarations right is a nuisance.
>> 4. Name mangling.
>> 5. Next your asm code all breaks when you want to recompile your app as a shared library.
>> 6. Asm files are a nightmare on OSX.
>> A language should be there to solve problems, not create them :-)
>> Paulo Pinto wrote:
>>> Easy, just implement a small assembly function. Not everything has to be in the language.
>>> "Walter Bright" <newshound2 digitalmars.com> wrote in message news:i984lr$odj$3 digitalmars.com...
>>>> Walter Bright wrote:
>>>>> It's hard to see how to implement, say, a storage allocator with no pointer arithmetic.
>>>> Here's another one. Try implementing va_arg without pointer arithmetic.

--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
Oct 15 2010
Paulo Pinto:
> Still most modern languages are moving away from inline assembly.

Inline assembly is good for learning and teaching assembly programming too :-)

Today a good system language needs to be designed to minimize the need for inline asm (see D vector ops), but it's a good thing to have as a fall-back. I'd like the asm expressions & the pragma(allow_inline) of LDC.

Bye,
bearophile
Oct 15 2010
And to be abused as well.

I still remember having seen a C++ program in the MS-DOS days where the only C++ features were main() and the other function names. All function bodies were inline assembly. The developer had used the C++ compiler as a poor man's assembler. I would rather not see that type of code in D.

Not to mention that it makes portability even worse. Besides having several #ifdefs for different OSes, you also need them for different processor architectures.

Does D's inline assembly support the whole x86 instruction set? What processors besides x86 are supported? If I have to drop out to a real assembler for certain opcodes, then the gain of inline assembly is lost anyway.

--
Paulo

"bearophile" <bearophileHUGS lycos.com> wrote in message news:i99d48$9mj$1 digitalmars.com...
> Paulo Pinto:
>> Still most modern languages are moving away from inline assembly.
> Inline assembly is good for learning and teaching assembly programming too :-)
> Today a good system language needs to be designed to minimize the need for inline asm (see D vector ops), but it's a good thing to have as a fall-back. I'd like the asm expressions & the pragma(allow_inline) of LDC.
> Bye,
> bearophile
Oct 15 2010
Sounds like violent agreement to me.

Andrei

On 10/15/10 4:17 CDT, Paulo Pinto wrote:
> Still most modern languages are moving away from inline assembly. Even Microsoft has dropped inline assembly support for the 64-bit version of Visual C++, pointing developers to MASM.
> People will always complain no matter what. Just use the official assembler for the target platform.
> Personally, the last time I used inline assembly I was still targeting MS-DOS, a long time ago, and it is actually one of the features I don't like in D.
> --
> Paulo
> "Walter Bright"<newshound2 digitalmars.com> wrote in message news:i98ub5$2bk7$1 digitalmars.com...
>> Yeah, and I've done that. It doesn't work out as well as you say, nor is it that easy. Problems:
>> 1. You have to reimplement it for every platform and every memory model.
>> 2. For some systems, like Windows, there are a wide variety of assemblers. They all use slightly different syntax. Distributing an asm file means an *unending* stream of complaints from people who don't have an assembler or have a different one than yours.
>> 3. Getting all the boilerplate segment declarations right is a nuisance.
>> 4. Name mangling.
>> 5. Next your asm code all breaks when you want to recompile your app as a shared library.
>> 6. Asm files are a nightmare on OSX.
>> A language should be there to solve problems, not create them :-)
>> Paulo Pinto wrote:
>>> Easy, just implement a small assembly function. Not everything has to be in the language.
>>> "Walter Bright"<newshound2 digitalmars.com> wrote in message news:i984lr$odj$3 digitalmars.com...
>>>> Walter Bright wrote:
>>>>> It's hard to see how to implement, say, a storage allocator with no pointer arithmetic.
>>>> Here's another one. Try implementing va_arg without pointer arithmetic.
Oct 15 2010
Paulo Pinto wrote:
> Still most modern languages are moving away from inline assembly.

It's a pain to write an inline assembler and figure out how to integrate it with the rest of the compiler. I can see why compiler writers don't want to do it, and look for reasons not to. Most modern languages do not even generate code - they target the JVM or CLI.

> Even Microsoft has dropped inline assembly support for the 64-bit version of Visual C++, pointing developers to MASM.

I'd be curious as to their rationale.

> People will always complain no matter what. Just use the official assembler for the target platform.

Microsoft MASM has about 30 different incarnations, all accepting different syntax. It's a *constant* source of grief for customer support.

> Personally, the last time I used inline assembly I was still targeting MS-DOS, a long time ago, and it is actually one of the features I don't like in D.

I'd be forced to write a standalone assembler if D didn't have an inline assembler. In any case, the inline assembler in D is a substantial productivity booster for me for anything that needs assembler. The inline assembler is also quite ignorable, if you don't like it.
Oct 15 2010
Not sure about their rationale, but here is a Visual C++ team blog entry about it:

http://blogs.msdn.com/b/vcblog/archive/2007/10/18/new-intrinsic-support-in-visual-studio-2008.aspx

"Walter Bright" <newshound2 digitalmars.com> wrote in message news:i9a2t3$26pm$1 digitalmars.com...
> Paulo Pinto wrote:
>> Still most modern languages are moving away from inline assembly.
> It's a pain to write an inline assembler and figure out how to integrate it with the rest of the compiler. I can see why compiler writers don't want to do it, and look for reasons not to. Most modern languages do not even generate code - they target the JVM or CLI.
>> Even Microsoft has dropped inline assembly support for the 64-bit version of Visual C++, pointing developers to MASM.
> I'd be curious as to their rationale.
>> People will always complain no matter what. Just use the official assembler for the target platform.
> Microsoft MASM has about 30 different incarnations, all accepting different syntax. It's a *constant* source of grief for customer support.
>> Personally, the last time I used inline assembly I was still targeting MS-DOS, a long time ago, and it is actually one of the features I don't like in D.
> I'd be forced to write a standalone assembler if D didn't have an inline assembler. In any case, the inline assembler in D is a substantial productivity booster for me for anything that needs assembler. The inline assembler is also quite ignorable, if you don't like it.
Oct 16 2010
Paulo Pinto wrote:
> Not sure about their rationale, but here is a Visual C++ team blog entry about it: http://blogs.msdn.com/b/vcblog/archive/2007/10/18/new-intrinsic-support-in-visual-studio-2008.aspx

Thanks for the link. The user comments on it are less than kind about VC++ dropping inline asm.
Oct 16 2010
Nick Sabalausky:

> Then you're wasting cycles every iteration (by doing an extra addition and maybe an extra shift or even multiplication depending on T: i.e., (cast(ubyte*)myArray.ptr) + i * T.sizeof). That was a pretty common inner-loop optimization back in my C days.

With D sometimes array-based code is faster than pointer-based. With LDC they are usually equally efficient.

> Plus, I've never once done pointer arithmetic accidentally in D, so I don't see any safety to be gained from not allowing it.

I think they are trying to design a safer language. Pointer arithmetic is well known to be error-prone. (I have never asked to remove pointer arithmetic from D.)

>> - Turning x++; into statements seems harsh, but indeed it solves some problems. In practice in my D programs the ++ is often used as a statement, to avoid bugs.
>
> I've long been of the opinion that it should just be a statement. All it ever does as an expression, if anything, is obfuscate code. I've never once seen a case where it clarified anything.

In some cases it shortens the code a bit, but the price to pay for such shortening is some possible bugs. I think/hope D will turn the sub-expressions deterministic, so some expressions that contain ++ and function calls will be defined in D.

Bye,
bearophile
Oct 14 2010
On Thursday, October 14, 2010 16:49:58 bearophile wrote:
> Nick Sabalausky:
>> Plus, I've never once done pointer arithmetic accidentally in D, so I don't see any safety to be gained from not allowing it.
>
> I think they are trying to design a safer language. Pointer arithmetic is well known to be error-prone. (I have never asked to remove pointer arithmetic from D.)

There's nothing wrong with a language not having pointer arithmetic. It is an error-prone feature (hence why it's banned in SafeD) and many languages don't need it and don't have it. However, it's hard to see how a language can claim to be a systems programming language and not allow pointer arithmetic. I really need to sit down and take a good look at Go one of these days, but the more I hear about it, the less it looks like a systems programming language. They also clearly have a _very_ different approach from D, and I'd expect that the types of people who like Go wouldn't like D and vice versa. I still need to take a good look at it one of these days, though.

- Jonathan M Davis
Oct 14 2010
bearophile wrote:
> Nick Sabalausky:
>> Then you're wasting cycles every iteration (by doing an extra addition and maybe an extra shift or even multiplication depending on T: i.e., (cast(ubyte*)myArray.ptr) + i * T.sizeof). That was a pretty common inner-loop optimization back in my C days.
>
> With D sometimes array-based code is faster than pointer-based. With LDC they are usually equally efficient.

??? This makes no sense. The (ptr + i*T.sizeof) is an addressing mode on the x86, and comes at ZERO cost.

> I think they are trying to design a safer language. Pointer arithmetic is well known to be error-prone. (I have never asked to remove pointer arithmetic from D).

D has pointers that you cannot do arithmetic on - called references. The semantics are carefully designed so a function cannot return a reference to a local; this is so that such locals will not have to be put onto the garbage collected heap. Hence, references are usable in safe mode. Class references are also "pointers" that cannot have arithmetic on them.
Oct 14 2010
"Walter Bright" <newshound2 digitalmars.com> wrote in message news:i98frv$1dm2$1 digitalmars.com...bearophile wrote:Guess it's been way too long since I've touched x86 asm and my memory's warped :/ OTOH, not all platforms are x86 (but maybe that's still a common thing on other architectures).Nick Sabalausky:??? This makes no sense. The (ptr+i*T.sizeof) is an addressing mode on the x86, and comes at ZERO cost.Then you're wasting cycles every iteration (by doing an extra addition and maybe an extra shift or even multiplication depending on T: Ie, (cast(ubyte*)myArray.ptr) + i * T.sizeof). That was a pretty common inner-loop optimization back in my C days.With D sometimes array-based code is faster than pointer-based. With LDC they are usually equally efficient.
Oct 14 2010
Nick Sabalausky wrote:"Walter Bright" <newshound2 digitalmars.com> wrote in message news:i98frv$1dm2$1 digitalmars.com...Those hardware addressing modes are not there for the 16 bit x86, and dmd's optimizer has a lot of code to rewrite loops to avoid needing them (called loop induction variables). These rewrites speed things up on 16 bit code, but slow things down for 32 bit code, and so are disabled for 32 bit code. Write a simple loop, try it and see.bearophile wrote:Guess it's been way too long since I've touched x86 asm and my memory's warped :/ OTOH, not all platforms are x86 (but maybe that's still a common thing on other architectures).Nick Sabalausky:??? This makes no sense. The (ptr+i*T.sizeof) is an addressing mode on the x86, and comes at ZERO cost.Then you're wasting cycles every iteration (by doing an extra addition and maybe an extra shift or even multiplication depending on T: Ie, (cast(ubyte*)myArray.ptr) + i * T.sizeof). That was a pretty common inner-loop optimization back in my C days.With D sometimes array-based code is faster than pointer-based. With LDC they are usually equally efficient.
Oct 14 2010
"Walter Bright" <newshound2 digitalmars.com> wrote in message news:i98frv$1dm2$1 digitalmars.com...bearophile wrote:As long as T.sizeof is either 1, 2, 4, or 8 bytes.Nick Sabalausky:??? This makes no sense. The (ptr+i*T.sizeof) is an addressing mode on the x86, and comes at ZERO cost.Then you're wasting cycles every iteration (by doing an extra addition and maybe an extra shift or even multiplication depending on T: Ie, (cast(ubyte*)myArray.ptr) + i * T.sizeof). That was a pretty common inner-loop optimization back in my C days.With D sometimes array-based code is faster than pointer-based. With LDC they are usually equally efficient.
Oct 15 2010
On 10/15/2010 05:55 AM, Walter Bright wrote:
> D has pointers that you cannot do arithmetic on - called references. The semantics are carefully designed so a function cannot return a reference to a local, this is so that such locals will not have to be put onto the garbage collected heap. Hence, references are usable in safe mode.

I think the above statement needs clarification. Honestly, I don't understand how references to non-class objects are supposed to work in SafeD. Consider:

struct S
{
    int x;
}

static S s;

ref S foo()
{
    return s;
}

void bar()
{
    foo().x = 1;
    assert(s.x == 1); // ok, s updated

    auto s2 = foo();
    s2.x = 2;
    assert(s.x == 2); // not ok, we need to use a pointer as below

    auto s3 = &foo();
    s3.x = 3;
    assert(s.x == 3); // ok, s updated
}

Since pointers are not allowed in SafeD, any non-trivial operations on a referenced object are extremely awkward because you have to pile all of them around the call returning the reference. For example, if I want to update s and then pass it by reference to another function:

void baz(ref S s) {}

void bar()
{
    baz(foo(s).x = 1); // awkward
}

Of course, we can use tricks like a trusted Ref struct wrapping a pointer to the referenced object. But I don't know how such a struct can prevent one from returning locals:

struct Ref(T)
{
    T* p;
    this(ref T v) { p = &v; }
    ref T getRef() { return *p; }
    alias getRef this;
}

ref Ref!T byref(T)(ref T v)
{
    return Ref!T(v);
}

ref S foo()
{
    S s;
    return byref(s); // local successfully escaped
}

Please comment.
Oct 15 2010
On 10/15/2010 11:49 AM, Max Samukha wrote:

> ref Ref!T byref(T)(ref T v) { return Ref!T(v); }

should be

Ref!T byref(T)(ref T v) { return Ref!T(v); }
Oct 15 2010
On 10/15/2010 11:49 AM, Max Samukha wrote:

...and

> void bar()
> {
>     baz(foo(s).x = 1); // awkward
> }

should be

void bar()
{
    baz(foo().x = 1); // awkward
}
Oct 15 2010
Max Samukha wrote:
> Please comment.

The example relies on taking the address of a ref in a safe function. To close this hole, it appears that should be disallowed.
Oct 15 2010
On 10/15/2010 12:32 PM, Walter Bright wrote:
> The example relies on taking the address of a ref in a safe function. To close this hole, it appears that should be disallowed.

And disallowing it makes references not so useful.

What I like about Go's solution is that it is consistent with closures. When a group of locals escapes with a closure (that is, when the address of the local function using that group is taken), they are copied to the heap. When a local escapes by ref (that is, when the address of the local is taken), it is also copied to the heap.

What I don't like about Go's closures/addresses-of-locals and D's delegates is that stuff is heap-allocated implicitly and by default. Go has even gone (sorry) as far as allocating copies *every* time the address of a local is taken. That reminds me of the argument about "new" being necessary for classes because it makes the heap allocation explicit. It is difficult to say good-bye to "new", but at the same time we are somehow happy with implicitly allocated closures.
Oct 15 2010
On 10/15/2010 01:49 PM, Max Samukha wrote:Go has even gone (sorry) as far as allocating copies *every* time the address of a local is taken.I may be mistaken on that.
Oct 15 2010
Max Samukha wrote:On 10/15/2010 01:49 PM, Max Samukha wrote:It would require a rather sophisticated compiler to be able to not do that.Go has even gone (sorry) as far as allocating copies *every* time the address of a local is taken.I may be mistaken on that.
Oct 15 2010
Max Samukha wrote:
> On 10/15/2010 12:32 PM, Walter Bright wrote:
>> The example relies on taking the address of a ref in a safe function. To close this hole, it appears that should be disallowed.
>
> And disallowing it makes references not so useful.

I don't see why. They're useful enough.

> What I like about Go's solution is that it is consistent with closures. When a group of locals escape with a closure (that is when the address of the local function using that group is taken) they are copied to heap. When a local escape by ref (that is when the address of the local is taken), it is also copied to heap.

I understand how it works. There is a downside to it, though. In D2, closures get copied to the GC heap if there is a possibility of an escaping reference. A lot of people complain about this being unexpected hidden overhead. The trouble with "copy any ref'd local to the heap" automatically happening is the biggest advantage of passing by ref (efficiency) is automatically lost. Even if it does not escape, it is copied to the heap anyway, as you point out below.

> What I don't like about Go's closures/addresses-to-locals and D's delegates is that stuff is heap-allocated implicitly and by default. Go has even gone (sorry) as far as allocating copies *every* time the address of a local is taken.

Exactly. (Though D won't do the copy if it can prove that the delegate does not escape.)

> That reminds me of the argument about "new" being necessary for classes because it makes the heap allocation explicit. It is difficult to say good-bye to "new" but at the same time we are somehow happy with implicitly allocated closures.

I think that implicitly allocated closures are a lot less common than passing a local by reference.
Oct 15 2010
On Fri, 15 Oct 2010 21:34:39 +0400, Walter Bright <newshound2 digitalmars.com> wrote:
> There is a downside to it, though. In D2, closures get copied to the GC heap if there is a possibility of an escaping reference. A lot of people complain about this being unexpected hidden overhead.

IIRC there was some keyword (is that static?) that forces a closure NOT to allocate on heap.

I think I'll add an optional parameter that lists all the heap-allocated closures to ddmd (similar to how -vtls works).
Oct 15 2010
== Quote from Denis Koroskin (2korden gmail.com)'s article
> IIRC there was some keyword (is that static?) that forces a closure NOT to allocate on heap.

You're thinking of scope, and it works, but it's a huge hack. When &someNestedFunction is evaluated in the context of a function call and the parameter is scope, there is no heap allocation. I've been meaning to dump a function into std.typecons, called noHeap or something, that just takes a scope delegate and returns it, as a way to bypass heap allocations for other cases of taking the address of a nested function.

In general, I think this is a decent strategy: the implicit default behavior should be safe, easy to understand and easy to use even if it hurts performance. Optimizations should be carried out by the compiler when it can prove they won't affect code semantics, or by the programmer when he/she can prove they're necessary. I wouldn't mind having &someLocal heap-allocate like closures do (and be allowed in SafeD), as long as there's an easy way to explicitly prevent this. For example, we could use the scope trick like with closures, and have a scopedAddress function in std.typecons that lets you unsafely take the address of a stack variable.

> I think I'll add an optional parameter that lists all the heap-allocated closures to ddmd (similar to how -vtls works).

Vote++. This would be useful after you've identified some code as the bottleneck, to figure out why it's so much slower than it should be.
Oct 15 2010
On 10/15/2010 08:34 PM, Walter Bright wrote:
>> And disallowing it makes references not so useful.
>
> I don't see why. They're useful enough.

I might have exaggerated. But, for example, this use case:

struct S
{
    int x, y, z;
}

ref S foo();
void bar(ref S s);

void baz()
{
    auto s = &foo();
    s.x = 1;
    s.y = 2;
    s.z = 3;
    bar(*s);
}

will not be easy. One will have to use tricks like that unsafe Ref struct or to move the code accessing the referenced object to another function. Pretty awkward.

> I understand how it works.

Of course, you do!

> There is a downside to it, though. In D2, closures get copied to the GC heap if there is a possibility of an escaping reference. A lot of people complain about this being unexpected hidden overhead.

Yeah, I don't like these hidden allocations either.

> The trouble with "copy any ref'd local to the heap" automatically happening is the biggest advantage of passing by ref (efficiency) is automatically lost. Even if it does not escape, it is copied to the heap anyway, as you point out below.

Indeed. Why create a stack-allocated local if it is going to be copied to the heap anyway?

>> That reminds me of the argument about "new" being necessary for classes because it makes the heap allocation explicit. It is difficult to say good-bye to "new" but at the same time we are somehow happy with implicitly allocated closures.
>
> I think that implicitly allocated closures are a lot less common than passing a local by reference.

I have no idea how often closures are used. I use them rarely but some people do crazy things with them.
Oct 16 2010
On 2010-10-15 04:49:19 -0400, Max Samukha <spambox d-coding.com> said:

> static S s;
>
> ref S foo()
> {
>     return s;
> }

Pointers are allowed in SafeD; pointer arithmetic is not. Taking the address of a static or global variable should be allowed. If this doesn't compile, you should report it as a bug:

static S s;

S* foo()
{
    return &s;
}

There's nothing unsafe about taking the address of a static or global variable, since the pointer can never outlive the variable.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Oct 15 2010
On 10/15/2010 02:14 PM, Michel Fortin wrote:
> Pointers are allowed in SafeD; pointer arithmetic is not. Taking the address of a static or global variable should be allowed.

Ok, makes sense.
Oct 15 2010
On Fri, 15 Oct 2010 03:23:00 +0400, Nick Sabalausky <a a.a> wrote:

> "bearophile" <bearophileHUGS lycos.com> wrote in message news:i97utq$d7e$1 digitalmars.com...
>> - In my D programs I sometimes use pointers, but pointer arithmetic is indeed uncommon.
>
> If you're actually doing systems-level or high-performance work, it can be essential in certain cases, depending on how good the optimizer is. Loops like this are fairly typical (using 'for' instead of 'foreach'/'map'/etc for clarity):
>
>     T[] myArray = ...;
>     for(int i=0; i<max; i++)
>     {
>         myArray[i] // <- do something with that
>     }
>
> If the compiler isn't smart enough to turn that into this:
>
>     T[] myArray = ...;
>     auto end = myArray.ptr + max;
>     for(auto ptr = myArray.ptr; ptr<end; ptr++)
>     {
>         *ptr // <- do something with that
>     }
>
> Then you're wasting cycles every iteration (by doing an extra addition and maybe an extra shift or even multiplication depending on T: i.e., (cast(ubyte*)myArray.ptr) + i * T.sizeof). That was a pretty common inner-loop optimization back in my C days.
>
> And keep in mind, of course, real-world examples can be much more complex than that, so even if the compiler can handle trivial cases like this (I have no idea if it can, although using 'foreach' would probably make it easier - in some cases), it might not work for other cases. So unless the optimizer was known to be that good even in complex cases, I wouldn't want to be without pointer arithmetic. It's not needed often, but when it is needed it's indispensable (and still results in much more readable/portable code than delving down to asm). Plus, I've never once done pointer arithmetic accidentally in D, so I don't see any safety to be gained from not allowing it.
>
>> - Turning x++; into statements seems harsh, but indeed it solves some problems. In practice in my D programs the ++ is often used as a statement, to avoid bugs.
>
> I've long been of the opinion that it should just be a statement. All it ever does as an expression, if anything, is obfuscate code. I've never once seen a case where it clarified anything.
>
>> - Segmented stack: allows to avoid some stack overflows at the price of a bit of delay at calling functions.
>
> Seems a bad idea to force the overhead of that, but it should definitely be available as an option. Contrary to what Walter and Andrei seem to think, 32-bit systems are still very much alive and will be for quite a while longer. Especially when you remember that there are more computers out there than just desktops and servers. (Ex: When is a phone ever going to need 64-bit? Eventually maybe, but certainly not anytime soon.)

First, the compiler doing pointer arithmetic != the user doing pointer arithmetic. Second, I believe it's not about the danger of accidental pointer arithmetic usage; it's more about syntax (and the ambiguities it introduces). For example, I once suggested using pointer syntax for classes too, and provided tons of arguments for that (ranging from solving the tail-const issue to solving many language inconsistencies that are in D between struct/class syntax, and a lot more), plus a ton of additional functionality it could optionally provide if implemented. There was only one problem with that - pointer arithmetic syntax got in the way. E.g.:

    Foo* foo = new Foo();
    foo += 1; // is that operator overloading or pointer arithmetic?

    Foo foo = new Foo();
    foo += 1; // compare to current version

I still hope we deprecate pointer arithmetic and introduce another syntax for it for a next major D revision (i.e. D3).
Oct 14 2010
Nick Sabalausky wrote:
> Seems a bad idea to force the overhead of that, but it should definitely be available as an option. Contrary to what Walter and Andrei seem to think, 32-bit systems are still very much alive and will be for quite awhile longer. Especially when you remember that there are more computers out there than just desktops and servers. (Ex: When is a phone ever going to need 64-bit? Eventually maybe, but certainly not anytime soon.)

16 bit processors died around 15 years after the introduction of 32 bit ones, even for embedded systems. If history repeats itself, figure 32 bit ones have about 5 years to go!

As for what phones need, 15 years ago, who'd a thunk we'd be using phones today for internet browsing and playing feature movies? We were all simply enthralled by a phone that didn't have a cord attached to it and fit in your pocket.

I came up with a really good virtual memory system for 16 bit code. The only problem was, by the time I figured it out, the people that needed it had moved on to protected mode with hardware vm.

I feel D will be better off preparing for the coming 64 bit tsunami.
Oct 14 2010
Walter Bright wrote:
> 16 bit processors died around 15 years after the introduction of 32 bit ones, even for embedded systems. If history repeats itself, figure 32 bit ones have about 5 years to go!

Funny thing is we still use some 8-bit microcontrollers in some situations :) But you're right, as soon as we need something more we go directly to 32 bits without stopping in the 16 bits square.

Jerome

-- 
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Oct 16 2010
Jérôme M. Berger wrote:
> Funny thing is we still use some 8-bit microcontrollers in some situations :) But you're right, as soon as we need something more we go directly to 32 bits without stopping in the 16 bits square.

I can tell 16 bits is dead as a doornail because the 16 bit tools biz has dried up to nothing.
Oct 16 2010
Walter Bright:
> bearophile wrote:
>> With D sometimes array-based code is faster than pointer-based. With LDC they are usually equally efficient.
>
> ??? This makes no sense. The (ptr+i*T.sizeof) is an addressing mode on the x86, and comes at ZERO cost.

A little test program:

import std.c.stdio: printf;

size_t add1(size_t[] arr) {
    size_t sum = 0;
    for (size_t i = 0; i < arr.length; i++)
        sum += arr[i];
    return sum;
}

size_t add2(size_t[] arr) {
    size_t sum = 0;
    size_t* p = arr.ptr;
    for (size_t i = 0; i < arr.length; i++)
        sum += p[i];
    return sum;
}

size_t add3(size_t[] arr) {
    size_t sum = 0;
    size_t* p = arr.ptr;
    for (size_t i = 0; i < arr.length; i++)
        sum += *p++;
    return sum;
}

void main() {
    auto arr = new size_t[10_000_000];
    foreach (size_t i, ref el; arr)
        el = i;
    printf("%u\n", add1(arr));
    printf("%u\n", add2(arr));
    printf("%u\n", add3(arr));
}

---------------------
dmd 2.049, compiled with: dmd -O -release -inline

_D4test4add1FAkZk comdat
        push EAX
        xor ECX,ECX
        xor EDX,EDX
        push EBX
        cmp 0Ch[ESP],ECX
        je L28
        mov 4[ESP],EDX
        mov EDX,010h[ESP]
        mov EBX,EDX
        mov EAX,0Ch[ESP]
        mov EDX,4[ESP]
L1E:    add ECX,[EDX*4][EBX]
        inc EDX
        cmp EDX,0Ch[ESP]
        jb L1E
L28:    pop EBX
        mov EAX,ECX
        pop ECX
        ret 8

_D4test4add2FAkZk comdat
        xor ECX,ECX
        xor EDX,EDX
        cmp 4[ESP],ECX
        je L18
LA:     mov EAX,8[ESP]
        add ECX,[EDX*4][EAX]
        inc EDX
        cmp EDX,4[ESP]
        jb LA
L18:    mov EAX,ECX
        ret 8

_D4test4add3FAkZk comdat
        push EBX
        xor EDX,EDX
        xor ECX,ECX
        cmp 8[ESP],ECX
        mov EBX,0Ch[ESP]
        je L1D
LF:     mov EAX,EBX
        add EBX,4
        inc ECX
        add EDX,[EAX]
        cmp ECX,8[ESP]
        jb LF
L1D:    pop EBX
        mov EAX,EDX
        ret 8

This has an influence on running time too.

Bye,
bearophile
Oct 15 2010
On 10/14/10 17:06 CDT, bearophile wrote:
> Found through Reddit, talk slides by Rob Pike, "The Expressiveness of Go":
> http://go.googlecode.com/hg/doc/ExpressivenessOfGo.pdf
> http://www.reddit.com/r/programming/comments/dr6r4/talk_by_rob_pike_the_expressiveness_of_go_pdf/

It's a good deck, I think. I made a comment on reddit:
http://www.reddit.com/r/programming/comments/dr6r4/talk_by_rob_pike_the_expressiveness_of_go_pdf/c12dkde

Andrei
Oct 15 2010
"bearophile" <bearophileHUGS lycos.com> wrote in message news:i97utq$d7e$1 digitalmars.com...Found through Reddit, talk slides by Rob Pike, "The Expressiveness of Go": http://go.googlecode.com/hg/doc/ExpressivenessOfGo.pdf http://www.reddit.com/r/programming/comments/dr6r4/talk_by_rob_pike_the_expressiveness_of_go_pdf/ This time I think I have understood most of the contents of the slides :-) Few interesting quotations: From Page 18: There are pointers but no pointer arithmetic - pointers are important to performance, pointer arithmetic not. - although it's OK to point inside a struct. - important to control layout of memory, avoid allocation Increment/decrement (p++) are statements, not expressions. - no confusion about order of evaluation Addresses last as long as they are needed. - take the address of a local variable, the implementation guarantees the memory survives while it's referenced. No implicit numerical conversions (float to int, etc.). - C's "usual arithmetic conversions" are a minefield. From page 19 and 20: Constants are "ideal numbers": no size or sign, hence no L or U or UL endings. Arithmetic with constants is high precision. Only when assigned to a variable are they rounded or truncated to fit. A typed element in the expression sets the true type of the constant. From page 40: Goroutines have "segmented stacks": go f() starts f() executing concurrently on a new (small) stack. Stack grows and shrinks as needed. No programmer concern about stack size. No possibility for stack overflow. A couple of instructions of overhead on each function call, a huge improvement in simplicity and expressiveness. From page 46: The surprises you discover will be pleasant ones.I just hope they get serious enough about functional programming to gain some monads to go along with their "goroutines".
Oct 15 2010
On 10/15/10 16:25 CDT, Nick Sabalausky wrote:I just hope they get serious enough about functional programming to gain some monads to go along with their "goroutines".They should call them "gonads". Andrei
Oct 15 2010
On 10/15/10 17:34 CDT, Andrei Alexandrescu wrote:On 10/15/10 16:25 CDT, Nick Sabalausky wrote:Wait, that was your actual joke. Sighhhh... AndreiI just hope they get serious enough about functional programming to gain some monads to go along with their "goroutines".They should call them "gonads". Andrei
Oct 15 2010
Andrei Alexandrescu wrote:On 10/15/10 17:34 CDT, Andrei Alexandrescu wrote:I see we should invite JokeExplainer to the forums!On 10/15/10 16:25 CDT, Nick Sabalausky wrote:Wait, that was your actual joke. Sighhhh...I just hope they get serious enough about functional programming to gain some monads to go along with their "goroutines".They should call them "gonads". Andrei
Oct 15 2010
On 16/10/2010 00:15, Walter Bright wrote:Andrei Alexandrescu wrote:I didn't get it... :/ (Nick's joke that is) -- Bruno Medeiros - Software EngineerOn 10/15/10 17:34 CDT, Andrei Alexandrescu wrote:I see we should invite JokeExplainer to the forums!On 10/15/10 16:25 CDT, Nick Sabalausky wrote:Wait, that was your actual joke. Sighhhh...I just hope they get serious enough about functional programming to gain some monads to go along with their "goroutines".They should call them "gonads". Andrei
Nov 11 2010
On 11/11/10 22:56, Bruno Medeiros wrote:On 16/10/2010 00:15, Walter Bright wrote:Hi Bruno, It is an English language word play on sound-alike words. Google on: "define: gonads" I think Nick was suggesting that someone/something gets some "balls" though "ovaries" might not be out of the question also. :-) Trusting this explains well in your native language. Regards, JustinAndrei Alexandrescu wrote:I didn't get it... :/ (Nick's joke that is)On 10/15/10 17:34 CDT, Andrei Alexandrescu wrote:I see we should invite JokeExplainer to the forums!On 10/15/10 16:25 CDT, Nick Sabalausky wrote:Wait, that was your actual joke. Sighhhh...I just hope they get serious enough about functional programming to gain some monads to go along with their "goroutines".They should call them "gonads". Andrei
Nov 11 2010
Addendum: To be sure, I think I forgot to say that "monads" sounds like "gonads".
Nov 11 2010
On 11/11/2010 12:10, Justin Johansson wrote:On 11/11/10 22:56, Bruno Medeiros wrote:So Nick already had "gonads" in mind on that post, is that the case? -- Bruno Medeiros - Software EngineerOn 16/10/2010 00:15, Walter Bright wrote:Hi Bruno, It is an English language word play on sound-alike words. Google on: "define: gonads" I think Nick was suggesting that someone/something gets some "balls" though "ovaries" might not be out of the question also. :-) Trusting this explains well in your native language. Regards, JustinAndrei Alexandrescu wrote:I didn't get it... :/ (Nick's joke that is)On 10/15/10 17:34 CDT, Andrei Alexandrescu wrote:I see we should invite JokeExplainer to the forums!On 10/15/10 16:25 CDT, Nick Sabalausky wrote:Wait, that was your actual joke. Sighhhh...I just hope they get serious enough about functional programming to gain some monads to go along with their "goroutines".They should call them "gonads". Andrei
Nov 12 2010
"Bruno Medeiros" <brunodomedeiros+spam com.gmail> wrote in message news:ibjd5l$2pv$1 digitalmars.com...
> So Nick already had "gonads" in mind on that post, is that the case?

My intended joke: Google Go has "coroutines" that it calls "goroutines" (because "go" + "coroutines" == "goroutines"). So I applied the same cutesy naming to "monads": "go" + "monads" == "gonads". And like Justin said, "gonads" also means "testicles" (and sometimes "ovaries"), so it's a pun and a rather odd name for a programming language feature. And somewhat ironically, it *would* take some serious gonads to name a language feature "gonads". (In English, saying that something requires balls/gonads/nuts/etc. is a common slang way of saying it requires courage.)
Nov 23 2010
On 24/11/2010 01:37, Nick Sabalausky wrote:
> My intended joke: Google Go has "coroutines" that it calls "goroutines" (because "go" + "coroutines" == "goroutines"). So I applied the same cutesy naming to "monads": "go" + "monads" == "gonads". And like Justin said, "gonads" also means "testicles" (and sometimes "ovaries"), so it's a pun and a rather odd name for a programming language feature.
> [snip]

Ok, just checking, thanks for the clarification. (I'm sometimes a bit obtuse with things like this.)

> (In English, saying that something requires balls/gonads/nuts/etc. is a common slang way of saying it requires courage.)

Yeah, that I know already. :)

-- 
Bruno Medeiros - Software Engineer
Nov 24 2010
On 16/10/2010 9:34 AM, Andrei Alexandrescu wrote:
> On 10/15/10 17:34 CDT, Andrei Alexandrescu wrote:
>> On 10/15/10 16:25 CDT, Nick Sabalausky wrote:
>>> I just hope they get serious enough about functional programming to gain some monads to go along with their "goroutines".
>> They should call them "gonads". Andrei
> Wait, that was your actual joke. Sighhhh... Andrei

Coincidentally, the official mailing list for Go PL is known as go-nuts! :-)
Oct 15 2010
On 10/15/10 19:18 CDT, Justin Johansson wrote:
> Coincidentally, the official mailing list for Go PL is known as go-nuts! :-)

Speaking of which, I gave one more read this morning to the reddit discussion (http://www.reddit.com/r/programming/comments/dr6r4/talk_by_rob_pike_the_expressiveness_of_go_pdf/). Boy, that didn't Go well.

Andrei
Oct 16 2010
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message news:i9akvf$1jlq$3 digitalmars.com...
> On 10/15/10 16:25 CDT, Nick Sabalausky wrote:
>> I just hope they get serious enough about functional programming to gain some monads to go along with their "goroutines".
> They should call them "gonads". Andrei
Wait, that was your actual joke. Sighhhh...

Well it was a bit opaque. I was actually wondering if anyone would make the connection at all. :)
Oct 15 2010
On 10/15/2010 11:26 PM, Nick Sabalausky wrote:
> Well it was a bit opaque. I was actually wondering if anyone would make the connection at all. :)

Much obliged to play Captain Obvious' role.

Andrei
Oct 16 2010
(Catching up on some older posts, I was busy)

Walter:
> In any case, inline assembler in D is a substantial productivity booster for me for anything that needs assembler. The inline assembler is also quite ignorable, if you don't like it.

I like the inline assembly feature of D. But language features aren't ignorable: in the real world you often need to modify or fix code written by other people. This means that a programmer who doesn't know assembly may be forced to fix bugs in modules that contain functions with asm. So every language feature is not free; it has a cost.

Bye,
bearophile
Oct 17 2010
Assembly is vital for almost all CPU-bound applications. Making it inline just makes people's lives easier.

On 10/17/10 20:23, bearophile wrote:
> I like the inline assembly feature of D. But language features aren't ignorable [snip] So every language feature is not free, it has a cost.

-- 
Regards,
-- Clark
Oct 17 2010
bearophile wrote:
> Walter:
>> In any case, inline assembler in D is a substantial productivity booster for me for anything that needs assembler. The inline assembler is also quite ignorable, if you don't like it.
> I like the inline assembly feature of D. But language features aren't ignorable [snip] So every language feature is not free, it has a cost.

Its cost is a lot lower than writing the assembler in a separate asm file.
Oct 17 2010
Sorry, maybe that is just me, but that is not really an argument: if you want to build a rocket, you would hire capable people.

On Mon, 18 Oct 2010 03:23:21 +0300, bearophile <bearophileHUGS lycos.com> wrote:
> [snip]

-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
Oct 18 2010
"so" <so so.do> wrote in message news:op.vkrh77fb7dtt59 so-pc...
> Sorry maybe that is just me but that is not really an argument, if you want to build a rocket, you would hire capable people.

It's amazing how many software houses/departments don't do that. But of course, if they don't it's their own damn problem.
Oct 18 2010
Nick Sabalausky:
> It's amazing how many software houses/departments don't do that. But of course, if they don't it's their own damn problem.

They want low-salary programmers, so they will avoid languages that may lead to higher salaries. This means avoiding uncommon languages (where programmers are rarer) or languages that may require the ability to read (or even write) "harder code" (like inline assembly).

Bye,
bearophile
Oct 18 2010
On 18.10.2010 22:49, bearophile wrote:
> They want low-salary programmers, so they will avoid languages that may lead to higher salaries. This means avoiding uncommon languages (where programmers are rarer) or languages that may require the ability to read (or even write) "harder code" (like inline assembly).

This is one of the reasons why Java has become such a huge language in the IT world.
Oct 19 2010
On 19/10/2010 21:24, Paulo Pinto wrote:
> This is one of the reasons why Java has become such a huge language in the IT world.

Yeah, but to be fair, I work in a fully C++ shop and only 3 (maybe 4) of us out of 18 will *ever* write template code, even for really trivial stuff. In my experience, most C++ programmers just don't/can't get templates, and I very much doubt that awkward syntax is the root cause. If you are one of those people, why would you choose a language with templates? They are of no damn use to you.

-- 
My enormous talent is exceeded only by my outrageous laziness.
http://www.ssTk.co.uk
Oct 19 2010
Tue, 19 Oct 2010 21:30:44 +0100, div0 wrote:
> yeah but to be fair, I work in a fully C++ shop and only 3 (maybe 4) of us out of 18 will *ever* write template code. [snip] if you are one of those people why whould you chose a language with templates? they are off no dam use to you.

Templates are used for at least two different purposes: to provide 1) (generic) parametric polymorphism and 2) (generative) metaprogramming code. Often the parametric version is enough (e.g. simple uses of collections). The first case is "optimized" in many modern languages. For instance, in Scala polymorphic collections are rather simple to use:

    val l = List(1,2,3)               // list<int> l(1,2,3);
    println("The contents are: ")     // cout << "The contents are: ";
    println(l.mkString(" "))          // for (list<int>::iterator it = l.begin(); it != l.end(); it++)
                                      //     cout << *it << " ";
                                      // cout << endl;
    println("Squared: ")              // cout << "Squared: ";
    println(l.map(2 *).mkString(" ")) // for (list<int>::iterator it = l.begin(); it != l.end(); it++)
                                      //     cout << (*it)*(*it) << " ";
                                      // cout << endl;

Typical use cases don't require type annotations anywhere. The only problem with high-level languages is that they may in some cases put more pressure on the optimizations in the compiler. What's funny is that the Scala developer here "implicitly" used terribly complex templates behind the scenes. And it's as simple as writing in some toy language.

Overall, even novice developers are so expensive that you can often replace the lost efficiency with bigger hardware, which is cheaper than the extra development time would have been. This is many times the situation *now*; it might change when the large cloud servers run out of resources.
Oct 19 2010
Tue, 19 Oct 2010 22:55:31 +0000, retard wrote:
> println(l.map(2 *).mkString(" "))

Made a mistake here; the correct code should be:

    println(l.map(a => a*a).mkString(" "))
Oct 19 2010
retard Wrote:
> Templates are used for at least two different purposes: to provide 1) (generic) parametric polymorphism and 2) (generative) metaprogramming code. Often the parametric version is enough (e.g. simple uses of collections).

Complex C++/D collections are no simple generics. They have custom allocators and so forth. Study your homework, kid.

> The first case is "optimized" in many modern languages. For instance, in Scala polymorphic collections are rather simple to use:

Ha, you don't know anything of the Java VM, now do you? Type erasure removes all efficiency and makes your stupid code run at least twice as slow as real generics. On top of that come VM start-up time and other garbage collection costs. Your solution is screwed when put against real native C++/D metaprogramming.

[snip ugly Scala & C++]

> Typical use cases don't require type annotations anywhere. The only problem with high-level languages is that they may in some cases put more pressure on the optimizations in the compiler.

We want overly complex compilers with 10+ seconds run time? Hell no.

> What's funny is that the Scala developer here "implicitly" used terribly complex templates behind the scenes. And it's as simple as writing in some toy language.

Scala is just an academic toy.

> Overall, even novice developers are so expensive that you can often replace the lost efficiency with bigger hardware, which is cheaper than the extra development time would have been. This is many times the situation *now*; it might change when the large cloud servers run out of resources.

Slow code costs more in cloud services even today. You want cheap? You write in native code.
Oct 19 2010