digitalmars.D - Remove real type
- bearophile (15/15) Apr 21 2010 I suggest to remove the real type from D2 because:
- Bill Baxter (41/56) Apr 21 2010 I don't find it that useful either. Seems to me the only use is to
- Eric Poggel (4/34) Apr 21 2010 Just a personal preference, but I always disliked "secret" features of
- BCS (7/9) Apr 22 2010 There are some cases where you simply want to keep as much precision as ...
- Bill Baxter (15/22) Apr 22 2010 So what's the advice you would you give to Joe coder about when to use '...
- Steven Schveighoffer (15/36) Apr 22 2010 Most cases where real turns out a different result than double are
- BCS (7/20) Apr 22 2010 If you have no special reason to use float or double and are not IO or m...
- bearophile (4/8) Apr 22 2010 Take one of the little ray-tracers written in D from my site, test its r...
- Robert Jacques (5/19) Apr 22 2010 Ray-tracers are insanely memory IO bound, not compute bound, bearophile....
- bearophile (11/13) Apr 23 2010 It's more like 96 bits with LDC on Ubuntu.
- BCS (8/10) Apr 23 2010 Exactly, that is the one case where float and double are better, where l...
- Don (3/16) Apr 23 2010 A simple rule of thumb: if it's an array, use float or double. If it's
- Walter Bright (2/4) Apr 23 2010 I agree. The only reason to use float or double is to save on storage.
- bearophile (49/50) Apr 23 2010 A little D1 program, that I compile with LDC:
- Don (3/20) Apr 24 2010 It looks OK to me in this example.
- Mike Farnsworth (4/9) Apr 23 2010 There is another reason: performance, when combined with vectorized code...
- Walter Bright (3/16) Apr 23 2010 I agree that rendering is different, and likely is a quite different thi...
- strtr (4/9) Apr 23 2010 Portability will become more important as evo algos get used more.
- Walter Bright (2/6) Apr 24 2010 You've got a bad algorithm if increasing the precision breaks it.
- strtr (4/12) Apr 24 2010 No, I don't.
- BCS (8/20) Apr 24 2010 If you don't know what the algorithms is doing then the types used are p...
- strtr (7/28) Apr 24 2010 I'm not sure on when to define a type as the algorithm.
- Andrei Alexandrescu (5/20) Apr 24 2010 I'm not an expert in GA, but I can tell that a neural network that is
- strtr (5/28) Apr 24 2010 http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html#Tr...
- Andrei Alexandrescu (4/33) Apr 24 2010 Meh. You can't train using a gradient method unless the output is smooth...
- strtr (2/39) Apr 24 2010 Which was exactly why I mentioned evolutionary algorithms.
- Andrei Alexandrescu (8/47) Apr 24 2010 So are you saying there are neural networks with thresholds that are
- strtr (6/16) Apr 24 2010 I would love to see your results :)
- Andrei Alexandrescu (15/38) Apr 24 2010 You shouldn't care.
- strtr (12/57) Apr 25 2010 Why do you think this? Because I'm pretty sure I do care about this.
- larry coder (2/5) Apr 25 2010 Don't worry, lad. Not everyone has or even needs to have the skill to be...
- Andrei Alexandrescu (11/23) Apr 25 2010 On a different vein, I'm a fan of disclosing true identity of posters.
- strtr (4/15) Apr 25 2010 I would recommend Izhikevich's work.
- strtr (3/11) Apr 25 2010 And you could also not have happiness as a goal.
- Walter Bright (8/15) Apr 24 2010 You're going to have nothing but trouble with such a program. It won't b...
- strtr (5/24) Apr 24 2010 Most of the training will be done on the user's computer which would nul...
- Walter Bright (2/4) Apr 24 2010 It is standard IEEE 754 floating point.
- strtr (2/7) Apr 24 2010 Most math functions I see in std.math take reals as input. Should I use ...
- Walter Bright (6/15) Apr 24 2010 If you're content with inaccurate and non-portable answers. Don and I ha...
- BCS (7/9) Apr 24 2010 I'm not sure even x86 /requires/ bit perfect FP math across different m...
- strtr (3/12) Apr 24 2010 I'm not really searching for perfect/fixed math, but that the math is co...
- Walter Bright (3/5) Apr 24 2010 Yes, but you'll have to avoid the math functions if you're using C ones....
- strtr (5/11) Apr 24 2010 So as long as I only use floats (and doubles) and use the std.math funct...
- Walter Bright (6/19) Apr 24 2010 No, because the implementations of them vary from platform & compiler to
- strtr (5/29) Apr 25 2010 yay :)
- Walter Bright (3/6) Apr 25 2010 x = sin(a + b);
- strtr (3/11) Apr 25 2010 Ok, by vary you meant vary at compilation on different platforms. Not af...
- bearophile (4/6) Apr 23 2010 The rule of thumb from me is: if you care for performance never use real...
- BCS (5/12) Apr 24 2010 Has anyone actually tested to show that the vector ops are actually fast...
- BCS (8/20) Apr 22 2010 Ray tracing is very amenable to that sort of optimization, particularly i...
- Walter Bright (6/7) Apr 21 2010 D being a systems programming language should give access to the types
- abcd (5/14) Apr 21 2010 On the other hand, being an engineer, I use the reals all the time and
- strtr (2/7) Apr 21 2010 For me it's the exact opposite, reproducibility/portability is key. My p...
- Walter Bright (5/16) Apr 21 2010 With numerical work, I suggest getting the correct answer is preferable
- strtr (2/19) Apr 22 2010 The funny thing is that getting the exact correct answer is not that big...
- Walter Bright (13/36) Apr 22 2010 In my experience doing numerical work, loss of a "few bits" of precision...
- strtr (7/22) Apr 22 2010 My work is probably not classified as numerical work as I don't much car...
- Robert Jacques (4/13) Apr 21 2010 You do realize that the x86 floating point unit _always_ promotes floats...
- strtr (3/20) Apr 22 2010 Does this mean that float calculations are always off between intel and ...
- Lars T. Kyllingstad (4/24) Apr 22 2010 No, I believe AMD processors also use 80 bits of precision, since they
- strtr (2/10) Apr 22 2010 Thanks, hoped as much. Yay standards :)
- Bob Jones (5/20) Apr 22 2010 You can set the internal precision of the x87 unit to 32, 64 or 80 bits,...
- Walter Bright (7/10) Apr 22 2010 Despite those settings, the fpu still holds intermediate calculations to...
- Bob Jones (16/24) Apr 23 2010 Not true. If you load from memory it will keep the precision of what it
- Walter Bright (3/33) Apr 23 2010 Then there was something else that wasn't rounded down, as the Java peop...
- eles (24/24) Apr 22 2010 i am for high numerical acuracy (as high as possible).
- Robert Jacques (5/8) Apr 22 2010 This is called a quad in IEEE nomenclature. There's also a half. And you...
- eles (2/2) Apr 22 2010 no matter the name. it is the accuracy that matters.
- Robert Jacques (5/20) Apr 22 2010 P.S. An implication of this is that using any type other than real resul...
- dennis luehring (27/43) Apr 21 2010 reals in D.
- bearophile (14/31) Apr 22 2010 Modern hardware doesn't support it. I think hardware will win.
- Walter Bright (4/7) Apr 22 2010 You're right that the amount of 0 padding changes between language
- dennis luehring (10/18) Apr 22 2010 which modern x86 hardware do not support the 80bit fpu type?
- bearophile (8/11) Apr 22 2010 Current SSE registers, future AVX registers, all/most GPUs of the presen...
- Don (9/11) Apr 22 2010 The immediate problem is, that x87 does not fully support floats and
- BCS (23/51) Apr 22 2010 You can if you include error bounds. IIRC the compiler is free to do a l...
- bearophile (9/12) Apr 22 2010 I do FP numeric processing only once in a while, not much.
I suggest to remove the real type from D2 because:

- It's the only native type that does not have a specified length. I'm sold on the usefulness of having defined-length types; unspecified-length types cause some of the same troubles as C's integral types, whose sizes D has deliberately pinned down.

- Its length varies across operating systems: it can be 10, 12 or 16 bytes, or even just 8 if it gets implemented with doubles. The 12- and 16-byte variants waste space.

- Results computed by programs written in other languages are usually produced with just floats or doubles. If I want to test whether a D program gives the same results, I can't use reals in D.

- I don't see reals (long doubles in C) used much in other languages.

- If I compile a program with LDC that performs computations on FP values and look at the asm it produces, I see only SSE-related instructions, and SSE registers allow 32- and 64-bit FP only. I think the future AVX extensions don't support 79/80-bit floats either. GPUs are increasingly used to perform computations, and they don't support 80-bit floats. So I think 80-bit floats are going to become obsolete; five or ten years from now most numerical programs will probably not use 80-bit FP.

- Removing a built-in type makes the language and its manual a little simpler.

- I have used D1 for some time, but so far I have had a hard time finding a purpose for 80-bit FP numbers. The slight increase in precision is not that useful.

- D implementations are free to use doubles to implement the real type, so in a D program I can't rely on the little extra precision anyway, making reals even less useful.

- While I think 80-bit FP is not so useful, I think quadruple-precision FP (128 bit, currently usually implemented in software) can be useful in some situations (http://en.wikipedia.org/wiki/Quadruple_precision ). It might be useful for high-dynamic-range imaging too. LLVM will support the SPARC V9 quad-precision registers.

- The D2 spec says real is the "largest hardware implemented floating point size", which means it could be 128 bits in the future too. A numerical simulation designed to work with 80-bit FP numbers (or 64-bit FP numbers) can give strange results at 128-bit precision.

So I suggest to remove the real type; or eventually replace it with a fixed-size 128-bit floating-point type with the same name (implemented with software emulation where the hardware lacks it, like GCC's __float128: http://gcc.gnu.org/onlinedocs/gcc/Floating-Types.html ). In the far future, if CPU hardware supports FP numbers larger than 128 bits, a larger type can be added if necessary.

Bye,
bearophile
Apr 21 2010
I don't find it that useful either. Seems to me the only use is to preserve a few more bits in intermediate computations. But finite precision is finite precision. If you're running up against the limitations of doubles, then chances are it's not just a few more bits you need -- you either need to rethink your algorithm or go to variable-precision floats.

Maybe just rename 'real' to something less inviting, so that only the people who really need it will be tempted to use it. Like __real or __longdouble, or __tempfloat or something.

--bb

On Wed, Apr 21, 2010 at 3:38 PM, bearophile <bearophileHUGS lycos.com> wrote:
> I suggest to remove the real type from D2 because:
> [snip]
Apr 21 2010
On 4/21/2010 7:00 PM, Bill Baxter wrote:
> Maybe just rename 'real' to something less inviting, so that only the
> people who really need it will be tempted to use it. Like __real or
> __longdouble, or __tempfloat or something.

Just a personal preference, but I always disliked "secret" features of languages hidden behind underscores. I feel like features should be part of the standard spec or not there at all.
Apr 21 2010
Hello Bill,

> Seems to me the only use is to preserve a few more bits in
> intermediate computations.

There are some cases where you simply want to keep as much precision as you can. In those cases variable-precision floats aren't any better of a solution, as you would just turn them up as far as you can without unacceptable cost elsewhere.

--
... <IXOYE><
Apr 22 2010
On Thu, Apr 22, 2010 at 11:27 AM, BCS <none anon.com> wrote:
> There are some cases where you simply want to keep as much precision as
> you can. In those cases variable-precision floats aren't any better of a
> solution, as you would just turn them up as far as you can without
> unacceptable cost elsewhere.

So what's the advice you would give to Joe coder about when to use 'real'? My argument is that it is probably something like "if you don't know why you need it then you probably don't need it". And I suspect very few people actually need it.

But as is, it looks like something that one ought to use. It has the shortest name of all the floating point types. And the description sounds pretty good -- more precision, hardware supported. Wow! Why wouldn't I want that? But the fact is that most people will never need it. And bearophile listed some good reasons not to use it in general circumstances.

I think it is nice to have available, but I don't think it needs to occupy such a plum spot in the language namespace. It's kind of like a siren luring unwary coders to use it, when they would be better off staying away.

--bb
Apr 22 2010
On Thu, 22 Apr 2010 14:50:36 -0400, Bill Baxter <wbaxter gmail.com> wrote:
> So what's the advice you would give to Joe coder about when to use
> 'real'? My argument is that it is probably something like "if you don't
> know why you need it then you probably don't need it". And I suspect
> very few people actually need it.

Most cases where real turns out a different result than double are floating-point-error related. For example, something that converges with doubles may not converge with reals, resulting in an infinite loop and a perceived difference where reals are seen as 'bad'. However, the real problem is that reals have exposed a flaw in your algorithm whereby, under the right circumstances, the double version happened to work.

I also find bearophile's requirement to be able to check that a D program outputs the exact same floating point digits as a C program ridiculous.

IMO, there is no benefit to using doubles over reals except for storage space. And that is not a good reason to get rid of real. You see a benefit (more precision) for a cost (more storage). It's not a confusing concept.

-Steve
Apr 22 2010
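Steven's convergence point is easy to make concrete: a stopping rule based on exact equality can loop forever when extra intermediate precision is available, whereas a relative-tolerance test converges at any precision. A hypothetical C sketch (function name invented for illustration):

```c
#include <math.h>

/* Newton's method for sqrt(a). Stopping on a *relative* tolerance
   converges whether intermediates are carried in 64-bit or 80-bit
   precision; a stopping rule like (x == prev) is the kind of latent
   flaw that only surfaces when the precision changes. */
long double newton_sqrt(long double a) {
    long double x = a > 1.0L ? a : 1.0L;  /* crude initial guess */
    for (;;) {
        long double next = 0.5L * (x + a / x);
        if (fabsl(next - x) <= 1e-15L * x)  /* relative tolerance */
            return next;
        x = next;
    }
}
```

The tolerance, not the type, decides termination — which is what makes the double-only version of the exact-equality variant "work" by accident.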
Hello Bill,

> So what's the advice you would give to Joe coder about when to use
> 'real'?

If you have no special reason to use float or double and are not IO or memory-space bound, use real. Every machine I know the details of will use it in the FPU anyway, so the only general advantage to not using it is IO speed and data size.

--
... <IXOYE><
Apr 22 2010
BCS:
> If you have no special reason to use float or double and are not IO or
> memory space bound, use real.

Take one of the little ray-tracers written in D from my site, test its running speed compiled with LDC, replace doubles with reals, and time it again. You will see a nice performance difference. LDC FP performance is not good if it doesn't use SSE registers.

Bye,
bearophile
Apr 22 2010
On Thu, 22 Apr 2010 20:51:44 -0300, bearophile <bearophileHUGS lycos.com> wrote:
> Take one of the little ray-tracers written in D from my site, test its
> running speed compiled with LDC, replace doubles with reals, and time it
> again. You will see a nice performance difference.

Ray-tracers are insanely memory-IO bound, not compute bound, bearophile. So what you're seeing is the difference between 80 bits and 64 bits of memory traffic, not FP performance.
Apr 22 2010
Robert Jacques:
> Ray-tracers are insanely memory IO bound, not compute bound.

From my experiments those little ray-tracers are mostly bound by the time taken by the ray intersection tests.

> So what you're seeing is the difference between 80-bits and 64-bits of
> memory, not the FP performance.

It's more like 96 bits with LDC on Ubuntu. Even if you are right, real-life programs often need to process good amounts of memory, so using reals makes them slower.

On my site there are many benchmarks, not just ray-tracers. The ancient Whetstone benchmark is not I/O or memory bound:
  With reals:   1988 MIPS
  With doubles: 2278 MIPS
ldc -O3 -release -inline
Compiled with the daily build of LDC, Intel Celeron CPU 2.3 GHz, 32-bit Ubuntu.

Bye,
bearophile
Apr 23 2010
Hello bearophile,

> Even if you are right, real-life programs often need to process good
> amounts of memory, so using reals they are slower.

Exactly, that is the one case where float and double are better: where large volumes of data need to be processed, normally in a regular fashion. For cases where small amounts of data are processed and where there is little opportunity to use vectorization, they have little or no advantage over real. And I suspect that both kinds of code occur with some regularity.

--
... <IXOYE><
Apr 23 2010
BCS wrote:
> Exactly, that is the one case where float and double are better, where
> large volumes of data need to be processed, normally in a regular
> fashion. For cases where small amounts of data are processed and where
> there is little opportunity to use vectorization, they have little or
> no advantage over real.

A simple rule of thumb: if it's an array, use float or double. If it's not, use real.
Apr 23 2010
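Don's rule of thumb translates directly to C (`long double` standing in for D's `real`). A hypothetical sketch: bulk data stays compact, the lone scalar intermediate uses the widest type:

```c
#include <stddef.h>

/* Bulk storage stays float (compact, cache-friendly, vectorizable);
   the running sum -- a single scalar -- accumulates in long double so
   rounding error in the intermediate doesn't grow with array length. */
double sum_floats(const float *a, size_t n) {
    long double acc = 0.0L;
    for (size_t i = 0; i < n; i++)
        acc += a[i];
    return (double)acc;  /* narrow only at the end */
}
```

This follows the thread's consensus: float/double for arrays (storage and bandwidth), the extended type for scalar intermediates (precision is free in the FPU).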
Don wrote:
> A simple rule of thumb: if it's an array, use float or double. If it's
> not, use real.

I agree. The only reason to use float or double is to save on storage.
Apr 23 2010
Walter Bright:
> I agree. The only reason to use float or double is to save on storage.

A little D1 program, that I compile with LDC:

import tango.stdc.stdio: printf;
import tango.stdc.stdlib: atof;
alias real FP;
void main() {
    FP x = atof("1.5");
    FP y = atof("2.5");
    FP xy = x * y;
    printf("%lf\n", xy);
}

ldc -O3 -release -inline -output-s temp.d

FP = double:

_Dmain:
    subl $36, %esp
    movl $.str, (%esp)
    call atof
    movl $.str1, (%esp)
    fstpl 24(%esp)
    call atof
    fstpl 16(%esp)
    movsd 24(%esp), %xmm0
    mulsd 16(%esp), %xmm0
    movsd %xmm0, 4(%esp)
    movl $.str2, (%esp)
    call printf
    xorl %eax, %eax
    addl $36, %esp
    ret $8

-------------------------

FP = real:

_Dmain:
    subl $28, %esp
    movl $.str, (%esp)
    call atof
    fstpt 16(%esp)
    movl $.str1, (%esp)
    call atof
    fldt 16(%esp)
    fmulp %st(1)
    fstpt 4(%esp)
    movl $.str2, (%esp)
    call printf
    xorl %eax, %eax
    addl $28, %esp
    ret $8

If you use the real type you are forced to use the x87 FPU, which is very inefficient when used by LDC.

Bye,
bearophile
Apr 23 2010
bearophile wrote:
> A little D1 program, that I compile with LDC:
> [snip]
> If you use the real type you are forced to use the x87 FPU, that is
> very inefficient if used by LDC.

Here's the only point in the code where there's a difference:

FP = double:
    fstpl 16(%esp)
    movsd 24(%esp), %xmm0
    mulsd 16(%esp), %xmm0
    movsd %xmm0, 4(%esp)

FP = real:
    fldt 16(%esp)
    fmulp %st(1)
    fstpt 4(%esp)

It looks OK to me in this example.
Apr 24 2010
Walter Bright Wrote:
> I agree. The only reason to use float or double is to save on storage.

There is another reason: performance, when combined with vectorized code. If I use 4 32-bit floats to represent my vectors, points, etc. in my ray tracer, I can stuff them into an SSE register and use intrinsics to really, *really* speed it up. Especially if I use the sum-of-products / structure-of-arrays form for packetizing the data. Now, I realize this is not necessarily possible with D2 currently, but it's not inconceivable that some D2 compiler would get that capability in the relatively near future.

If I instead use 8-byte floats, I now have to double my operations and thus much of my processing time (due to only being able to put 2 items into each SSE register). If I use reals, well, I get the x86 FPU, which seriously hampers performance. And when it comes to rendering, performance is a very, very big deal (even in production/offline rendering).

-Mike
Apr 23 2010
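The structure-of-arrays form Mike mentions keeps each coordinate contiguous, so four rays' x components fit one 128-bit SSE register. A hypothetical C sketch (type and field names invented); with plain loops like this, compilers can auto-vectorize without explicit intrinsics:

```c
/* Structure-of-arrays: x[0..3] are adjacent in memory, so one 128-bit
   load fetches the x component of four rays at once. With 8-byte
   doubles only two lanes would fit per register, doubling the number
   of operations -- Mike's point. */
typedef struct { float x[4], y[4], z[4]; } Vec3x4;

void add4(Vec3x4 *r, const Vec3x4 *a, const Vec3x4 *b) {
    for (int i = 0; i < 4; i++) {   /* vectorizable: stride-1, no deps */
        r->x[i] = a->x[i] + b->x[i];
        r->y[i] = a->y[i] + b->y[i];
        r->z[i] = a->z[i] + b->z[i];
    }
}
```

An 80-bit `real` has no place in this layout at all: it doesn't pack into SIMD lanes, which is why vectorized renderers stick to float.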
Mike Farnsworth wrote:
> There is another reason: performance, when combined with vectorized
> code.

I agree that rendering is different, and likely is a quite different thing than numerical analysis.
Apr 23 2010
Walter Bright Wrote:
> Don wrote:
> > A simple rule of thumb: if it's an array, use float or double. If
> > it's not, use real.
> I agree. The only reason to use float or double is to save on storage.

Portability will become more important as evo algos get used more, especially in combination with threshold functions. The computer will generate/optimize all input/intermediate values itself, and executing the program on higher-precision machines might give totally different outputs.
Apr 23 2010
strtr wrote:
> Portability will become more important as evo algos get used more,
> especially in combination with threshold functions. The computer will
> generate/optimize all input/intermediate values itself, and executing
> the program on higher-precision machines might give totally different
> outputs.

You've got a bad algorithm if increasing the precision breaks it.
Apr 24 2010
Walter Bright Wrote:
> strtr wrote:
> > Portability will become more important as evo algos get used more,
> > especially in combination with threshold functions.
> You've got a bad algorithm if increasing the precision breaks it.

No, I don't. All algorithms using threshold functions which have been generated using evolutionary algorithms will break when you change the precision. That is, you will need to retrain them. The point of most of these algorithms (e.g. neural networks) is that you don't know what is happening inside them.
Apr 24 2010
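The precision sensitivity strtr describes can be shown with a hard threshold: when a weighted sum lands within rounding error of the cutoff, the output flips depending on how many bits the intermediate carries. A contrived but deterministic sketch (function names hypothetical):

```c
/* 2^24 + 1 is not representable as a float: the sum rounds back down
   to 2^24 (round-to-nearest-even) and the unit fails to fire. The same
   inputs summed at double precision keep the +1 and exceed the
   threshold -- the unit fires. Identical "network", different output. */
int fires_float(float weight, float input, double threshold) {
    float s = weight + input;   /* 16777216.0f + 1.0f -> 16777216.0f */
    return s > threshold;
}

int fires_double(double weight, double input, double threshold) {
    double s = weight + input;  /* 16777217.0, exact at double precision */
    return s > threshold;
}
```

With weight = 2^24, input = 1, threshold = 2^24, the float version returns 0 and the double version returns 1 — exactly the kind of flip that forces retraining when a trained network is run at a different precision.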
Hello strtr,

> Walter Bright Wrote:
> > You've got a bad algorithm if increasing the precision breaks it.
> No, I don't. All algorithms using threshold functions which have been
> generated using evolutionary algorithms will break when you change the
> precision.

If you don't know what the algorithm is doing, then the types used are part of the algorithm. OTOH, some would argue that Walter is still right by saying that if you don't know what is happening, then you've got a bad algorithm. However you cut it, these cases are by far the minority.

--
... <IXOYE>
Apr 24 2010
BCS Wrote:
> If you don't know what the algorithm is doing then the types used are
> part of the algorithm.

I'm not sure when to define a type as part of the algorithm.
1 / 3 = .33
Is the type now part of the algorithm?

> OTOH, some would argue that Walter is still right by saying that if
> you don't know what is happening, then you've got a bad algorithm.

Yes, all algorithms created by genetic programming are bad ;) And your brain is one big bad algorithm as well, of course..

> However you cut it, these cases are by far the minority.

But growing.
Apr 24 2010
Hello strtr,

> Yes, all algorithms created by genetic programming are bad ;)

Note I didn't say what I thought. (As it happens, I think GA is only valid as a last resort.)

> And your brain is one big bad algorithm as well, of course..

Just because I don't understand an algo doesn't imply that no one does. As for the brain, there are people who don't consider the brain the result of anything remotely like GA. But this is not the place to argue that one...

> But growing.

Yeah, but (I hope) they will never come anywhere near the majority.

--
... <IXOYE><
Apr 24 2010
BCS Wrote:Hello Strtr,I was talking about GP specifically, but talking about GA in general I would say it could be a great beginning in a lot of scientific research I've seen. Time and time again intuition has been proven a bad gamble in those situations. Especially with the enormous amount of information streaming out of most scientific devices you don't want to manually sift through all the data.BCS Wrote:Note I didn't say what I thought. (As it happens, I think GA is only valid as a last resort.)OTOH, some would argue that Walter is still right by saying that if you don't know what is happening, then you've got a bad algorithm.Yes, all algorithms created by genetic programming are bad ;)What I meant was that I know of nobody who would claim they understand all that is happening in the brain, thus according to your statement making it a bad algorithm. Regardless of whether there is a GA basis or not.And your brain is one big bad algorithm as well of course..Just because I don't understand an algo doesn't imply that no one does. As for the brain, there are people who don't consider the brain the result of anything remotely like GA. But this is not the place to argue that one...A majority wouldn't be necessary for such portability to be a valid issue. I think D is in general an awesome language for artificial intelligence, which might hint that the percentage of such D users could become significant.Yeah, but (I hope) they will never come anywhere near the majority.However you cut it, these cases are by far the minority.But growing.
Apr 24 2010
On 04/24/2010 12:52 PM, strtr wrote:Walter Bright Wrote:I'm not an expert in GA, but I can tell that a neural network that is dependent on precision is badly broken. Any NN's transfer function must be smooth. Andreistrtr wrote:No, I don't. All algorithms using threshold functions which have been generated using evolutionary algorithms will break by changing the precision. That is, you will need to retrain them. The point of most of these algorithms(eg. neural networks) is that you don't know what is happening in it.Portability will become more important as evo algos get used more. Especially in combination with threshold functions. The computer will generate/optimize all input/intermediate values itself and executing the program on higher precision machines might give totally different outputs.You've got a bad algorithm if increasing the precision breaks it.
Apr 24 2010
Andrei Alexandrescu Wrote:On 04/24/2010 12:52 PM, strtr wrote:How can you tell?Walter Bright Wrote:I'm not an expert in GA, but I can tell that a neural network that is dependent on precision is badly broken.strtr wrote:No, I don't. All algorithms using threshold functions which have been generated using evolutionary algorithms will break by changing the precision. That is, you will need to retrain them. The point of most of these algorithms (e.g. neural networks) is that you don't know what is happening in it.Portability will become more important as evo algos get used more. Especially in combination with threshold functions. The computer will generate/optimize all input/intermediate values itself and executing the program on higher precision machines might give totally different outputs.You've got a bad algorithm if increasing the precision breaks it.Any NN's transfer function must be smooth.http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html#Transfer%20Function It wasn't for nothing I mentioned threshold functions. Especially in the more complex spiking neural networks based on dynamical systems, thresholds are kind of important.
Apr 24 2010
On 04/24/2010 04:30 PM, strtr wrote:Andrei Alexandrescu Wrote:Meh. You can't train using a gradient method unless the output is smooth (infinitely derivable). AndreiOn 04/24/2010 12:52 PM, strtr wrote:How can you tell?Walter Bright Wrote:I'm not an expert in GA, but I can tell that a neural network that is dependent on precision is badly broken.strtr wrote:No, I don't. All algorithms using threshold functions which have been generated using evolutionary algorithms will break by changing the precision. That is, you will need to retrain them. The point of most of these algorithms(eg. neural networks) is that you don't know what is happening in it.Portability will become more important as evo algos get used more. Especially in combination with threshold functions. The computer will generate/optimize all input/intermediate values itself and executing the program on higher precision machines might give totally different outputs.You've got a bad algorithm if increasing the precision breaks it.Any NN's transfer function must be smooth.http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html#Transfer%20Function It wasn't for nothing I mentioned threshold functions Especially in the more complex spiking neural networks bases on dynamical systems, thresholds are kind of important.
Apr 24 2010
Andrei Alexandrescu Wrote:On 04/24/2010 04:30 PM, strtr wrote:Which was exactly why I mentioned evolutionary algorithms.Andrei Alexandrescu Wrote:Meh. You can't train using a gradient method unless the output is smooth (infinitely derivable).On 04/24/2010 12:52 PM, strtr wrote:How can you tell?Walter Bright Wrote:I'm not an expert in GA, but I can tell that a neural network that is dependent on precision is badly broken.strtr wrote:No, I don't. All algorithms using threshold functions which have been generated using evolutionary algorithms will break by changing the precision. That is, you will need to retrain them. The point of most of these algorithms(eg. neural networks) is that you don't know what is happening in it.Portability will become more important as evo algos get used more. Especially in combination with threshold functions. The computer will generate/optimize all input/intermediate values itself and executing the program on higher precision machines might give totally different outputs.You've got a bad algorithm if increasing the precision breaks it.Any NN's transfer function must be smooth.http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html#Transfer%20Function It wasn't for nothing I mentioned threshold functions Especially in the more complex spiking neural networks bases on dynamical systems, thresholds are kind of important.
Apr 24 2010
On 04/24/2010 05:26 PM, strtr wrote:Andrei Alexandrescu Wrote:So are you saying there are neural networks with thresholds that are trained using evolutionary algorithms instead of e.g. backprop? I found this: https://docs.google.com/viewer?url=http://www.cs.rutgers.edu/~mlittman/courses/ml03/iCML03/papers/batchis.pdf which does seem to support the point. I'd have to give it a closer look to see whether precision would affect training. AndreiOn 04/24/2010 04:30 PM, strtr wrote:Which was exactly why I mentioned evolutionary algorithms.Andrei Alexandrescu Wrote:Meh. You can't train using a gradient method unless the output is smooth (infinitely derivable).On 04/24/2010 12:52 PM, strtr wrote:How can you tell?Walter Bright Wrote:I'm not an expert in GA, but I can tell that a neural network that is dependent on precision is badly broken.strtr wrote:No, I don't. All algorithms using threshold functions which have been generated using evolutionary algorithms will break by changing the precision. That is, you will need to retrain them. The point of most of these algorithms(eg. neural networks) is that you don't know what is happening in it.Portability will become more important as evo algos get used more. Especially in combination with threshold functions. The computer will generate/optimize all input/intermediate values itself and executing the program on higher precision machines might give totally different outputs.You've got a bad algorithm if increasing the precision breaks it.Any NN's transfer function must be smooth.http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html#Transfer%20Function It wasn't for nothing I mentioned threshold functions Especially in the more complex spiking neural networks bases on dynamical systems, thresholds are kind of important.
Apr 24 2010
Andrei Alexandrescu Wrote:So are you saying there are neural networks with thresholds that are trained using evolutionary algorithms instead of e.g. backprop? I found this:The moment a network is just a bit recurrent, any gradient descent algo will be a hell.https://docs.google.com/viewer?url=http://www.cs.rutgers.edu/~mlittman/courses/ml03/iCML03/papers/batchis.pdf which does seem to support the point. I'd have to give it a closer look to see whether precision would affect training.I would love to see your results :) But even in the basic 3 layer sigmoid network the question is: Will two outputs which are exactly the same(for a certain input) stay the same if you change the precision. When the calculations leading up to the two outputs are totally different ( for instance fully dependent on separated subsets of the input; separated paths), changing the precision could influence them differently leading to different outputs ?
Apr 24 2010
On 04/24/2010 07:21 PM, strtr wrote:Andrei Alexandrescu Wrote:which does seem to support the point. I'd have to give it a closer lookSo are you saying there are neural networks with thresholds that are trained using evolutionary algorithms instead of e.g. backprop? I found this:The moment a network is just a bit recurrent, any gradient descent algo will be a hell.https://docs.google.com/viewer?url=http://www.cs.rutgers.edu/~mlittman/courses/ml03/iCML03/papers/batchis.pdfYou shouldn't care.to see whether precision would affect training.I would love to see your results :) But even in the basic 3 layer sigmoid network the question is: Will two outputs which are exactly the same(for a certain input) stay the same if you change the precision.When the calculations leading up to the two outputs are totally different ( for instance fully dependent on separated subsets of the input; separated paths), changing the precision could influence them differently leading to different outputs ?I'm not sure about that. Fundamentally all learning relies on some smoothness assumption - at a minimum, continuity of the transfer function (small variation in input leads to small variation in output). I'm sure certain oddities could be derived from systems that impose discontinuities, but by and large I think those aren't all that interesting. The case you mention above involves a NN making a different end discrete classification decision because numeric vagaries led to some threshold being met or not. I have certainly seen that happening - even changing the computation method (e.g. unrolling loops) will lead to different individual results. But that doesn't matter; statistically the neural net will behave the same. Andrei
Apr 24 2010
Andrei Alexandrescu Wrote:On 04/24/2010 07:21 PM, strtr wrote:Why do you think this? Because I'm pretty sure I do care about this. Part of my research involves trained networks making only a few decisions and those decisions should stay the same for all users.Andrei Alexandrescu Wrote:which does seem to support the point. I'd have to give it a closer lookSo are you saying there are neural networks with thresholds that are trained using evolutionary algorithms instead of e.g. backprop? I found this:The moment a network is just a bit recurrent, any gradient descent algo will be a hell.https://docs.google.com/viewer?url=http://www.cs.rutgers.edu/~mlittman/courses/ml03/iCML03/papers/batchis.pdfYou shouldn't care.to see whether precision would affect training.I would love to see your results :) But even in the basic 3 layer sigmoid network the question is: Will two outputs which are exactly the same (for a certain input) stay the same if you change the precision.No. You could maybe say you want small variations in the network to lead to small variations in the output. But I wouldn't even limit myself to those systems. Almost anything a bit more complex than the standard feed forward network can magnify small changes and even the standard network relies on large differences between the weights; what is a small change for one input might be an enormous change for another.When the calculations leading up to the two outputs are totally different (for instance fully dependent on separated subsets of the input; separated paths), changing the precision could influence them differently leading to different outputs?I'm not sure about that. 
Fundamentally all learning relies on some smoothness assumption - at a minimum, continuity of the transfer function (small variation in input leads to small variation in output).I'm sure certain oddities could be derived from systems that impose discontinuities, but by and large I think those aren't all that interesting.A lot of the more recent research is done in spiking neural networks; dynamical systems with lots of bifurcations. I wouldn't say those are not that interesting. But then again, who am I? :PThe case you mention above involves an NN making a different end discrete classification decision because numeric vagaries led to some threshold being met or not.No, the numeric discrepancy I suggested would lie in the different rounding of the calculations: 1.2 * 3 = x1; 6/3 + 3.2/2 = x2. If for a certain precision x1 == x2, will this then hold for all precisions?I have certainly seen that happening - even changing the computation method (e.g. unrolling loops) will lead to different individual results.I don't care about that, only portability after compilation is what I'm after.But that doesn't matter; statistically the neural net will behave the same.As I said, statistically is not what I am after and in practice a NN will barely ever get such nice normal inputs that statistics can say anything about the workings of it.
Apr 25 2010
strtr Wrote:Don't worry, lad. Not everyone has or even needs to have the skill to be an expert in all fields of computer science and make such bold statements, only a few of us (excluding me, naturally). You can also live a happy life as an average or lead developer without ascending to the demigod level.I'm sure certain oddities could be derived from systems that impose discontinuities, but by and large I think those aren't all that interesting.A lot of the more recent research is done in spiking neural networks; dynamical systems with lots of bifurcations. I wouldn't say those are not that interesting. But then again, who am I? :P
Apr 25 2010
On 04/25/2010 11:50 AM, larry coder wrote:strtr Wrote:On a different vein, I'm a fan of disclosing true identity of posters. I've staunchly done that ever since my first post on the Usenet, and have never been sorry. When I attended my first conference I was already notorious following my posts on comp.lang.c++.moderated. It has definitely helped my career. People who end up making solid contributions to D do end up mentioning their identity, and it would be very nice to e.g. take a look at strtr's work on spiking neural networks (of which I know nothing about) so I get better insights into what (s)he's talking about. AndreiDon't worry, lad. Not everyone has or even needs to have the skill to be an expert in all fields of computer science and make such bold statements, only a few of us (excluding me, naturally). You can also live a happy life as an average or lead developer without ascending to the demigod level.I'm sure certain oddities could be derived from systems that impose discontinuities, but by and large I think those aren't all that interesting.A lot of the more recent research is done in spiking neural networks; dynamical systems with lots of bifurcations. I wouldn't say those are not that interesting. But then again, who am I? :P
Apr 25 2010
Andrei Alexandrescu Wrote:On a different vein, I'm a fan of disclosing true identity of posters. I've staunchly done that ever since my first post on the Usenet, and have never been sorry. When I attended my first conference I was already notorious following my posts on comp.lang.c++.moderated. It has definitely helped my career. People who end up making solid contributions to D do end up mentioning their identity, and it would be very nice to e.g. take a look at strtr's work on spiking neural networks (of which I know nothing about) so I get better insights into what (s)he's talking about.I would recommend Izhikevich's work. http://www.izhikevich.org/publications/dsn/index.htm Chapter one is good enough to get a feel for the problems involved.
Apr 25 2010
larry coder Wrote:strtr Wrote:Which was why Andrei and I were having a discussion. He is the expert on the D language and I wanted to correct some of his harsh statements on A.I. because that is where I use D.Don't worry, lad. Not everyone has or even needs to have the skill to be an expert in all fields of computer science and make such bold statements, only a few of us (excluding me, naturally).I'm sure certain oddities could be derived from systems that impose discontinuities, but by and large I think those aren't all that interesting.A lot of the more recent research is done in spiking neural networks; dynamical systems with lots of bifurcations. I wouldn't say those are not that interesting. But then again, who am I? :PYou can also live a happy life as an average or lead developer without ascending to the demigod level.And you could also not have happiness as a goal.
Apr 25 2010
strtr wrote:Walter Bright Wrote:You're going to have nothing but trouble with such a program. It won't be portable even on Java, and it may also exhibit different behavior based on compiler switch settings. It's like relying on the lens in your camera to be of poor quality. Can you imagine going to the camera store and saying "I don't want the newer, high quality lenses, I want your old fuzzy one!" ? I suggest instead using fixed point arithmetic with a 64 bit integer type.You've got a bad algorithm if increasing the precision breaks it.No, I don't. All algorithms using threshold functions which have been generated using evolutionary algorithms will break by changing the precision. That is, you will need to retrain them. The point of most of these algorithms (e.g. neural networks) is that you don't know what is happening in it.
Apr 24 2010
Walter Bright Wrote:strtr wrote:Most of the training will be done on the user's computer which would nullify all these problems. Only when someone wants to use their trained program on another computer might problems arise.Walter Bright Wrote:You're going to have nothing but trouble with such a program. It won't be portable even on Java, and it may also exhibit different behavior based on compiler switch settings.You've got a bad algorithm if increasing the precision breaks it.No, I don't. All algorithms using threshold functions which have been generated using evolutionary algorithms will break by changing the precision. That is, you will need to retrain them. The point of most of these algorithms(eg. neural networks) is that you don't know what is happening in it.It's like relying on the lense in your camera to be of poor quality. Can you imagine going to the camera store and saying "I don't want the newer, high quality lenses, I want your old fuzzy one!" ? I suggest instead using fixed point arithmetic with a 64 bit integer type.Is there no way to stay within float standards? It only needs to be portable over x86
Apr 24 2010
strtr wrote:Is there no way to stay within float standards? It only needs to be portable over x86It is standard IEEE 754 floating point.
Apr 24 2010
Walter Bright Wrote:strtr wrote:Most math functions I see in std.math take reals as input. Should I use the C variants instead?Is there no way to stay within float standards? It only needs to be portable over x86It is standard IEEE 754 floating point.
Apr 24 2010
strtr wrote:Walter Bright Wrote:If you're content with inaccurate and non-portable answers. Don and I have discovered that many C standard library implementations don't even come up to the precision of the parameter types. If you're relying on the various C standard library functions to give exactly the same answers, you'll be disappointed :-(strtr wrote:Most math functions I see in std.math take reals as input. Should I use the C variants in stead?Is there no way to stay within float standards? It only needs to be portable over x86It is standard IEEE 754 floating point.
Apr 24 2010
Hello Strtr,Is there no way to stay within float standards? It only needs to be portable over x86I'm not sure even x86 /requires/ bit perfect FP math across different models. And I know for sure that you can't count on the compiler not moving stuff around. The only way to absolutely fix the way the math is done is ASM and then who cares what the language supports? -- ... <IXOYE><
Apr 24 2010
BCS Wrote:Hello Strtr,I'm not really searching for perfect/fixed math, but that the math is consistent on different x86 hardware after compilation. Is this possible?Is there no way to stay within float standards? It only needs to be portable over x86I'm not sure even x86 /requiters/ bit perfect FP math across different models. And I know for sure that you can't count on the compiler not moving stuff around. The only way to absolutely fix the way the math is done is ASM and then who cares what the language support?
Apr 24 2010
strtr wrote:I'm not really searching for perfect/fixed math, but that the math is consistent on different x86 hardware after compilation. Is this possible?Yes, but you'll have to avoid the math functions if you're using C ones. The D ones should give the same results.
Apr 24 2010
Walter Bright Wrote:strtr wrote:As they are dynamically linked?I'm not really searching for perfect/fixed math, but that the math is consistent on different x86 hardware after compilation. Is this possible?Yes, but you'll have to avoid the math functions if you're using C ones.The D ones should give the same results.So as long as I only use floats (and doubles) and use the std.math functions I should be safe over x86? This would mean a lot to me :D Why do all the std.math functions state reals as arguments?
Apr 24 2010
strtr wrote:Walter Bright Wrote:No, because the implementations of them vary from platform & compiler to platform & compiler. This is whether they are static or dynamically linked.strtr wrote:As they are dynamic linked?I'm not really searching for perfect/fixed math, but that the math is consistent on different x86 hardware after compilation. Is this possible?Yes, but you'll have to avoid the math functions if you're using C ones.Given the same arguments, you'll get the same results on every D platform. But the calculation of the argument values can vary.The D ones should give the same results.So as long as I only use floats (and doubles) and use the std.math functions I should be save over x86 ?This would mean a lot to me :D Why do all the std.math functions state reals as arguments?Because they are designed for maximum precision.
Apr 24 2010
Walter Bright Wrote:strtr wrote:O.k. stay away from std.c.math :)Walter Bright Wrote:No, because the implementations of them vary from platform & compiler to platform & compiler. This is whether they are static or dynamically linked.strtr wrote:As they are dynamic linked?I'm not really searching for perfect/fixed math, but that the math is consistent on different x86 hardware after compilation. Is this possible?Yes, but you'll have to avoid the math functions if you're using C ones.yay :)Given the same arguments, you'll get the same results on every D platform.The D ones should give the same results.So as long as I only use floats (and doubles) and use the std.math functions I should be save over x86 ?But the calculation of the argument values can vary.I'm not sure I understand what that means, calculations of the arguments. Could you give an example of a calculation of an argument?I always thought that D supplied extra math functions on top of the C ones to support reals..This would mean a lot to me :D Why do all the std.math functions state reals as arguments?Because they are designed for maximum precision.
Apr 25 2010
strtr wrote:x = sin(a + b); a+b is a calculation of the argument.But the calculation of the argument values can vary.I'm not sure I understand what that means, calculations of the arguments. Could you give an example of a calculation of an argument?
Apr 25 2010
Walter Bright Wrote:strtr wrote:Ok, by vary you meant vary at compilation on different platforms. Not after compilation on different platforms or should I really avoid doing argument calculations. This should be on .learn..x = sin(a + b); a+b is a calculation of the argument.But the calculation of the argument values can vary.I'm not sure I understand what that means, calculations of the arguments. Could you give an example of a calculation of an argument?
Apr 25 2010
Don:A simple rule of thumb: if it's an array, use float or double. If it's not, use real.The rule of thumb from me is: if you care for performance never use real type. Bye, bearophile
Apr 23 2010
Hello bearophile,Don:Has anyone actually teased to show that the vector ops are actually faster than the FPU for cases where there is nothing to vectorize? -- ... <IXOYE><A simple rule of thumb: if it's an array, use float or double. If it's not, use real.The rule of thumb from me is: if you care for performance never use real type.
Apr 24 2010
Hello bearophile,BCS:Ray tracing is very amenable to that sort of optimization, particularly if you are careful about how you write things. Also, I suspect that most cases where SSE is able to make a major difference will spend a lot of time moving large arrays through the CPU so they will have memory size and IO bottleneck concerns, exactly the cases I excepted. -- ... <IXOYE><If you have no special reason to use float or double and are not IO or memory space bound, use real. Every machine I know the details of will use it in the FPU so the only general advantage to not using it is IO speed and data size.Take one of the little ray-tracers written in D from my site, test its running speed compiled with ldc, replace doubles with reals, and time it again. You will see a nice performance difference. LDC FP performance is not good if it doesn't use SSE registers.
Apr 22 2010
bearophile wrote:I suggest to remove the real type from D2 because:D being a systems programming language should give access to the types supported by the CPU. If you don't like real, don't use it! It's not hard to avoid. Furthermore, reals are supported by gcc and dmc, and part of D's mission is to interoperate with C's data types.
Apr 21 2010
On the other hand, being an engineer, I use the reals all the time and want them to stay. I would use the max precision supported by the cpu over fixed precision like double any day. -sk Walter Bright wrote:bearophile wrote:I suggest to remove the real type from D2 because:D being a systems programming language should give access to the types supported by the CPU. If you don't like real, don't use it! It's not hard to avoid. Furthermore, reals are supported by gcc and dmc, and part of D's mission is to interoperate with C's data types.
Apr 21 2010
abcd Wrote:On the other hand, being an engineer, I use the reals all the time and want them to stay. I would use the max precision supported by the cpu then fixed precision like double any day. -skFor me it's the exact opposite, reproducibility/portability is key. My problem with real is that I am always afraid my floats get upgraded to them internally somewhere/somehow.
Apr 21 2010
strtr wrote:abcd Wrote:With numerical work, I suggest getting the correct answer is preferable <g>. Having lots of bits makes it more likely you'll get the right answer. Yes, it is possible to get correct answers with low precision, but it requires an expert and the techniques are pretty advanced.On the other hand, being an engineer, I use the reals all the time and want them to stay. I would use the max precision supported by the cpu then fixed precision like double any day. -skFor me it's the exact opposite, reproducibility/portability is key. My problem with real is that I am always afraid my floats get upgraded to them internally somewhere/somehow.
Apr 21 2010
Walter Bright Wrote:strtr wrote:The funny thing is that getting the exact correct answer is not that big of a deal. I would give up a few bits of precision for portability over x86abcd Wrote:With numerical work, I suggest getting the correct answer is preferable <g>. Having lots of bits makes it more likely you'll get the right answer. Yes, it is possible to get correct answers with low precision, but it requires an expert and the techniques are pretty advanced.On the other hand, being an engineer, I use the reals all the time and want them to stay. I would use the max precision supported by the cpu over fixed precision like double any day. -skFor me it's the exact opposite, reproducibility/portability is key. My problem with real is that I am always afraid my floats get upgraded to them internally somewhere/somehow.
Apr 22 2010
strtr wrote:Walter Bright Wrote:In my experience doing numerical work, loss of a "few bits" of precision can have order of magnitude effects on the result. The problem is the accumulation of roundoff errors. Using more bits of precision is the easiest solution, and is often good enough. In Java's early days, they went for portability of floating point over precision. Experience with this showed it to be a very wrong tradeoff, no matter how good it sounds. Having your program produce the crappiest, least accurate answer despite buying a powerful fp machine just because there exists some hardware somewhere that does a crappy floating point job is just not acceptable. It'd be like buying a Ferrari and having it forcibly throttled back to VW bug performance.strtr wrote:The funny thing is that getting the exact correct answer is not that big of a deal. I would give a few bits of imprecision for portability over x86abcd Wrote:With numerical work, I suggest getting the correct answer is preferable <g>. Having lots of bits makes it more likely you'll get the right answer. Yes, it is possible to get correct answers with low precision, but it requires an expert and the techniques are pretty advanced.On the other hand, being an engineer, I use the reals all the time and want them to stay. I would use the max precision supported by the cpu then fixed precision like double any day. -skFor me it's the exact opposite, reproducibility/portability is key. My problem with real is that I am always afraid my floats get upgraded to them internally somewhere/somehow.
Apr 22 2010
Walter Bright Wrote:In my experience doing numerical work, loss of a "few bits" of precision can have order of magnitude effects on the result. The problems is the accumulation of roundoff errors. Using more bits of precision is the easiest solution, and is often good enough.My work is probably not classified as numerical work as I don't much care about the results. I only care about solutions solving the problem. like this: x * 1.2 = 9; I don't care what x should be for this calculation to be 9, as long as there is a x which satisfies the calculation (or does so close enough). What does interest me is that the x found would yield the same result on another computer because as you say; errors accumulate.In Java's early days, they went for portability of floating point over precision. Experience with this showed it to be a very wrong tradeoff, no matter how good it sounds. Having your program produce the crappiest, least accurate answer despite buying a powerful fp machine just because there exists some hardware somewhere that does a crappy floating point job is just not acceptable. It'd be like buying a Ferrari and having it forcibly throttled back to VW bug performance.More like creating the best ever seating for a VW bug and then expecting it to be even better in the Ferrari; it might be, but most probably, it won't.
Apr 22 2010
On Wed, 21 Apr 2010 23:48:20 -0300, strtr <strtr spam.com> wrote:abcd Wrote:You do realize that the x86 floating point unit _always_ promotes floats and doubles to reals internally? The only way around it is for the compiler to use MMX/SSE unit for everything instead.On the other hand, being an engineer, I use the reals all the time and want them to stay. I would use the max precision supported by the cpu then fixed precision like double any day. -skFor me it's the exact opposite, reproducibility/portability is key. My problem with real is that I am always afraid my floats get upgraded to them internally somewhere/somehow.
Apr 21 2010
Robert Jacques Wrote:On Wed, 21 Apr 2010 23:48:20 -0300, strtr <strtr spam.com> wrote:Does this mean that float calculations are always off between intel and amd as intel uses 80bit reals? (x86 is my target audience)abcd Wrote:You do realize that the x86 floating point unit _always_ promotes floats and doubles to reals internally? The only way around it is for the compiler to use MMX/SSE unit for everything instead.On the other hand, being an engineer, I use the reals all the time and want them to stay. I would use the max precision supported by the cpu then fixed precision like double any day. -skFor me it's the exact opposite, reproducibility/portability is key. My problem with real is that I am always afraid my floats get upgraded to them internally somewhere/somehow.
Apr 22 2010
strtr wrote:Robert Jacques Wrote:No, I believe AMD processors also use 80 bits of precision, since they also implement the x86/x87 instruction set. -LarsOn Wed, 21 Apr 2010 23:48:20 -0300, strtr <strtr spam.com> wrote:Does this mean that float calculations are always off between intel and amd as intel uses 80bit reals? (x86 is my target audience)abcd Wrote:You do realize that the x86 floating point unit _always_ promotes floats and doubles to reals internally? The only way around it is for the compiler to use MMX/SSE unit for everything instead.On the other hand, being an engineer, I use the reals all the time and want them to stay. I would use the max precision supported by the cpu then fixed precision like double any day. -skFor me it's the exact opposite, reproducibility/portability is key. My problem with real is that I am always afraid my floats get upgraded to them internally somewhere/somehow.
Apr 22 2010
Lars T. Kyllingstad Wrote:strtr wrote:Thanks, hoped as much. Yay standards :)Does this mean that float calculations are always off between intel and amd as intel uses 80bit reals? (x86 is my target audience)No, I believe AMD processors also use 80 bits of precision, since they also implement the x86/x87 instruction set. -Lars
Apr 22 2010
"Robert Jacques" <sandford jhu.edu> wrote in message news:op.vbjul3ov26stm6 sandford.myhome.westell.com...On Wed, 21 Apr 2010 23:48:20 -0300, strtr <strtr spam.com> wrote:You can set the internal precision of the x87 unit to 32, 64 or 80 bits, it just defaults to 80, and as there's little if any performance difference between the 3 modes, thats how it's usualy set.abcd Wrote:You do realize that the x86 floating point unit _always_ promotes floats and doubles to reals internally? The only way around it is for the compiler to use MMX/SSE unit for everything instead.On the other hand, being an engineer, I use the reals all the time and want them to stay. I would use the max precision supported by the cpu then fixed precision like double any day. -skFor me it's the exact opposite, reproducibility/portability is key. My problem with real is that I am always afraid my floats get upgraded to them internally somewhere/somehow.
Apr 22 2010
Bob Jones wrote:You can set the internal precision of the x87 unit to 32, 64 or 80 bits, it just defaults to 80, and as there's little if any performance difference between the 3 modes, thats how it's usualy set.Despite those settings, the fpu still holds intermediate calculations to 80 bits. The only way to get it to round to the lower precision is to write it out to memory then read it back in. This, of course, is disastrously slow. There's no good reason to "dumb down" floating point results to lower precision unless you're writing a test suite.
Apr 22 2010
"Walter Bright" <newshound1 digitalmars.com> wrote in message news:hqq3qv$2sp5$2 digitalmars.com...Bob Jones wrote:Not true. If you load from memory it will keep the precision of what it loads, but the results of any calculations will be rounded to the lower precision. For example... ==== long double a = 1.0/3.0; long double b = 0.0; SetCtrlWord(GetCtrlWord() & 0xFCFF); // Set single precision __asm { FLD [a] // ST(0) == +3.3333333333333331e-0001 FADD [b] // ST(0) == +3.3333334326744079e-0001 FSTP ST(0) }You can set the internal precision of the x87 unit to 32, 64 or 80 bits, it just defaults to 80, and as there's little if any performance difference between the 3 modes, thats how it's usualy set.Despite those settings, the fpu still holds intermediate calculations to 80 bits. The only way to get it to round to the lower precision is to write it out to memory then read it back in. This, of course, is disastrously slow.
Apr 23 2010
Bob Jones wrote:"Walter Bright" <newshound1 digitalmars.com> wrote in message news:hqq3qv$2sp5$2 digitalmars.com...Then there was something else that wasn't rounded down, as the Java people found out.Bob Jones wrote:Not true. If you load from memory it will keep the precision of what it loads, but the results of any calculations will be rounded to the lower precision. For example... ==== long double a = 1.0/3.0; long double b = 0.0; SetCtrlWord(GetCtrlWord() & 0xFCFF); // Set single precision __asm { FLD [a] // ST(0) == +3.3333333333333331e-0001 FADD [b] // ST(0) == +3.3333334326744079e-0001 FSTP ST(0) }You can set the internal precision of the x87 unit to 32, 64 or 80 bits, it just defaults to 80, and as there's little if any performance difference between the 3 modes, thats how it's usualy set.Despite those settings, the fpu still holds intermediate calculations to 80 bits. The only way to get it to round to the lower precision is to write it out to memory then read it back in. This, of course, is disastrously slow.
Apr 23 2010
I am for high numerical accuracy (as high as possible), so I support "real". However:
- maybe a better name is desirable: I work a lot with complex numbers, and "real" and "imaginary" have different meanings for me. I would call it a "continuous" or "accurate" or "precision" type.
- could we *add* a 128-bit type (e.g. float128)? Maybe not through the compiler, but through the standard library? Or an even more accurate type...
- finally, I would like standard (either through the compiler or the library) aliases for types based on the following properties: "size" (8, 32, 64 bits etc.), "signedness" (unsigned or signed) and "discreteness" (the intended one, since the implementation is always discrete), such as "continuous" or "discrete". Examples: u8d for byte (unsigned-8-bit-discrete), u16d for ushort, but s64c for double. OK, better names could be imagined... They would avoid implementation "deviances" or "extensions". And, moreover, if one is interested in accuracy, he should use the "real" (or, as I said, the "continuous") type.
- (runtime) warnings should be raised for continuous types if machine precision limits are touched.
- I don't like underscores in "hidden" types. Neither in __declspec and so on. They are testimonies to an incapability of reaching (common-sense) consensus.
Apr 22 2010
On Thu, 22 Apr 2010 15:58:06 -0300, eles <eles eles.com> wrote: [snip]- could we *add* a 128-bit type (eg float128)? maybe not through the compiler, but through the standard library? or even more accurate type...This is called a quad in IEEE nomenclature. There's also a half. And you can define a usable half at least by using a patched version of std.numeric.CustomFloat (bug 3520)
Apr 22 2010
No matter the name; it is the accuracy that matters. So: could we have it in D?
Apr 22 2010
On Thu, 22 Apr 2010 01:52:41 -0300, Robert Jacques <sandford jhu.edu> wrote: On Wed, 21 Apr 2010 23:48:20 -0300, strtr <strtr spam.com> wrote: P.S. An implication of this is that using any type other than real results in inconsistent truncation depending on where/when any compiler stores intermediate results outside of the FPU. abcd Wrote: You do realize that the x86 floating point unit _always_ promotes floats and doubles to reals internally? The only way around it is for the compiler to use the MMX/SSE unit for everything instead. On the other hand, being an engineer, I use the reals all the time and want them to stay. I would use the max precision supported by the cpu than fixed precision like double any day. -sk For me it's the exact opposite, reproducibility/portability is key. My problem with real is that I am always afraid my floats get upgraded to them internally somewhere/somehow.
Apr 22 2010
- Its length is variable across operating systems, it can be 10, 12, 16 bytes, or even just 8 if they get implemented with doubles. The 12 and 16 byte ones waste space. across hardware systems - it's not an operating system thing; 80 bits is the native size of the x86 FPU - Results that you can find with programs written in other languages are usually computed with just floats or doubles. If I want to test if a D program gives the same results I can't use reals in D. - I don't see reals (long doubles in C) used much in other languages. but you can't port the Delphi Extended type (since Delphi 2 I think); gcc supports it, llvm supports it, assembler "supports" it, the Borland and Intel compilers support it - Removing a built-in type makes the language and its manual a little simpler. it doesn't change the code generation that much (ok, ok there are some fpu instructions that are not fully equal to double precision behaviour) but also for more than 15 years now - I have used D1 for some time, but so far I have had a hard time to find a purpose for 80 bit FP numbers. The slight increase in precision is not so useful. - D implementations are free to use doubles to implement the real type. So in a D program I can't rely on their little extra precision, making them not so useful. but it is an 80bit precision feature in hardware - why should I use a software-based solution - if 80 bits are enough for me btw: the precision lost while switching between the FPU stack and the D data space is better - While I think the 80 bit FP are not so useful, I think Quadruple precision FP (128 bit, currently usually software-implemented) can be useful for some situations (http://en.wikipedia.org/wiki/Quadruple_precision ). They might be useful for high dynamic range imaging too. LLVM SPARC V9 will support its quad-precision registers.
sounds a little bit like: let's throw away the byte type - because we can do better things with int - The D2 specs say real is the "largest hardware implemented floating point size", this means that they can be 128 bit too in the future. A numerical simulation that is designed to work with 80 bit FP numbers (or 64 bit FP numbers) can give strange results with 128 bit precision. ok, now we've got 32bit, 64bit and 80bit in hardware - that will (I hope) become 32bit, 64bit, 80bit, 128bit, etc... but why should we throw away real - maybe we should alias it to float80 or something - and later there will be a float128 etc. So I suggest to remove the real type; or eventually replace it with a fixed-sized 128 bit floating-point type with the same name (implemented using a software emulation where the hardware doesn't have them, like the __float128 of GCC: http://gcc.gnu.org/onlinedocs/gcc/Floating-Types.html ). In the far future, if the hardware of CPUs will support FP numbers larger than 128 bits, a larger type can be added if necessary. why should we throw away direct hardware support - isn't it enough to add your software/hardware float128? and all the others btw: the 80bit code-generator part is much smaller/simpler in code than your 128bit software based impl
Apr 21 2010
dennis luehring: If you change the OS, on the same hardware, you have different representation length inside structs. And they can waste a lot of space. - Its length is variable across operating systems, it can be 10, 12, 16 bytes, or even just 8 if they get implemented with doubles. The 12 and 16 byte ones waste space. across hardware systems - it's not an operating system thing; 80 bits is the native size of the x86 FPU but you can't port the Delphi Extended type (since Delphi 2 I think); gcc supports it, llvm supports it, assembler "supports" it, the Borland and Intel compilers support it Modern hardware doesn't support it. I think hardware will win. So far I have never had to port code from C/Delphi/C++ that requires the 80 bit FP type, but this means nothing. Do you know software that needs it? With LDC/Clang the support for the X86 FPU is very bad; you essentially use the SSE registers only, unless you don't care about performance. You can try some benchmarks, or you can look at the asm produced. but it is an 80bit precision feature in hardware - why should I use a software-based solution - if 80 bits are enough for me Do you have software where 64 bit FPs are not enough, but 80 bits are exactly enough (and you don't need quad precision)? sounds a little bit like: let's throw away the byte type - because we can do better things with int In D the byte is signed, and it's quite less commonly useful than ubyte. If you call them sbyte and ubyte you can tell them apart better. Your analogy doesn't hold up well, because ubyte is a fundamental data type, you can build others with it, while real is not so fundamental. There are languages that indeed throw away the ubyte and essentially keep integers only (scripting languages). You should see the performance of programs compiled with the latest JIT for the Lua language. ok, now we've got 32bit, 64bit and 80bit in hardware - that will (I hope) become 32bit, 64bit, 80bit, 128bit, etc...
but why should we throw away real - maybe we should alias it to float80 or something - and later there will be a float128 etc. With the current DMD specs, if CPUs add 128 bit FP the real type will become 128 bit, and you will lose any possibility to use 80 bit FP from D, because real is defined as the largest FP type supported by the hardware. This is positive, in a way: automatic deprecation for free :-) btw: the 80bit code-generator part is much smaller/simpler in code than your 128bit software based impl Right. But I think libc6 contains their implementation. I don't know about Windows. From Walter's answer it seems the real type will not be removed. It's useful for compatibility with uncommon software that uses them. And if someday hardware 128 bit FP comes out, D will replace real with it. Thank you for all your comments. Bye, bearophile
Apr 22 2010
bearophile wrote:If you change the OS, on the same hardware, you have different representation length inside structs. And they can waste lot of space.You're right that the amount of 0 padding changes between language implementations (not the OS), but the actual bits used in calculations stays the same, because after all, it's done in the hardware.
Apr 22 2010
If you change the OS, on the same hardware, you have different representation length inside structs. And they can waste a lot of space. The OS can't change your representation length inside structs - or what do you mean? Modern hardware doesn't support it. I think hardware will win. Which modern x86 hardware does not support the 80bit FPU type? So far I have never had to port code from C/Delphi/C++ that requires the 80 bit FP type, but this means nothing. Do you know software that needs it? I've got a ~1 million line Delphi/asm code project right in front of me doing simulation - and I've got problems converting the routines over to C/C++; I need to check every simple case over and over. Do you have software where 64 bit FPs are not enough, but 80 bits are exactly enough (and you don't need quad precision)? not exactly, but better With the current DMD specs, if CPUs add 128 bit FP the real type will become 128 bit, and you will lose any possibility to use 80 bit FP from D but wouldn't a simple change or alias be helpful? Maybe float80 or something like that? What implementation? Of the software 128bit? btw: the 80bit code-generator part is much smaller/simpler in code than your 128bit software based impl Right. But I think libc6 contains their implementation. I don't know about Windows.
Apr 22 2010
dennis luehring: the os can't change your representation length inside structs - or what do you mean?< I was wrong, Walter has given the right answer. The padding is compiler-specific. which modern x86 hardware does not support the 80bit fpu type?< Current SSE registers, future AVX registers, all/most GPUs of the present. And take a look at the answer written by Don regarding 64 bit CPUs. LLVM doesn't like the X86 FPU. Probably the 80 bit FPU type is not going away soon, but I don't think it's the future :-) what implementation? of the software 128bit?< Yep. Bye, bearophile
Apr 22 2010
bearophile wrote: I suggest to remove the real type from D2 because: - It's the only native type that has no specified length. I'm sold on the usefulness of having defined length types. The immediate problem is that x87 does not fully support floats and doubles. It ONLY supports 80-bit reals. Personally, I think it would be better if it were called real80 (even __real80), and if std.object contained alias real80 real; The reason for this is that it's quite reasonable for a 64-bit x86 compiler to use 64-bit reals, and use SSE2 exclusively. However, even then, you want real80 to still be available.
Apr 22 2010
Hello bearophile, I suggest to remove the real type from D2 because: - Results that you can find with programs written in other languages are usually computed with just floats or doubles. If I want to test if a D program gives the same results I can't use reals in D. You can if you include error bounds. IIRC the compiler is free to do a lot of optimizations, and for FP that can result in inexact matches (keep in mind that some (or is it all?) FPUs do ALL internal math in 80 bits, so which intermediate values, if any, are converted through 64/32 bits can make a difference in the result even for the other types), so you can't avoid error bounds even on doubles or floats. - I don't see reals (long doubles in C) used much in other languages. It can be asserted that this is a result of them not being easy to use. Five or ten years from now most numerical programs will probably not use 80 bit FP. That depends on what they are doing. People who really care about accuracy will only dump 80bit FP for 128bit FP (quad). - I have used D1 for some time, but so far I have had a hard time to find a purpose for 80 bit FP numbers. The slight increase in precision is not so useful. How much hard core number crunching do you do? - D implementations are free to use doubles to implement the real type. So in a D program I can't rely on their little extra precision, making them not so useful. No, a conforming D implementation *must* implement real with the largest HW FP type available. - The D2 specs say real is the "largest hardware implemented floating point size", this means that they can be 128 bit too in the future. A numerical simulation that is designed to work with 80 bit FP numbers (or 64 bit FP numbers) can give strange results with 128 bit precision. Can you support that?
Aside from a few cases where you are effectively bit twiddling, code designed to run in 80bit should work at least as well with 128bit. So I suggest to remove the real type; or eventually replace it with a fixed-sized 128 bit floating-point type with the same name (implemented using a software emulation where the hardware doesn't have them, like the __float128 of GCC: http://gcc.gnu.org/onlinedocs/gcc/Floating-Types.html ). In the far future, if the hardware of CPUs will support FP numbers larger than 128 bits, a larger type can be added if necessary. I think that real should be kept as is. Most of your points can be addressed by adding a new type that is defined to be the current 80bit type. That way, if the programmer wants to force a given size, they can, and if they just want the biggest type they can get, they can do that as well. -- ... <IXOYE><
Apr 22 2010
BCS: How much hard core number crunching do you do?< I do FP numeric processing only once in a while, not much. Steven Schveighoffer: I also find bearophile's requirement to be able to check that a D program outputs the same exact floating point digits as a C program ridiculous.< I have done it sometimes when I translate code from C to D, hoping to use it as a way to test for translation errors, but you are right, different compiler optimizations can produce different results even if the programs use the same data type (like double). Walter Bright: the amount of 0 padding changes between language implementations (not the OS),< Right, on Win32, GDC allocates 12 bytes for the real type, while DMD allocates 10 bytes. As usual, what this thread mostly shows is how ignorant I am. My hope is that if I keep learning, eventually I will become able to help D development. Bye and thank you, bearophile
Apr 22 2010