
digitalmars.D - A little of Partial Compilation

bearophile <bearophileHUGS lycos.com> writes:
One simple example of partial compilation: if the D compiler sees that some
parameter N of a function is a known constant at compile time (or it can be
computed at compile time by other means), it may automatically turn the function
into a function template that takes N as a compile-time constant, and this may
create an actual compiled function that is simpler/faster:

foo(int N, float M) => foo(int N)(float M)
automatically if N is a known constant at compile time.
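
A rough sketch of that rewrite in real D syntax (names are illustrative, and
this is not something the compiler does today). The ordinary run-time function

    float foo(int N, float M) { return N * M; }

could, at call sites where N is a compile-time constant, be replaced by an
automatically generated template:

    float foo(int N)(float M)
    {
        return N * M;         // N is baked in as a compile-time constant
    }

    void main()
    {
        // the call foo(5, 3.14f) would compile as the specialized instance:
        float x = foo!(5)(3.14f);
    }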

One disadvantage of this suggestion is that in some situations it can produce
more code to be compiled, and the resulting compiled code may put more pressure
on the L1 instruction cache, which may slow execution down a little.

Bye,
bearophile
Aug 19 2008
"Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"bearophile" <bearophileHUGS lycos.com> wrote in message 
news:g8ek6e$13qj$1 digitalmars.com...
 One simple example of partial compilation: if the D compiler sees that
 some parameter N of a function is a known constant at compile time (or it
 can be computed at compile time by other means) it may automatically turn
 the function into a function template that takes N as a compilation
 constant [...]
I think this is what Walter was (is?) going for with static params. Yes, it
means you have to do it manually, but still, it's nice to be able to offer an
optimized version of a function for certain values for parameters. If you're
not familiar with the proposal, it's something like this:

    int foo(static int N, float M)
    {
        // here N is a compile-time constant
    }

    foo(5, 3.14)  // similar to foo!(5)(3.14)

Partial application is something that doesn't seem to jive with the rest of D
but it'd be nice if there were some way to do it. Like if it could be done as
an optional optimization, maybe just on pure functions.
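
For comparison, the manual equivalent that already works uses an ordinary
template value parameter; a minimal sketch (illustrative names):

    int foo(int N)(float m)
    {
        // N is a template value parameter, so it's a compile-time constant here
        return cast(int)(N * m);
    }

    void main()
    {
        int x = foo!(5)(3.14f);   // the caller has to spell out the !(5) explicitly
    }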
Aug 19 2008
bearophile <bearophileHUGS lycos.com> writes:
Jarrett Billingsley:
 Yes, it means you have to do it manually, but still, it's nice to be able
 to offer an optimized version of a function for certain values for parameters.
Making this "conversion" automatic can be useful because, as time passes, the D
compiler may gain more capabilities to run things at compile time (like, I
think, trigonometric functions recently in D 2.x). This means that more values
can become compile-time constants (even ones that the programmer doesn't know
can be compile-time constants), and this is an opportunity to optimize some
functions more, as I have said.

Bye,
bearophile
Aug 19 2008
Don <nospam nospam.com.au> writes:
bearophile wrote:
 Making this "conversion" automatic can be useful because as time passes the
 D compiler may gain more capabilities to run things at compile time [...]
The original proposal was mine. But I think automatic conversion (1) would
really kill compilation times; and (2) is hardly ever going to be desirable.
Execution speed hardly ever matters. The time when it is really a killer
feature is for something like a regexp, where you can give a compile-time error
if the regexp is invalid, and precompile it if it is OK.

Note that even if all parameters to a function are known at compile time, that
doesn't mean it's sensible to run it at compile time. (Example: calculating pi
to fifty billion decimal places.) The programmer needs to annotate it somehow
to tell the compiler that it is suitable for compile time.
Aug 20 2008
Walter Bright <newshound1 digitalmars.com> writes:
Don wrote:
 Note that even if all parameters to a function are known at compile 
 time, doesn't mean it's sensible to run it at compile time. (Example: 
 calculating pi to fifty billion decimal places). The programmer needs to 
 annotate it somehow to tell the compiler that is suitable for compile time.
It's one of those impossible problems for the compiler to predict if it should attempt to execute a function at compile time or not. That's why compile time function evaluation is initiated with a specific syntax, rather than tried in general.
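
For instance, CTFE is triggered only where a compile-time constant is required;
a minimal sketch (D2 syntax, illustrative names):

    int square(int n) { return n * n; }

    enum atCompileTime = square(1000);   // constant initializer, so the compiler
                                         // runs square() itself (CTFE)
    void main()
    {
        int k = 1000;
        int atRunTime = square(k);       // ordinary call, evaluated at run time
    }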
Aug 20 2008
JAnderson <ask me.com> writes:
Walter Bright wrote:
It's one of those impossible problems for the compiler to predict if it should attempt to execute a function at compile time or not. That's why compile time function evaluation is initiated with a specific syntax, rather than tried in general.
Perhaps that could be solved by automatic profiling. The worst offenders could
be put into a list and the compiler would spend longer on those functions to
optimize them. Also it would be something you could turn on and off with a
compiler setting.

-Joel
Aug 20 2008
Robert Fraser <fraserofthenight gmail.com> writes:
JAnderson wrote:
Perhaps that could be solved by automatic profiling. The worst offenders could be put into a list and the compiler would spend longer on those functions to optimize them. Also it would be something you could turn on and off with a compiler setting. -Joel
Optlink already has a feature like this. You can profile your application at
runtime and then feed back to Optlink a file which will cause it to lay out the
functions optimally in the binary [1]. Possibly a similar concept (possibly
even the same file) could be fed to DMD to prioritize its optimizations.

[1] http://www.digitalmars.com/ctg/trace.html
Aug 20 2008
JAnderson <ask me.com> writes:
Robert Fraser wrote:
Optlink already has a feature like this. You can profile your application at runtime and then feed back to optlink a file which will cause it to layout the functions optimally in the binary [1]. Possibly a similar concept (possibly even the same file) could be fed to DMD to prioritize its optimizations. [1] http://www.digitalmars.com/ctg/trace.html
I know, but I think it's more about ordering of functions rather than turning
them into templates.

-Joel
Aug 20 2008
bearophile <bearophileHUGS lycos.com> writes:
JAnderson:
 I know, but I think it's more about ordering of functions rather than
 turning them into templates.
Profile-guided optimization (like the one performed by GCC with the
-fprofile-generate/-fprofile-use compilation switches) is meant to do all kinds
of things: finding cold/hot functions, finding where some cache prefetch may be
useful, finding the most frequent results of each branch in the code of the
program, spotting which fields of objects/arrays are frequently accessed close
together in time (to place those fields closer, or to split objects into
sub-groups), finding which virtual class methods can be compiled as static,
which functions/methods to inline, which loops to unroll and how much to unroll
them, where to merge loops, where to split matrix iterations into tiles to
reduce cache misses, where to make asm code short and where to make it faster
and longer, where to put 'align's in the asm code, where to unroll recursive
calls, where not to convert recursive tail calls into iterations, etc.

At the moment GCC is able to perform some of those optimizations, the Intel C
compiler other ones, the Intel Fortran compiler yet others, and the HotSpot
compiler for Java still others. Eventually people will try to put all of those
together :-) There are even compilers like the Stalin Scheme compiler
(http://community.schemewiki.org/?Stalin ) that try to perform a lot of those
things statically, without running the program :-)

Bye,
bearophile
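
For reference, the GCC workflow mentioned above looks roughly like this (file
and input names are made up for illustration):

    gcc -O2 -fprofile-generate prog.c -o prog   # build an instrumented binary
    ./prog typical_input.txt                    # run it to collect profile data (.gcda)
    gcc -O2 -fprofile-use prog.c -o prog        # rebuild using the collected profile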
Aug 21 2008
JAnderson <ask me.com> writes:
bearophile wrote:
 Profile-guided optimization (like the one performed by GCC with the
 -fprofile-generate/-fprofile-use compilation switches) is meant to do all
 kinds of things [...]
Cool thanks.
Aug 21 2008
bearophile <bearophileHUGS lycos.com> writes:
Sorry for raising this thread again, but in the meantime I have found the C
library I had seen and later lost; it shows some examples of partial
compilation in C. It includes docs, examples and full sources, even binaries:

http://www.diku.dk/topps/activities/cmix/

I presume something similar can be done in D too, maybe even by the compiler.

Bye,
bearophile
Sep 24 2008