
digitalmars.D.learn - Empty program running time

bearophile <bearophileHUGS lycos.com> writes:
On an oldish Windows PC an empty C program generated by GCC takes about 0.03
seconds to run. An empty D2 program runs in about 0.11 seconds. Is this
expected/good/acceptable/fixable?

Bye,
bearophile
Jul 29 2011
Pelle <pelle.mansson gmail.com> writes:
On Fri, 29 Jul 2011 15:37:36 +0200, bearophile <bearophileHUGS lycos.com>  
wrote:

 On an oldish Windows PC an empty C program generated by GCC takes about  
 0.03 seconds to run. An empty D2 program runs in about 0.11 seconds. Is  
 this expected/good/acceptable/fixable?

 Bye,
 bearophile
That's a lot better than I expected! I don't think anyone would notice such a small difference.
Jul 29 2011
"Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 29 Jul 2011 09:37:36 -0400, bearophile <bearophileHUGS lycos.com>  
wrote:

 On an oldish Windows PC an empty C program generated by GCC takes about  
 0.03 seconds to run. An empty D2 program runs in about 0.11 seconds. Is  
 this expected/good/acceptable/fixable?
It is expected. A D program must initialize both the C runtime and the D runtime, whereas a C program only needs to initialize C. For example, D needs to call all the module ctors and run the import cycle detection algorithm.

0.11 seconds is not unreasonable. But I'll also stress that timings at this precision can be fairly volatile.

-Steve
Jul 29 2011
bearophile <bearophileHUGS lycos.com> writes:
Steven Schveighoffer:

 For example, D needs to call all the module ctors and do the import cycle  
 detection algorithm.
Even for an empty program?
 0.11 seconds is not unreasonable.
It means about 170_000_000 CPU clock ticks; to me that seems like a lot.
 But I'll also stress that timings at this precision can be fairly volatile.
I think this timing precision is not so bad.

Bye,
bearophile
Jul 29 2011
"Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 29 Jul 2011 10:50:52 -0400, bearophile <bearophileHUGS lycos.com>  
wrote:

 Steven Schveighoffer:

 For example, D needs to call all the module ctors and do the import  
 cycle
 detection algorithm.
Even for an empty program?
Yes. I bet even an empty program has on the order of 50 module ctors/dtors to run. Any module in the runtime can contain module ctors/dtors, and those modules are compiled in no matter what. Plus, dmd adds hidden modules with ctors/dtors. There are other initializations as well, such as the GC and setting up the main thread.
 0.11 seconds is not unreasonable.
It means about 170_000_000 CPU clock ticks; to me that seems like a lot.
I guess if you want to have an empty-program benchmark? It doesn't seem to mean much to me... If it were on the order of seconds, I'd agree with you. 0.11 seconds is barely noticeable.
 But I'll also stress that timings at this precision can be fairly  
 volatile.
I think this timing precision is not so bad.
What I mean is, the timing for a program can easily vary by tenths of a second between runs, depending on what else is happening on the computer. Make sure you average over several runs, rather than timing a single run, for these kinds of tests.

-Steve
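The advice to average several runs rather than trust a single one can be scripted. A sketch in plain shell, assuming GNU date with nanosecond timestamps; `true` is just a stand-in for the binary being timed:

```shell
# Run a command several times and report the average wall-clock time,
# smoothing out the run-to-run volatility described above.
cmd=true        # stand-in; replace with the program you want to time
runs=10
total_ns=0
i=0
while [ "$i" -lt "$runs" ]; do
  start=$(date +%s%N)               # nanoseconds (GNU coreutils date)
  "$cmd"
  end=$(date +%s%N)
  total_ns=$(( total_ns + end - start ))
  i=$(( i + 1 ))
done
avg_ms=$(( total_ns / runs / 1000000 ))
echo "average over $runs runs: ${avg_ms} ms"
```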
Jul 29 2011
"Marco Leise" <Marco.Leise gmx.de> writes:
Am 29.07.2011, 17:23 Uhr, schrieb Steven Schveighoffer  
<schveiguy yahoo.com>:

 On Fri, 29 Jul 2011 10:50:52 -0400, bearophile  
 <bearophileHUGS lycos.com> wrote:

 Steven Schveighoffer:

 For example, D needs to call all the module ctors and do the import  
 cycle
 detection algorithm.
Even for an empty program?
Yes. I bet even an empty program has on the order of 50 module ctors/dtors to run. Any module in the runtime can contain module ctors/dtors, and those modules are compiled in no matter what. Plus, dmd adds hidden modules with ctors/dtors. There are other initializations as well, such as the GC and setting up the main thread.
 0.11 seconds is not unreasonable.
It means about 170_000_000 CPU clock ticks; to me that seems like a lot.
I guess if you want to have an empty-program benchmark? It doesn't seem to mean much to me... If it were on the order of seconds, I'd agree with you. 0.11 seconds is barely noticeable.
You're thinking in the wrong category. Imagine where this would matter: cases where you invoke a program many times.

Off the top of my head, there is batch execution. Say your program is a converter of some sort that takes one input file and one output file at a time, and someone writes a script to convert all files in a directory structure. For every 545 files you get an additional minute of initialization on that test machine! If the actual conversion algorithm is fast and the files are small (e.g. converting character encodings in text files), this is quite noticeable.

A more abstract example is the boot process of an old-school Linux installation. Every start script (mail server, keyboard layout, swap, logging, ...) invokes the shell several times. If the shell were written in D, it would slow down the boot process more than necessary. But efforts have been made to reduce the number of processes spawned during boot, so this may not be a valid argument anymore.

The "file" utility opens files and prints their MIME type by looking at magic bytes and other identifiers. This is another good example of a program that may be run on a large number of files but doesn't run for long. It could be used by a file-system browser to display the file type of every file in a directory.
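The batch-conversion arithmetic checks out: at roughly 110 ms of startup overhead per process, 545 invocations cost about a minute before any real work is done. Spelled out, with the numbers taken from the thread:

```shell
# Startup-overhead arithmetic for the batch-conversion example above.
per_run_ms=110    # empty D program runtime on the test machine (~0.11 s)
files=545         # one process spawned per converted file
overhead_ms=$(( per_run_ms * files ))
echo "total startup overhead: ${overhead_ms} ms (~$(( overhead_ms / 1000 )) s)"
# prints: total startup overhead: 59950 ms (~59 s)
```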
 But I'll also stress that timings at this precision can be fairly  
 volatile.
I think this timing precision is not so bad.
What I mean is, the timing for a program can easily vary by tenths of a second between runs, depending on what else is happening on the computer. Make sure you average over several runs, rather than timing a single run, for these kinds of tests.

-Steve
Aug 18 2011
bearophile <bearophileHUGS lycos.com> writes:
Marco Leise:

 You think in the wrong category. Imagine where this would matter, where  
 you would invoke a program multiple times.
I agree, and I think 0.11 seconds (on a slow PC) is a bit too much. I think there is something smelly that will need tuning/optimization.

Bye,
bearophile
Aug 18 2011
"Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 19 Aug 2011 01:29:05 -0400, Marco Leise <Marco.Leise gmx.de> wrote:

 Am 29.07.2011, 17:23 Uhr, schrieb Steven Schveighoffer  
 <schveiguy yahoo.com>:

 On Fri, 29 Jul 2011 10:50:52 -0400, bearophile  
 <bearophileHUGS lycos.com> wrote:

 Steven Schveighoffer:

 For example, D needs to call all the module ctors and do the import  
 cycle
 detection algorithm.
Even for an empty program?
Yes. I bet even an empty program has on the order of 50 module ctors/dtors to run. Any module in the runtime can contain module ctors/dtors, and those modules are compiled in no matter what. Plus, dmd adds hidden modules with ctors/dtors. There are other initializations as well, such as the GC and setting up the main thread.
 0.11 seconds is not unreasonable.
It means about 170_000_000 CPU clock ticks; to me that seems like a lot.
I guess if you want to have an empty-program benchmark? It doesn't seem to mean much to me... If it were on the order of seconds, I'd agree with you. 0.11 seconds is barely noticeable.
You're thinking in the wrong category. Imagine where this would matter: cases where you invoke a program many times.

Off the top of my head, there is batch execution. Say your program is a converter of some sort that takes one input file and one output file at a time, and someone writes a script to convert all files in a directory structure. For every 545 files you get an additional minute of initialization on that test machine! If the actual conversion algorithm is fast and the files are small (e.g. converting character encodings in text files), this is quite noticeable.

A more abstract example is the boot process of an old-school Linux installation. Every start script (mail server, keyboard layout, swap, logging, ...) invokes the shell several times. If the shell were written in D, it would slow down the boot process more than necessary. But efforts have been made to reduce the number of processes spawned during boot, so this may not be a valid argument anymore.

The "file" utility opens files and prints their MIME type by looking at magic bytes and other identifiers. This is another good example of a program that may be run on a large number of files but doesn't run for long. It could be used by a file-system browser to display the file type of every file in a directory.
I agree that for simple, frequently used utilities, the initialization time can be bad. But I have actually written scripting utility programs (ironically, to help boot a custom Linux OS that I built), and I didn't notice a terrible slowdown.

Another question is: would someone who is using D be more likely to write their script in D and simply use D libraries, or to write a "file" utility in D and use a shell script?

It all depends on the situation, and to me, 0.11 seconds isn't that terrible. If it can be improved, then I think we should do it, but IMO it's not a critical piece right now.

-Steve
Aug 19 2011