
digitalmars.D - Resolution of core.time.Duration...

Alexander <aldem+dmars nk7.net> writes:
...why it is in hnsec? I know that this resolution is used in Win32 API (file
time), but since TickDuration may be 1 ns resolution, wouldn't it be better to
make Duration to be stored with maximum (defined so far) resolution?

Especially because Duration may not hold long intervals (> months) - so there
is no problem with overflow.

Thread.sleep() accepts Duration (or hnsec) as an argument, while system
resolution is higher, and on some systems it is even possible that it can sleep
less than 100ns.

SysTime is also kept in hnsecs, while resolution of system time (on Linux at
least) is 1ns. Sure, in case of SysTime it is all bound to overflow, but it
depends how value is stored - if we split seconds and nanoseconds, it will be
fine.

Additionally, when accepting long values as an argument for duration it is more
logically to use SI units :)

/Alexander
May 17 2011
Jonathan M Davis <jmdavisProg gmx.com> writes:
On 2011-05-17 03:15, Alexander wrote:
 ...why it is in hnsec? I know that this resolution is used in Win32 API
 (file time), but since TickDuration may be 1 ns resolution, wouldn't it be
 better to make Duration to be stored with maximum (defined so far)
 resolution?
 
 Especially because Duration may not hold long intervals (> months) - so
 there is no problem with overflow.
 
 Thread.sleep() accepts Duration (or hnsec) as an argument, while system
 resolution is higher, and on some systems it is even possible that it can
 sleep less than 100ns.
 
 SysTime is also kept in hnsecs, while resolution of system time (on Linux
 at least) is 1ns. Sure, in case of SysTime it is all bound to overflow,
 but it depends how value is stored - if we split seconds and nanoseconds,
 it will be fine.
 
 Additionally, when accepting long values as an argument for duration it is
 more logically to use SI units :)
 
 /Alexander
hnsecs is the highest resolution that you can hold in a 64-bit integer and still have it cover a reasonable range. And remember that a Duration is what's used as the result of subtracting one SysTime from another, so making it more precise would be bad. As it is, it can't actually hold the difference of SysTime.max - SysTime.min. Because it's signed, it can only hold half that range.

Also, the system clock definitely does not reach hnsecs on any system that I've seen. It generally maxes out at microseconds or a bit better than that. So, in most cases, using a precision greater than hnsecs doesn't gain you anything.

I'd love to have greater precision, but it would require something larger than a 64-bit integer, which we don't have. Or it would require splitting up the seconds and sub-seconds, which makes the types take up more space and makes the math that much worse. During the development and review process, it was decided that hnsecs was ideal. Sure, higher precision might be nice, but we don't have anything bigger than a 64-bit integer, and odds are that you couldn't really take advantage of the higher precision anyway.

- Jonathan M Davis
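
To put rough numbers on that reasoning, here is a small sketch (plain D, nothing beyond the standard library assumed) that computes how many years a signed 64-bit tick count can span at 100 ns resolution versus 1 ns resolution:

    import std.stdio;

    void main()
    {
        enum long hnsecsPerSecond = 10_000_000L;         // 1 hnsec = 100 ns
        enum long nsecsPerSecond  = 1_000_000_000L;
        enum long secondsPerYear  = 365L * 24 * 60 * 60; // ignoring leap years

        // Span of a signed 64-bit tick count, in years, at each resolution.
        writeln("hnsecs: +/- ", long.max / (hnsecsPerSecond * secondsPerYear), " years");
        writeln("nsecs:  +/- ", long.max / (nsecsPerSecond  * secondsPerYear), " years");
        // Prints roughly +/- 29247 years for hnsecs but only about +/- 292
        // years for nanoseconds - too little to even represent today's date
        // as an offset from year 1.
    }
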
May 17 2011
Alexander <aldem+dmars nk7.net> writes:
On 17.05.2011 12:34, Jonathan M Davis wrote:

 Also, the system clock definitely does not reach hnsecs on any system that
I've seen.
Duration (unlike TickDuration) is not tied only to the system clock, AFAIK (at least, this is not mentioned in the documentation). Using rdtsc it is possible to obtain even more precision than 1ns. Also, Duration can be used to store intervals which are not really related to the system clock at all.
 Or it would require splitting up the seconds and sub-seconds which makes the
types take up 
 more space and makes the math that much worse.
OK, though I don't think that 4 or 8 bytes more will really make any noticeable change (many Phobos/druntime structures are over-sized anyway, IMHO) - time values are not used in big arrays, usually.

As to math - for most operations involving calendar computations only seconds are relevant, and the math itself takes literally nanoseconds to execute - so only functions which actually require better precision may use more complex math.

What could be done, though, is to hide the internal format (hnsecs), i.e. not expose it to functions like Thread.sleep() with a ulong argument.

/Alexander
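
As a minimal sketch of the call style this suggests - a Duration built with the dur template instead of a raw hnsecs count - assuming the Duration-accepting overload of Thread.sleep mentioned earlier in the thread:

    import core.thread : Thread;
    import core.time : dur;

    void main()
    {
        // The unit is spelled out at the call site; the hnsecs storage format
        // stays an implementation detail of Duration.
        Thread.sleep(dur!"msecs"(250));

        // Compare with the raw-integer overload, where the caller has to know
        // that the argument counts hnsecs (100 ns ticks):
        // Thread.sleep(2_500_000);  // also 250 ms, but far less obvious
    }
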
May 17 2011
Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 17.05.2011 15:44, Alexander wrote:
 On 17.05.2011 12:34, Jonathan M Davis wrote:

 Also, the system clock definitely does not reach hnsecs on any system that
I've seen.
Duration (unlike TickDuration) is not tied to system clock only, AFAIK (at least, this is not mentioned in the documentation). Using rdtsc it is possible to obtain even more precision than 1ns.
Actually, rdtsc is limited and useful mostly for performance measurements, and it has its fair share of caveats: http://msdn.microsoft.com/en-us/library/ee417693.aspx or, more practical: http://www.strchr.com/performance_measurements_with_rdtsc (also, there are processors with dynamic frequency...) And like you said, there are no APIs that use such precision.
    Also, Duration can be used to store intervals which are not really related
to system clock at all.

 Or it would require splitting up the seconds and sub-seconds which makes the
types take up
 more space and makes the math that much worse.
OK, though I don't think that 4 or 8 bytes more will really make any noticeable change (many Phobos/druntime structures are over-sized anyway, IMHO) - time values are not used in big arrays, usually. As to math - for most operations involving calendar computations only seconds are relevant, and math itself takes literally nanoseconds to execute - so only functions which actually require better precision may use more complex math.
The gain of using these fat integers is obscure so far. Unless, again, we are talking about small scale CPU performance measures.
    What could be done, though - is to hide the internal format (hnsecs), i.e.
do not expose it to functions like Thread.sleep() with ulong argument.
Right.
 /Alexander
-- Dmitry Olshansky
May 17 2011
"Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 17 May 2011 06:15:54 -0400, Alexander <aldem+dmars nk7.net> wrote:

 ...why it is in hnsec? I know that this resolution is used in Win32 API  
 (file time), but since TickDuration may be 1 ns resolution, wouldn't it  
 be better to make Duration to be stored with maximum (defined so far)  
 resolution?
If you use hnsecs, then you get a range of SysTime of -30k to 30k years. That might seem overkill, but consider that even going to dnsecs (10 nanoseconds) reduces your range to -3k to +3k years. The problem is that nobody is likely to care about that extra factor of 10 in resolution, but losing out on 27,000 years x 2 is pretty significant. It seems like a no-brainer to me.

Straight nanoseconds are not possible, because we couldn't even represent our current date with them.

Plus, we have good precedent: both Microsoft and Tango use that tick duration. It's a natural conclusion.
 Especially because Duration may not hold long intervals (> months) - so  
 there is no problem with overflow.
A Duration is the result of subtracting two SysTimes, which use hnsecs as their tick, so yeah, there is a problem with overflow if you use a finer resolution.
 Thread.sleep() accepts Duration (or hnsec) as an argument, while system  
 resolution is higher, and on some systems it is even possible that it  
 can sleep less than 100ns.
The minimum sleep time for a thread is one clock period. If your OS is context switching more than once per 100ns, your OS is going to be doing nothing but context switching. Processors just aren't fast enough to deal with that (and likely won't ever be). 100 ns is a reasonable resolution for that. Real time applications may require more precise timing, but you would likely need a separate API for that.
 SysTime is also kept in hnsecs, while resolution of system time (on  
 Linux at least) is 1ns. Sure, in case of SysTime it is all bound to  
 overflow, but it depends how value is stored - if we split seconds and  
 nanoseconds, it will be fine.
Again, the resolution of the *structure* may be nsecs, but the actual intervals you have access to are about every 4ms on Linux (see http://en.wikipedia.org/wiki/Jiffy_(time) ). If it makes you feel better to use higher resolution timing, the facilities are there; just use the C system calls.
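
For what the OS itself reports, here is a hedged sketch that queries the advertised resolution of the monotonic clock via clock_getres; the assumption is that druntime's POSIX bindings (core.sys.posix.time) expose timespec, clock_getres and CLOCK_MONOTONIC on Linux:

    version (linux)
    {
        import core.sys.posix.time : clock_getres, timespec, CLOCK_MONOTONIC;
        import std.stdio : writefln;

        void main()
        {
            timespec res;
            if (clock_getres(CLOCK_MONOTONIC, &res) == 0)
            {
                // This is the granularity the clock structure reports (often
                // 1 ns on modern kernels); how often a sleeping thread can
                // actually be woken up is a separate question (the jiffy).
                writefln("CLOCK_MONOTONIC resolution: %ss %sns", res.tv_sec, res.tv_nsec);
            }
        }
    }
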
 Additionally, when accepting long values as an argument for duration it  
 is more logically to use SI units :)
I agree that when accepting a long as an alternative to Duration, it makes sense to use a more normal tick resolution. The chances of someone wanting to have their process sleep for more than 300 years (e.g. for nanosecond resolution) are pretty small. This might be a worthwhile change.

I'm not sure how much code this might affect, though. It would be plenty disturbing if your code started sleeping for 100ms instead of the 10s you thought you requested. What might be a good path is to disable those functions that accept a long for a few releases, then re-instate them with a new meaning.

-Steve
May 17 2011
Daniel Gibson <metalcaedes gmail.com> writes:
Am 17.05.2011 15:25, schrieb Steven Schveighoffer:
 
 I agree that accepting a long as an alternative to Duration, it makes
 sense to use a more normal tick resolution.  The chances of someone
 wanting to have their process sleep for more than 300 years (e.g. for
 nanosecond resolution) is pretty small.  This might be a worthwhile change.
 
 I'm not sure how much code this might affect, though.  It would be
 plenty disturbing if your code started sleeping for 100ms instead of the
 10s you thought you requested.  What might be a good path is to disable
 those functions that accept a long for a few releases, then re-instate
 them with a new meaning.
 
Or just add nanoSleep() or something like that. Cheers, - Daniel
May 17 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 17 May 2011 09:42:27 -0400, Daniel Gibson <metalcaedes gmail.com>  
wrote:

 Am 17.05.2011 15:25, schrieb Steven Schveighoffer:
 I agree that accepting a long as an alternative to Duration, it makes
 sense to use a more normal tick resolution.  The chances of someone
 wanting to have their process sleep for more than 300 years (e.g. for
 nanosecond resolution) is pretty small.  This might be a worthwhile  
 change.

 I'm not sure how much code this might affect, though.  It would be
 plenty disturbing if your code started sleeping for 100ms instead of the
 10s you thought you requested.  What might be a good path is to disable
 those functions that accept a long for a few releases, then re-instate
 them with a new meaning.
Or just add nanoSleep() or something like that.
Probably a good idea, and deprecate Thread.sleep(long). It's more self-documenting. -Steve
May 17 2011
next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
 On Tue, 17 May 2011 09:42:27 -0400, Daniel Gibson <metalcaedes gmail.com>
 
 wrote:
 Am 17.05.2011 15:25, schrieb Steven Schveighoffer:
 I agree that accepting a long as an alternative to Duration, it makes
 sense to use a more normal tick resolution. The chances of someone
 wanting to have their process sleep for more than 300 years (e.g. for
 nanosecond resolution) is pretty small. This might be a worthwhile
 change.
 
 I'm not sure how much code this might affect, though. It would be
 plenty disturbing if your code started sleeping for 100ms instead of the
 10s you thought you requested. What might be a good path is to disable
 those functions that accept a long for a few releases, then re-instate
 them with a new meaning.
Or just add nanoSleep() or something like that.
Probably a good idea, and deprecate Thread.sleep(long). It's more self-documenting.
I very much support the idea of deprecating all of the functions in druntime and Phobos which take naked time values. And then, if a function that sleeps at nanosecond resolution is considered to be of real value (which I question), we can add that as something like nanoSleep.

I believe that at present, most - if not all - of the functions in druntime and Phobos which take a naked time value (aside from std.date) also take a Duration now, but I don't think that any of the old versions which take naked values are scheduled for deprecation. If there is a valid reason for wanting to keep any of them around, then we should consider it, but I'm definitely inclined to get rid of them in favor of the greater safety and maintainability of functions which take a Duration.

- Jonathan M Davis
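
No such function exists in druntime at this point; the following is a purely hypothetical sketch of what a nanoSleep for Linux could look like, layered over POSIX nanosleep (the name, the retry loop, and the module paths are assumptions):

    version (linux)
    {
        import core.stdc.errno : EINTR, errno;
        import core.sys.posix.time : nanosleep, timespec;

        /// Hypothetical helper: sleep for the given number of nanoseconds.
        /// Unlike Thread.sleep, the argument is nanoseconds, not hnsecs.
        void nanoSleep(long nsecs)
        {
            timespec req;
            req.tv_sec  = cast(typeof(req.tv_sec))(nsecs / 1_000_000_000L);
            req.tv_nsec = cast(typeof(req.tv_nsec))(nsecs % 1_000_000_000L);

            // nanosleep may return early if interrupted by a signal; retry
            // with the remaining time until the full interval has elapsed.
            timespec rem;
            while (nanosleep(&req, &rem) != 0 && errno == EINTR)
                req = rem;
        }

        void main()
        {
            nanoSleep(500);  // request a 500 ns sleep; the actual wake-up will be later
        }
    }
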
May 17 2011
Sean Kelly <sean invisibleduck.org> writes:
On May 17, 2011, at 11:06 AM, Jonathan M Davis wrote:
 I very much support the idea of deprecating all of the functions in druntime
 and phobos which take naked time values. And then if a function that sleeps at
 nanosecond resolution is considered to be of real value (which I question),
 then we can add that as something like nanoSleep.
That was the plan. I just wanted a few releases with the new Duration routines before deprecating the old ones. Now sounds like a good time.
May 17 2011
Alexander <aldem+dmars nk7.net> writes:
On 17.05.2011 15:25, Steven Schveighoffer wrote:

 if you use hnsecs, then you get a range of SysTime of -30k to 30k years.  That
might seem overkill, but consider that even going to dnsecs (10 nanoseconds)
reduces your range to -3k to +3k years.  The problem is that nobody is likely
to care about
 that extra 10 intervals, but losing out on 27,000 years x 2 is pretty
significant.  It seems like a no-brainer to me.
Well, if you put it this way, it seems reasonable.
 Plus, we have good precedence, both Microsoft and Tango use that tick
duration.  It's a natural conclusion.
Linux/Posix are using ns (clock_gettime(), nanosleep() etc - timespec) - I guess there is a reason for this.
 Again, the resolution of the *structure* may be nsecs, but the actual
intervals you have access to is about every 4ms on linux ( see
http://en.wikipedia.org/wiki/Jiffy_(time) ).
Not really. Take the difference of two consecutive get_clock() calls on Linux, and you will see that it is far below 1 µs (depends on the CPU, though). Not all systems use a timer interrupt for timekeeping.
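
A hedged sketch of that measurement, assuming get_clock() refers to the POSIX clock_gettime call (reached here through what are assumed to be druntime's core.sys.posix.time bindings on Linux):

    version (linux)
    {
        import core.sys.posix.time : clock_gettime, timespec, CLOCK_MONOTONIC;
        import std.stdio : writeln;

        void main()
        {
            timespec a, b;
            clock_gettime(CLOCK_MONOTONIC, &a);
            clock_gettime(CLOCK_MONOTONIC, &b);

            // Difference between two back-to-back reads of the monotonic
            // clock; on current hardware this is typically well under 1 µs.
            immutable deltaNsecs = (b.tv_sec - a.tv_sec) * 1_000_000_000L
                                 + (b.tv_nsec - a.tv_nsec);
            writeln("two consecutive reads differ by ", deltaNsecs, " ns");
        }
    }
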
 I agree that accepting a long as an alternative to Duration, it makes sense to
use a more normal tick resolution.  The chances of someone wanting to have
their process sleep for more than 300 years (e.g. for nanosecond resolution) is
pretty small. 
 This might be a worthwhile change.
Well, this is, actually, the whole reason for my post :) While 100ns resolution seems reasonable (after your explanations), accepting 100ns intervals as values directly does not seem like a really good idea.

/Alexander
May 17 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 17 May 2011 09:55:49 -0400, Alexander <aldem+dmars nk7.net> wrote:

 On 17.05.2011 15:25, Steven Schveighoffer wrote:

 Plus, we have good precedence, both Microsoft and Tango use that tick  
 duration.  It's a natural conclusion.
Linux/Posix are using ns (clock_gettime(), nanosleep() etc - timespec) - I guess there is a reason for this.
I just read the man page on time (section 7) and indeed, there is support for sub-jiffy timing. And Duration will support that notion quite well. If you need sub-hnsec timing, you can call these functions directly.

The structures have used nanoseconds for over 10 years (I think gettimeofday used it back in the 90s!), so the reason for using it was likely future compatibility (clearly nanosecond timing wasn't possible back then). It looks like the future is now, so it's good to have that resolution.

I still maintain that for things like sleeping, it is pointless to sleep less than 100ns, since it likely takes longer than that to do a context switch, and your sleep time is only guaranteed to be *greater* than what you requested. The only time this is useful is for very specific real-time applications. I could be wrong; there have obviously been improvements to the timing mechanisms of Linux since I last looked at them.

As for measuring time, yes, it would be good to use a higher precision timer. And in fact, std.datetime.StopWatch does just that.

http://www.digitalmars.com/d/2.0/phobos/std_datetime.html#StopWatch

The core.time.Duration type is for low-level timing facilities, such as waiting on a condition or sleeping a thread. It is also beneficial to re-use the structure in std.datetime for timing facilities that are generically useful for most cases. This allows one to fluently do date calculations and timing calculations and pass them directly to low-level facilities. There will always be those cases where the time resolution is not enough, and for those cases you will need to use a more specialized API rather than this coarser resolution timing. IMO dealing with times less than 1ms is pretty specialized.
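
For reference, a short sketch of the StopWatch usage pattern referred to above (based on the std.datetime.StopWatch documentation linked; the .msecs conversion on the returned TickDuration is assumed):

    import std.datetime : AutoStart, StopWatch;
    import std.stdio : writeln;

    void main()
    {
        // StopWatch reads the system's high-precision monotonic tick counter
        // (TickDuration) rather than counting hnsecs.
        auto sw = StopWatch(AutoStart.yes);

        long total;
        foreach (i; 0 .. 1_000_000)
            total += i;

        sw.stop();
        // peek() returns the elapsed time as a TickDuration.
        writeln("elapsed: ", sw.peek().msecs, " ms (total = ", total, ")");
    }
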
 Again, the resolution of the *structure* may be nsecs, but the actual  
 intervals you have access to is about every 4ms on linux ( see  
 http://en.wikipedia.org/wiki/Jiffy_(time) ).
Not really. Take a difference from two consecutive get_clock() calls on Linux, and you will see that it is far below 1 µs (depends on CPU, though). Not all systems use timer interrupt for timekeeping.
I stand corrected, I was not aware of these timing facilities.
 I agree that accepting a long as an alternative to Duration, it makes  
 sense to use a more normal tick resolution.  The chances of someone  
 wanting to have their process sleep for more than 300 years (e.g. for  
 nanosecond resolution) is pretty small.
 This might be a worthwhile change.
Well, this is, actually, the whole reason of my post :) While 100ns resolution seems reasonable (after your explanations), accepting 100ns intervals as values directly seems not really good idea.
Yes, hnsecs is not a typical concept one has to deal with. -Steve
May 17 2011
Alexander <aldem+dmars nk7.net> writes:
On 17.05.2011 16:45, Steven Schveighoffer wrote:

 The structures have used nanoseconds for over 10 years (I think gettimeofday
used it back in the 90s!), so the reason for using it was likely for future
compatibility (clearly nanosecond timing wasn't possible back then).  It looks
like the future is
 now, so it's good to have that resolution.
As to gettimeofday() - it is using timeval, which has 1µs resolution - still quite good for most applications.
 As for measuring time, yes, it would be good to use a higher precision timer. 
And in fact, std.datetime.StopWatch does just that.
Just in case - StopWatch is used in the benchmarking functions, but it measures wall-clock time, and this may produce incorrect results on busy systems when benchmarking CPU-intensive code.

/Alexander
May 17 2011
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
 On 17.05.2011 16:45, Steven Schveighoffer wrote:
 The structures have used nanoseconds for over 10 years (I think
 gettimeofday used it back in the 90s!), so the reason for using it was
 likely for future compatibility (clearly nanosecond timing wasn't
 possible back then). It looks like the future is now, so it's good to
 have that resolution.
As to gettimeofday() - it is using timeval, which has 1µs resolution - still quite good for most applications.
 As for measuring time, yes, it would be good to use a higher precision
 timer. And in fact, std.datetime.StopWatch does just that.
Just in case - StopWatch is used in benchmarking functions while measuring wall-clock time, and this may produce incorrect results on busy systems when benchmarking CPU-intensive code.
StopWatch uses a monotonic clock. - Jonathan M Davis
May 17 2011
Alexander <aldem+dmars nk7.net> writes:
On 17.05.2011 20:06, Jonathan M Davis wrote:

 StopWatch uses a monotonic clock.
A monotonic clock is not CPU-usage-bound - it is in sync with wall time, so the problem on busy systems remains.

/Alexander
May 17 2011
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
 On 17.05.2011 20:06, Jonathan M Davis wrote:
 StopWatch uses a monotonic clock.
Monotonic clock is not CPU-usage-bound - it is in sync with wall-time, so problem on busy systems remains.
A monotonic clock is as good as you're going to get for accurate stopwatch functionality. The system cannot possibly do any better than that. Context switching can always get in the way. Increasing precision doesn't help that. And since StopWatch uses TickDuration, it has the highest precision that the system has anyway (whatever its clock tick is) rather than hnsecs.

I don't understand what your concern is. StopWatch is using the highest precision, most accurate clock that it can, and there's no way to stop issues with context switching. What more do you expect it to do? You're talking like there's something wrong with std.datetime's benchmarking functionality, and as far as I know, it has the best that is possible as far as timing accuracy goes.

- Jonathan M Davis
May 17 2011
Alexander <aldem+dmars nk7.net> writes:
On 18.05.2011 01:18, Jonathan M Davis wrote:

 A monotonic clock is as good as you're going to get for accurate stopwatch 
 functionality. The system cannot possibly do any better than that. Context 
 switching can always get in the way. Increasing precision doesn't help that. 
Probably, you have misunderstood me - I wasn't talking about precision. I was talking about the use of a real-time clock for benchmarking, which may give incorrect results when measuring the performance of CPU-bound tasks.

Say you have to benchmark something that is heavily using the CPU, and this takes 10s. But when the system is doing something else, the real-time clock may differ significantly from the CPU clock. So, in my example, 10s of CPU-intensive work may take 20s of real time if another CPU-bound task is running at the same time; thus, the benchmark results will be incorrect.

/Alexander
May 18 2011
Jonathan M Davis <jmdavisProg gmx.com> writes:
On 2011-05-18 02:13, Alexander wrote:
 On 18.05.2011 01:18, Jonathan M Davis wrote:
 A monotonic clock is as good as you're going to get for accurate
 stopwatch functionality. The system cannot possibly do any better than
 that. Context switching can always get in the way. Increasing precision
 doesn't help that.
Probably, you have misunderstood me - I wasn't talking about precision. I was talking about the usage of real-time clock for benchmarking, which may be incorrect when you use real-time clock for measuring performance of CPU-bound tasks. Say, you have to benchmark something that heavily is using CPU - and this takes 10s. But when the system is doing something else - the real-time clock may differ significantly to CPU-clock. So, in my example, 10s of CPU-intensive work may take 20s of real-time, if another CPU-bound task is running at the same time, thus, benchmark results will be incorrect.
To my knowledge, using the system's monotonic clock is the absolute best that you're going to get for stuff like benchmarking. Are you suggesting that there is an alternative which is better? As far as I know, the issues of context switching and whatnot are just a part of life, and you can't do anything about them except restrict what you're running on your computer when you run benchmarks.

- Jonathan M Davis
May 18 2011
Alexander <aldem+dmars nk7.net> writes:
On 18.05.2011 12:12, Jonathan M Davis wrote:

 To my knowledge, using the system's monotonic clock is the absolute best that
you're going to get for stuff like benchmarking.
It depends. If the system is not busy, then it doesn't really matter which clock to use, especially if you take an average over several runs. If you benchmark I/O performance, then you need a real-time clock and an idle system - no way around it. But if you benchmark CPU performance, then you have to use a CPU clock - i.e. a clock which increases only when the application is actually using the CPU.
 Are you suggesting that there is an alternative which is better?
Sure, there is - clock_gettime with CLOCK_THREAD_CPUTIME_ID or CLOCK_PROCESS_CPUTIME_ID as the clock id. This way, you will get exactly the amount of time that was spent while the specific thread was using the CPU.
 As far as I know, the issues of context  switching and whatnot are just a part
of life and that you can't do anything 
 about them except restrict what you're running on your computer when you run
benchmarks.
You can do this the way I've described (CLOCK_THREAD_CPUTIME_ID or CLOCK_PROCESS_CPUTIME_ID), or by using the simple clock() function (less precise, though). It just doesn't work well for measuring I/O performance, as the CPU is mostly waiting, so CPU time is useless.

I would propose an extension of StopWatch - an option to specify which clock to use, CPU or real-time (monotonic is real-time).

/Alexander
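
A hedged sketch of the CPU-clock measurement proposed here, using clock_gettime with CLOCK_PROCESS_CPUTIME_ID on Linux; the clock-id constant is declared locally with its value from the Linux headers in case the druntime bindings of the day don't expose it by name:

    version (linux)
    {
        import core.sys.posix.time : clock_gettime, timespec;
        import std.stdio : writefln;

        // Value from the Linux headers; CLOCK_THREAD_CPUTIME_ID would be 3.
        enum CLOCK_PROCESS_CPUTIME_ID = 2;

        /// CPU time consumed by this process so far, in seconds.
        double processCpuSeconds()
        {
            timespec ts;
            clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts);
            return ts.tv_sec + ts.tv_nsec / 1e9;
        }

        void main()
        {
            immutable before = processCpuSeconds();

            // CPU-bound work; its wall-clock time can be much larger than its
            // CPU time if other processes are competing for the core.
            double x = 0;
            foreach (i; 1 .. 10_000_000)
                x += 1.0 / i;

            writefln("CPU time used: %.6f s (x = %s)", processCpuSeconds() - before, x);
        }
    }
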
May 18 2011
Jonathan M Davis <jmdavisProg gmx.com> writes:
On 2011-05-17 12:34, Sean Kelly wrote:
 On May 17, 2011, at 11:06 AM, Jonathan M Davis wrote:
 I very much support the idea of deprecating all of the functions in
 druntime and phobos which take naked time values. And then if a function
 that sleeps at nanosecond resolution is considered to be of real value
 (which I question), then we can add that as something like nanoSleep.
That was the plan. I just wanted a few releases with the new Duration routines before deprecating the old ones. Now sounds like a good time.
As I understand it, deprecation is supposed to be a 3-stage process:

1. Mark the item to be deprecated as scheduled to be deprecated in the documentation, and if possible, give it a pragma which says it as well (though that generally involves turning the function in question into a template function, if it isn't already, so that the pragma will only kick in if the function is actually used).

2. Mark it with deprecated so that -d is required to use it.

3. Fully remove it from the code.

The functions which take naked time values haven't been scheduled for deprecation yet, so for the most part, people have likely kept on using them as they have been. So, they should probably be marked as scheduled for deprecation for at least a release before they're actually deprecated.

Unfortunately, we haven't actually decided on how long each phase of deprecation is supposed to be. It's been brought up a time or two, but no decision has ever been made. I keep intending to bring it up again, since the last couple of releases have seen several items enter phase 1, but I keep forgetting.

- Jonathan M Davis
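
A rough sketch of what phases 1 and 2 look like in code; the function name and message are made up, and the template trick is simply the approach described above for making the pragma fire only when the function is actually used:

    // Phase 1: still compiles and runs, but notifies anyone who actually
    // uses it. Making the function a template means the pragma(msg) is only
    // emitted when it gets instantiated, i.e. when someone calls it.
    void sleepHnsecs()(long period)
    {
        pragma(msg, "Notice: sleepHnsecs(long) is scheduled for deprecation. "
                    ~ "Use the Duration-based overload instead.");
        // ... original implementation ...
    }

    // Phase 2 (shown under a different name only so both phases can sit in
    // one file): marking it deprecated means callers need -d to compile.
    deprecated void sleepHnsecsPhase2(long period)
    {
        // ... original implementation ...
    }

    // Phase 3 is simply deleting the function from the module.
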
May 17 2011