
digitalmars.D.learn - Weird timing issue with Thread.sleep

reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
Take a look at this:

import std.stdio;
import core.thread;

void main()
{
    foreach (x; 0 .. 1000)
    {
        Thread.sleep(dur!("usecs")(999));
        writeln(x);
    }

    foreach (x; 0 .. 1000)
    {
        Thread.sleep(dur!("usecs")(1000));
        writeln(x);
    }
}

Compile and run it. The first foreach loop ends in an instant, while
the second one takes much, much longer to finish, which is puzzling
since I've only increased the sleep time by a single microsecond.
What's going on?
Aug 03 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 03 Aug 2011 13:14:50 -0400, Andrej Mitrovic  
<andrej.mitrovich gmail.com> wrote:

 Take a look at this:

 import std.stdio;
 import core.thread;

 void main()
 {
     foreach (x; 0 .. 1000)
     {
         Thread.sleep(dur!("usecs")(999));
         writeln(x);
     }

     foreach (x; 0 .. 1000)
     {
         Thread.sleep(dur!("usecs")(1000));
         writeln(x);
     }
 }

 Compile and run it. The first foreach loop ends in an instant, while
 the second one takes much, much longer to finish, which is puzzling
 since I've only increased the sleep time by a single microsecond.
 What's going on?
I can only imagine that the cause is that the implementation is using an OS function that only supports millisecond sleep resolution, so essentially it's like sleeping for 0 or 1 millisecond. However, without knowing your OS, it's hard to say what's going on. On my Linux install, the timing seems equivalent.

-Steve
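As a rough illustration (this sketch is not from the original post, and it assumes a druntime recent enough to provide core.time.MonoTime), one can measure how long Thread.sleep actually blocks:

    import std.stdio;
    import core.thread;
    import core.time;

    void main()
    {
        // request a few durations around the suspected 1 ms granularity
        foreach (request; [dur!("usecs")(999), dur!("usecs")(1000), dur!("msecs")(5)])
        {
            auto start = MonoTime.currTime;
            Thread.sleep(request);
            auto elapsed = MonoTime.currTime - start;
            writefln("requested %s, actually slept %s", request, elapsed);
        }
    }

On a platform whose sleep primitive only has millisecond granularity, the 999-microsecond request should come back either rounded to a whole millisecond or close to zero.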
Aug 03 2011
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
That could be the reason. I'm testing on Windows.

I was using sleep() as a quick hack to slow down the framerate
of an OpenGL display. There are better ways to do this, but I haven't
had time to find a proper solution yet.
Aug 03 2011
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-08-03 19:42, Andrej Mitrovic wrote:
 That could be the reason. I'm testing on Windows.

 I was using sleep() as a quick hack around slowing down the framerate
 of an OpenGL display. There are better way to do this but I didn't
 have time to find a proper solution yet.
Why would you want to slow down the framerate?

-- 
/Jacob Carlborg
Aug 03 2011
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 8/3/11, Jacob Carlborg <doob me.com> wrote:
 Why would you want to slow down framerate?
Because the examples were written in the 90s, and CPUs and graphics cards are so fast these days that the old code runs at an enormous framerate.

Anyway, after a bit of googling I've found a solution:

    enum float FPS = 60.0;

    auto t_prev = Clock.currSystemTick();
    while (!done)
    {
        auto t = Clock.currSystemTick();
        if ((t - t_prev).usecs > (1_000_000.0 / FPS))
        {
            t_prev = t;
            DrawGLScene();
        }

        SwapBuffers(hDC);
    }

I can also use currAppTick(), which is similar.

I'm using "enum float" instead of just "enum FPS" because integer truncation bugs creep into my code all the time, i.e. I end up having an expression like "var1 / var2" evaluate to an integer instead of a float because a variable was declared as an integer. Here's what I mean:

    enum FPS = 60;

    void main()
    {
        auto fraction = (1 / FPS);  // whoops, actually evaluates to 0
    }

Using "enum float FPS = 60;" fixes this. It's a very subtle thing and easy to introduce as a bug.
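A minimal sketch of the difference (the names FPS_int and FPS_float are made up for illustration):

    import std.stdio;

    enum FPS_int = 60;          // inferred as int, so 1 / FPS_int is integer division
    enum float FPS_float = 60;  // explicitly float, so 1 / FPS_float is floating-point division

    void main()
    {
        writeln(1 / FPS_int);    // prints 0
        writeln(1 / FPS_float);  // prints roughly 0.0166667
    }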
Aug 03 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-08-03 20:36, Andrej Mitrovic wrote:
 On 8/3/11, Jacob Carlborg<doob me.com>  wrote:
 Why would you want to slow down framerate?
 Because the examples were written in the 90s, and CPUs and graphics cards are so fast these days that the old code runs at an enormous framerate.
I would say that the correct solution is to rewrite the examples to work with any CPU speed. But as you say, they're examples, so it may not be worth it.

-- 
/Jacob Carlborg
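One common way to make drawing code independent of CPU speed is to scale the animation by the elapsed frame time instead of limiting the framerate; a minimal sketch (rotationSpeed, angle, and update are made-up names, not taken from the examples being discussed):

    import core.time;

    float angle = 0;                  // current rotation of the scene, in degrees
    enum float rotationSpeed = 90.0;  // degrees per second, independent of framerate

    void update(Duration frameTime)
    {
        // advance by speed * elapsed seconds, so fast and slow machines animate alike
        angle += rotationSpeed * (frameTime.total!("usecs") / 1_000_000.0);
    }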
Aug 03 2011
parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 8/4/11, Jacob Carlborg <doob me.com> wrote:
 I would say that the correct solution is to rewrite the examples to work
 with any CPU speed.

 --
 /Jacob Carlborg
That's what I did. The framerate isn't clamped, the threads don't sleep, and there's no spinning going on; I've replaced all of that with timers. The old code used spinning in some examples, which of course maxes out an entire core. That's not how things should be done these days. :)
Aug 04 2011
prev sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 8/3/11, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:
     if ((t - t_prev).usecs > (1_000_000.0 / FPS))
     {
         t_prev = t;
         DrawGLScene();
     }

     SwapBuffers(hDC);
My mistake here: SwapBuffers belongs inside the if body. There's an unrelated keyboard bug that made me move it out there, but I've found what's causing it. Anyway, this is off-topic.
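For reference, the corrected loop would look like this (same names as the earlier fragment), with SwapBuffers moved inside the if body:

    while (!done)
    {
        auto t = Clock.currSystemTick();
        if ((t - t_prev).usecs > (1_000_000.0 / FPS))
        {
            t_prev = t;
            DrawGLScene();
            SwapBuffers(hDC);  // swap only when a new frame was actually drawn
        }
    }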
Aug 03 2011
prev sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 03 Aug 2011 13:42:34 -0400, Andrej Mitrovic  
<andrej.mitrovich gmail.com> wrote:

 That could be the reason. I'm testing on Windows.
Windows only supports millisecond resolution. A valid solution to this is probably to have anything > 0 and < 1 ms sleep for at least 1 ms. Or maybe it can round up to the next ms. For now, you can simply sleep for 1 ms.

-Steve
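A minimal sketch of that workaround (sleepAtLeastOneMs is a made-up helper, not something provided by druntime):

    import core.thread;
    import core.time;

    // Never ask the OS for less than 1 ms, so sub-millisecond requests are
    // rounded up instead of effectively becoming a zero-length sleep.
    void sleepAtLeastOneMs(Duration requested)
    {
        auto minimum = dur!("msecs")(1);
        Thread.sleep(requested < minimum ? minimum : requested);
    }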
Aug 03 2011
prev sibling parent "Marco Leise" <Marco.Leise gmx.de> writes:
On 03.08.2011, 19:21, Steven Schveighoffer  
<schveiguy yahoo.com> wrote:

 On Wed, 03 Aug 2011 13:14:50 -0400, Andrej Mitrovic  
 <andrej.mitrovich gmail.com> wrote:

 Take a look at this:

 import std.stdio;
 import core.thread;

 void main()
 {
     foreach (x; 0 .. 1000)
     {
         Thread.sleep(dur!("usecs")(999));
         writeln(x);
     }

     foreach (x; 0 .. 1000)
     {
         Thread.sleep(dur!("usecs")(1000));
         writeln(x);
     }
 }

 Compile and run it. The first foreach loop ends in an instant, while
 the second one takes much, much longer to finish, which is puzzling
 since I've only increased the sleep time by a single microsecond.
 What's going on?
I can only imagine that the cause is that the implementation is using an OS function that only supports millisecond sleep resolution, so essentially it's like sleeping for 0 or 1 millisecond. However, without knowing your OS, it's hard to say what's going on. On my Linux install, the timing seems equivalent.

-Steve
I would have guessed it comes down to time slices. If the scheduler works at a rate of 1000 Hz, you get 1 ms delays; if it works at 250 Hz, you get 4 ms. Going down to an arbitrarily small sleep interval may be unfeasible. It's just an idea, though; I haven't actually looked it up.
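As a rough back-of-the-envelope illustration of that idea (the tick rates are just examples):

    import std.stdio;

    void main()
    {
        // smallest sleep granularity if the scheduler switches tasks at a fixed rate
        foreach (hz; [1000, 250, 100])
            writefln("%4d Hz scheduler -> about %s ms per time slice", hz, 1000.0 / hz);
    }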
Aug 14 2011