setitimer() inaccurate time interval

Programming, for all ages and all languages.
kenneth_phough
Member
Posts: 106
Joined: Sun Sep 18, 2005 11:00 pm
Location: Williamstown, MA; Worcester, MA; Yokohama, Japan

setitimer() inaccurate time interval

Post by kenneth_phough »

I am writing a program that uses SIGALRM, but when I set the timer interval to 1000 usec, it gets rounded up to 10000 usec, which is 10 ms. That by itself is fine for me; I just adjust the tick count for the execution time to 6000 for one minute (the system rounds up to 10 ms and delivers a signal every 10 ms, so 6000 signals x 10 ms = 60000 ms = 60 seconds). But when I time the run with my stopwatch, it goes over one minute "big time"!
What can I do to solve this problem?
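
Roughly what I'm doing looks like this (a simplified sketch, not my exact code; the handler and variable names are made up for illustration):

#include <signal.h>
#include <unistd.h>
#include <sys/time.h>

static volatile sig_atomic_t ticks = 0;

static void on_alarm(int sig)
{
    (void)sig;
    ticks++;                      /* count SIGALRM deliveries */
}

int main(void)
{
    struct sigaction sa = {0};
    struct itimerval it;

    sa.sa_handler = on_alarm;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    it.it_value.tv_sec = 0;
    it.it_value.tv_usec = 1000;   /* requested 1 ms, rounded up to 10 ms */
    it.it_interval = it.it_value; /* periodic timer, same interval */
    setitimer(ITIMER_REAL, &it, NULL);

    while (ticks < 6000)          /* 6000 ticks * 10 ms should be 1 minute */
        pause();                  /* ...but my stopwatch says otherwise */

    return 0;
}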

Thanks,
Kenneth
mystran
Member
Posts: 670
Joined: Thu Mar 08, 2007 11:08 am

Post by mystran »

The normal rule of timers is that they never wake you up too early. If they wake you up too late, it's your problem, unless you are working under an RTOS which promised you some specific response time and resolution.

It happens that Linux (I guess you're using Linux; similar considerations apply to other operating systems) programs the PIT (programmable interval timer) to fire at roughly a 10ms interval, which means the kernel gets a timer interrupt once per 10ms. That in turn means it only knows the exact time once per 10ms, namely when the interrupt occurs.
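
If you want to see what tick rate your system exposes to userspace, you can ask sysconf() (a minimal sketch; note that _SC_CLK_TCK is the userspace clock tick rate, which usually, but not necessarily, matches the kernel's timer interrupt rate):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long hz = sysconf(_SC_CLK_TCK);   /* clock ticks per second */
    printf("ticks per second: %ld (one tick = %.1f ms)\n",
           hz, 1000.0 / (double)hz);
    return 0;
}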

So (assuming there isn't a more accurate hardware timer available; don't rely on the rounding always being 10ms) the best the system can do is take your request of 1ms, round it up to the timer interval (the "never too early" rule), then wait for one interrupt (to get a known point in time to count from, so it can honor that rule) and then another interrupt (to know the time has elapsed).

So you could wait anything from 10ms (the first interrupt arrives just as you make the request) to 20ms (the previous interrupt fired just before you made the request). But that is of course not the whole story: you could actually wait anything from 10ms upwards, because nothing guarantees that you actually get scheduled when your timer expires.

Now, since you get a SIGALRM when the timer expires, you'd think that if you don't get scheduled, you'd just get several signals at once if you had a periodic timer. But it's important to remember that normal Unix signals are not queued. There's exactly one bit per signal to keep track of whether there's a pending signal or not. If the signal is already pending when another arrives, it'll just get lost.
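
You can actually watch the coalescing happen with a contrived sketch like this one, which blocks SIGALRM for a second to simulate not being scheduled (the setup is mine, purely for illustration):

#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <sys/time.h>

static volatile sig_atomic_t ticks = 0;

static void on_alarm(int sig) { (void)sig; ticks++; }

int main(void)
{
    struct sigaction sa = {0};
    struct itimerval it = { {0, 10000}, {0, 10000} }; /* 10 ms period */
    struct timespec ts = { 1, 0 };                    /* 1 second */
    sigset_t block;

    sa.sa_handler = on_alarm;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);
    setitimer(ITIMER_REAL, &it, NULL);

    sigemptyset(&block);
    sigaddset(&block, SIGALRM);
    sigprocmask(SIG_BLOCK, &block, NULL);   /* pretend we can't run */
    nanosleep(&ts, NULL);                   /* ~100 expirations happen here */
    sigprocmask(SIG_UNBLOCK, &block, NULL); /* only ONE pending SIGALRM delivered */

    printf("ticks: %d (about 100 expirations occurred)\n", (int)ticks);
    return 0;
}

Instead of the roughly 100 ticks that elapsed, you should see just one, because all the expirations that happened while the signal was blocked collapsed into a single pending SIGALRM.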

So the end result is that you can't keep track of time by counting SIGALRMs. You can use SIGALRM to wake you up after at least the specified amount of time has elapsed (and often much more), but that's all. To keep track of real time, use gettimeofday(2) if you need sub-second resolution, or time(2) if whole seconds are fine.
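
For example, a minimal sketch of timing a run with gettimeofday():

#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, now;
    double elapsed;

    gettimeofday(&start, NULL);

    sleep(60);   /* stand-in for the real work */

    gettimeofday(&now, NULL);
    elapsed = (double)(now.tv_sec - start.tv_sec)
            + (now.tv_usec - start.tv_usec) / 1e6;
    printf("elapsed: %.6f s\n", elapsed);
    return 0;
}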
The real problem with goto is not with the control transfer, but with environments. Properly tail-recursive closures get both right.
kenneth_phough
Member
Posts: 106
Joined: Sun Sep 18, 2005 11:00 pm
Location: Williamstown, MA; Worcester, MA; Yokohama, Japan

Post by kenneth_phough »

Thanks! I think I'll try gettimeofday() and see how well my program does.

Yours,
Kenneth