jnc100 wrote:
Okay, so two issues here. For task switching, if I understand you correctly (please let me know if I don't) then 'standard' task switching involves setting the timer (after reprogramming it to fire at a certain time, keeping track of actual seconds yourself etc) and therefore performing a task switch at certain intervals (at the end of a timeslice). My method is to generate task switch instructions in the code we compile a certain number of instructions apart. Given that all the basic instructions execute in a comparable (short) amount of time, it is possible to approximately place these instructions a certain number of milliseconds apart. This really wouldn't require any more overhead than responding to interrupts.
Wrong.
Consider a simple function like memset():
Code:
void memset(void * p, int c, unsigned long n) {
    unsigned long i;
    char * b = p;
    for (i = 0; i < n; i++) {
        b[i] = (char) c;
    }
}
The amount of work done per iteration is very small, but the number of iterations could be large. It could take a long time. So you have to place a test within the loop body (leading to overhead), or unroll the loop a few times, and then put a test within the unrolled loop body (leading to less direct overhead, but using more memory and therefore increasing the cache footprint).
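To make the trade-off concrete, here is a minimal sketch of what the compiler would have to emit: the loop above, unrolled four times, with one switch test per unrolled body. The names `need_resched` and `yield()` are made up for illustration (in a real system the flag would be set by the timer/scheduler and `yield()` would perform the actual task switch), and the function is renamed `my_memset` to avoid clashing with the standard library:

```c
#include <stddef.h>

/* Hypothetical names for illustration: a real compiled-in scheme
 * would have the scheduler set this flag and yield() switch tasks. */
static volatile int need_resched = 0;
static void yield(void) { need_resched = 0; /* task switch would go here */ }

void my_memset(void *p, int c, unsigned long n) {
    char *b = p;
    unsigned long i = 0;

    /* Unrolled 4x: one test per four stores instead of one per store,
     * trading code size (cache footprint) for less direct overhead. */
    while (n - i >= 4) {
        b[i]     = (char) c;
        b[i + 1] = (char) c;
        b[i + 2] = (char) c;
        b[i + 3] = (char) c;
        i += 4;
        if (need_resched)   /* the polling test you cannot avoid */
            yield();
    }
    while (i < n)           /* remaining 0..3 bytes */
        b[i++] = (char) c;
}
```

Even with the unrolling, the test is still executed once per four bytes written; with interrupts, the same loop carries zero per-iteration cost.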
Of course the example is trivial, but it demonstrates the point. In any Turing-complete (= general purpose) language, you can't even always prove that a given piece of code terminates (or that it fails to terminate, for that matter). So you need the tests, and the overhead.
Of course you only need one test in the loop body (unrolled or not), and this is why I said that, in general, you need one test per basic block (a basic block is a straight-line segment of code with no jumps in or out, except at its entry and exit).
No matter how fast you make the tests, you lose, because interrupts are completely free as long as no interrupts occur. So I repeat my point: polling is bad, interrupts are good. That's why processors have interrupts in the first place.
To be honest, I hadn't really thought about a sleep() function.
Consider: you can read the wall-clock time from the RTC. It gives you an accuracy of one second (and reading it has overhead, but that's not the point). Are you sure you never need to know time (mostly relative time between events) more accurately than to the second? There may or may not be other means of getting more accurate time, but the PIT is always available, and can easily give you around 10 ms accuracy, on modern computers even better.
How about power saving, and CPU heating? Are you sure you want to run at 100% CPU use simply because the user told the computer to play an MP3? That's less than 1% of the work a modern computer can do. Of course, if you want to avoid all interrupts, you also need the user interface to poll whether the user moved the mouse or pressed a key, or whether it's time to feed the sound card the next block of audio. So you have to run at 100% CPU use all the time anyway. Laptop batteries last a few hours because most of the time the CPU is sleeping. Many overclocked and badly cooled computers will crash if you don't let them cool down by sleeping at times. Even the rest will waste power unnecessarily. And when you HLT the CPU in order to cool down or save power, the only way to wake it up is with an interrupt.
Point being: you need to deal with interrupts anyway, and if you are going to deal with them, you might as well generate normal code and let interrupts take care of multi-tasking, saving you the overhead of polling.
The real problem with goto is not with the control transfer, but with environments. Properly tail-recursive closures get both right.