Threads:
Code: Select all
A = 4 quanta, priority 2
B = 2 quanta, priority 1
C = 2 quanta, priority 1
Algorithm 1:
Code: Select all
A A A A B C B C
Algorithm 2:
Code: Select all
A B A C A B A C
kenna wrote:Which of these two algorithms would be more efficient? Or would it simply be a matter of taste?

The first algorithm would be more efficient (fewer context switches), although I'd use "A A A A B B C C". This is what I call "variable time slice scheduling", where every thread gets one time slice each pass and a thread's priority determines the length of its time slice. The algorithm used in the scheduler for this can easily be O(1).
Brendan wrote:To implement this I use 259 linked lists. The first 256 linked lists are used for Policy 0, with a separate linked list for each thread priority (note: if there are 2 or more Policy 0 threads with the same priority, they take turns in a "round robin" way). The 257th linked list is for all Policy 1 threads, the 258th linked list is for all Policy 2 threads, and the 259th linked list is for all Policy 3 threads.

Holy crap, that's a lot of linked lists! How big is each list entry? Just the PID (16- or 32-bit, I would presume, having not looked at your code) and a "next" field, or more info than that?
ByrdKernel wrote:Holy crap, that's a lot of linked lists! How big is each list entry? Just the PID (16- or 32-bit, I would presume, having not looked at your code) and a "next" field, or more info than that?

For my previous kernel each list entry was 2 bytes (the thread ID of the next entry on the list, or 0xFFFF if it was the last entry), but 2 bytes per thread is nothing when each thread needs a 2 KB or larger kernel stack, a 512 byte area to save FPU/MMX/SSE state, and other information.
ByrdKernel wrote:How much performance do you gain from doing it that way?

How much performance do I gain compared to what?
ByrdKernel wrote:I can see the obvious gains in flexibility, but it seems like you'd have to have an awful lot of threads competing for an awful little CPU time, and gaining gobs of it back just by using your method, before all those lists (and the memory they take up, and the CPU time it takes to walk them) suck the performance right out of it.

When the scheduler is looking for the thread to run next, it takes the first thread on the highest priority queue/linked list and removes that thread from the list. When the thread stops running, it's placed back on the end of the linked list. This is all O(1) - you don't need to walk the linked lists.
ByrdKernel wrote:BTW, long time no see, dude! Wassup?

Hehe - same thing (another rewrite in the pursuit of perfection)...