Separate Stack Segment in Protected mode?

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
rdos
Member
Posts: 3297
Joined: Wed Oct 01, 2008 1:55 pm

Re: Separate Stack Segment in Protected mode?

Post by rdos »

16bitPM wrote:
rdos wrote: Hardware task-switching requires these to run in a single step, which is a bad idea.
Can you expand on that?
There are two versions of hardware task switching:

1. jmp tss. This will save the registers of the current task in the current TSS, mark the current TSS descriptor as available, load the registers of the new task and mark its descriptor as busy.

2. call tss or interrupt through a task gate. This does the same as jmp tss, but also sets the NT flag and stores a backlink to the old TSS in the new one. When an iret occurs with NT set, the processor loads the backlink from the TSS, does the "jmp tss" steps in reverse, and clears the NT flag and the backlink.

This means that the scheduler must decide which task to run next in the context of the current task, so it will likely need to save several registers on the stack and restore them before the task switch. The jmp tss instruction also needs to use a register, so the saved registers in the TSS will not truly reflect the task's registers, since at least one register must live on the stack. Another problem is that TSS segments cannot be loaded into segment registers, so alias data segments must be set up in order to read or modify the registers of a thread.

With software task switching, the registers of the current task can be saved directly in the TSS, a per-core stack can be set up, and the scheduler can then decide which task to run next, set the TR register and load the registers from the new task's TSS.

Task termination poses particular problems with hardware task switching: in this case the registers should not be saved at all. With software task switching the scheduler can simply switch directly to the per-core stack instead of saving registers.

Most importantly, hardware task switching modifies TSS descriptor types, which is highly problematic in multicore systems as this causes race conditions.

Still, hardware task switching is the best option for handling double faults.

Some people believe you don't need the TSS at all, and that registers can be saved and restored from the kernel stack. This is not true: the TSS still needs to contain the kernel stack entry point and the IO permission bitmap. It's a lot better to have one TSS per task, and to set each of these up with its own kernel stack and IO permission bitmap. Particularly when supporting V86 mode, there is a need to trap IO accesses, and it might be useful to allow user applications to directly access some IO ports. These mechanisms require different IO permission bitmaps for different tasks.

Since the IO permission bitmap is located through a pointer in the TSS, it's possible to put the thread control area at the end of the TSS and to alias it to the TSS. This means that each thread needs two different selectors, one for the TSS and one for thread control, but since they map the same memory area the TSS can easily be modified. The registers can then be saved and restored at the proper positions in the TSS, or in the thread control area.

I use the fact that the registers are in the thread control area in my kernel debugger. When a thread faults, executes int 3 or gets a single-step interrupt, the registers are saved by the scheduler and the task is inserted into the debugged list. The kernel debugger can then show the register context, let the user modify registers or memory, and single-step or restart the task. This would have been overly complex if thread registers were saved on the stack. This is also integrated with the application debugger, so the register context is still saved in the thread control area even when user-mode code is debugged. It also lets the application debugger trace into kernel space.