Interrupt gates vs. trap gates (interrupts disabled/enabled)


Interrupt gates vs. trap gates (interrupts disabled/enabled)

Post by rod »

Right now I can think of several different approaches to a kernel's interrupt behaviour when entering kernel mode because of an interrupt:

- Having interrupts disabled (interrupt gates).
- Having interrupts enabled (trap gates).
- Having interrupts enabled only for syscalls.
- Starting with interrupts disabled and enabling them in certain specific places.
- Starting with interrupts enabled and disabling them in certain specific places.
- Other combinations of the above.

Which of these approaches is the most common/recommended/flexible?
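For reference, the only structural difference between the first two options on x86_64 is the 4-bit gate-type field of the IDT descriptor: 0xE makes the CPU clear IF on entry (interrupt gate), while 0xF leaves IF untouched (trap gate). A minimal sketch of what I mean (the set_gate helper is just an illustration, not code from my kernel):

Code: Select all

#include <stdint.h>

struct idt_entry {
    uint16_t offset_low;
    uint16_t selector;    /* kernel code segment selector */
    uint8_t  ist;         /* bits 0-2: Interrupt Stack Table index */
    uint8_t  type_attr;   /* P | DPL | 0 | gate type */
    uint16_t offset_mid;
    uint32_t offset_high;
    uint32_t reserved;
} __attribute__((packed));

#define GATE_INTERRUPT 0xE  /* CPU clears IF on entry */
#define GATE_TRAP      0xF  /* CPU leaves IF unchanged */

static void set_gate(struct idt_entry *e, void (*handler)(void),
                     uint16_t cs, uint8_t type)
{
    uint64_t addr = (uint64_t)handler;
    e->offset_low  = addr & 0xFFFF;
    e->selector    = cs;
    e->ist         = 0;
    e->type_attr   = 0x80 | type;   /* present, DPL 0 */
    e->offset_mid  = (addr >> 16) & 0xFFFF;
    e->offset_high = (uint32_t)(addr >> 32);
    e->reserved    = 0;
}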

I came to this while thinking about implementing TLB shootdowns. I realised a limitation of my kernel (microkernel, x86_64, SMP): it currently keeps interrupts disabled in kernel mode at all times (interrupt gates instead of trap gates), so it would not receive IPIs in kernel mode. In fact it would not receive any interrupt there (timers, keyboard, ...), but kernel processing is fast at the moment, and it seemed sufficient to receive them once it switched back to user mode.

I wonder whether it is possible to develop a kernel that always keeps interrupts disabled in kernel mode, or whether it is generally necessary to enable them once sufficiently advanced features, like TLB shootdowns, are implemented.
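To make the shootdown concern concrete, here is roughly the flow I have in mind (every name is invented, and the real thing needs per-CPU state and EOI handling). The initiator spins until every other core has taken the IPI, which can never happen if those cores sit in kernel mode with IF=0:

Code: Select all

#include <stdatomic.h>
#include <stdint.h>

extern int ncpus;
extern int this_cpu(void);                          /* invented */
extern void apic_send_ipi(int cpu, uint8_t vector); /* invented */

#define TLB_SHOOTDOWN_VECTOR 0xF0

static void *shoot_va;
static atomic_int pending;

/* initiator: ask every other core to invalidate one page */
void tlb_shootdown(void *va)
{
    shoot_va = va;
    atomic_store(&pending, ncpus - 1);
    for (int c = 0; c < ncpus; c++)
        if (c != this_cpu())
            apic_send_ipi(c, TLB_SHOOTDOWN_VECTOR);
    /* deadlocks if a target sits in kernel mode with IF=0 forever */
    while (atomic_load(&pending) > 0)
        ;
    asm volatile("invlpg (%0)" :: "r"(va) : "memory");
}

/* IPI handler on each target core (EOI omitted) */
void tlb_shootdown_handler(void)
{
    asm volatile("invlpg (%0)" :: "r"(shoot_va) : "memory");
    atomic_fetch_sub(&pending, 1);
}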

If I decide to start enabling interrupts, then I will have to think about interrupt nesting, and about where interrupts would have to be disabled temporarily...

Re: Interrupt gates vs. trap gates (interrupts disabled/enabled)

Post by nullplan »

The common recommendation is to use interrupt gates, so that the assembly part of the interrupt handler can save all necessary registers to the stack before conditionally re-enabling interrupts (provided "enough" stack space is left for the kernel, for some measure of "enough"). That way, interrupts are still enabled most of the time.
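In C-ish pseudocode, the top of the common handler looks something like this (the stack-check helper and the threshold are made up for illustration):

Code: Select all

#include <stddef.h>

struct int_frame;                            /* saved register state */
extern size_t kernel_stack_remaining(void);  /* made up */
extern void dispatch_interrupt(struct int_frame *);

#define NESTING_RESERVE 4096   /* made-up measure of "enough" */

/* called from the asm stub after all registers are on the stack */
void isr_common(struct int_frame *frame)
{
    if (kernel_stack_remaining() > NESTING_RESERVE)
        asm volatile("sti");   /* allow nested interrupts */

    dispatch_interrupt(frame);

    asm volatile("cli");       /* interrupts off again before iretq */
}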

Another common recommendation is to do as little work as possible in the actual interrupt handler. For instance, if you're writing a NIC driver, you will probably get an interrupt for "packet received". In that interrupt handler, you only acknowledge the interrupt so that the card no longer asserts the line and you can send the EOI, and then you allow a kernel task to be scheduled, which will then read out the packet.

The reason for this indirection is that the scheduler is free to schedule the NIC packet-reader thread when all the higher-priority tasks are done or blocked, but it has no such control over interrupts.
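Sketched out, with every device and scheduler call invented for the example:

Code: Select all

extern void nic_ack_irq(void);         /* invented: deassert the line */
extern void lapic_eoi(void);           /* invented: end of interrupt */
extern void wake_thread(int tid);      /* invented scheduler call */
extern void sleep_until_woken(void);   /* invented */
extern int  nic_rx_pending(void);      /* invented */
extern void nic_read_packet(void);     /* invented */

static int nic_reader_tid;

/* interrupt context: do the bare minimum */
void nic_irq_handler(void)
{
    nic_ack_irq();               /* card stops asserting the line */
    lapic_eoi();                 /* controller may deliver the next one */
    wake_thread(nic_reader_tid); /* defer the real work */
}

/* ordinary kernel thread: scheduled at whatever priority fits */
void nic_reader_thread(void)
{
    for (;;) {
        sleep_until_woken();
        while (nic_rx_pending())
            nic_read_packet();
    }
}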

So yeah, generally start with interrupts disabled and enable them once the state has been saved to the stack and "enough" stack is left over.
Carpe diem!

Re: Interrupt gates vs. trap gates (interrupts disabled/enabled)

Post by rdos »

In a single-core kernel it's possible to heavily overuse interrupt disabling to provide locks and the like, but this fails in an SMP design. Generally, in an SMP design, disabling interrupts will not protect code; if there are race conditions with interrupts, you need spinlocks or something similar. Disabling interrupts on one core still allows another core to execute the same code.
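The usual fix is to combine the two: disable interrupts to keep the local core's handlers out, and take an atomic spinlock to keep the other cores out. A rough sketch (a real version would save and restore the flags instead of doing a blind sti):

Code: Select all

#include <stdatomic.h>

typedef struct { atomic_flag locked; } spinlock_t;

static void spin_lock_irq(spinlock_t *l)
{
    asm volatile("cli");   /* keeps this core's handlers out... */
    while (atomic_flag_test_and_set_explicit(&l->locked,
                                             memory_order_acquire))
        asm volatile("pause");   /* ...the atomic keeps other cores out */
}

static void spin_unlock_irq(spinlock_t *l)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
    asm volatile("sti");
}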

Re: Interrupt gates vs. trap gates (interrupts disabled/enabled)

Post by nullplan »

rdos wrote:In a single-core kernel it's possible to heavily overuse interrupt disabling to provide locks and the like, but this fails in an SMP design. Generally, in an SMP design, disabling interrupts will not protect code; if there are race conditions with interrupts, you need spinlocks or something similar. Disabling interrupts on one core still allows another core to execute the same code.
I think you might have missed the topic a bit, but since we are on the subject of locking: Never use locks to protect code, but rather to protect data. Then you also won't make the mistake of thinking a "cli" in the right place will save you a lock.

Plus, if you use a good locking implementation, taking an uncontended lock costs next to nothing. Like the locks musl uses internally: bloody good design, and taking an uncontended lock is a single CAS instruction.
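Not musl's actual code, but the shape of the fast path is roughly this (a real implementation sleeps on a futex in the slow path instead of spinning):

Code: Select all

#include <stdatomic.h>

typedef struct { atomic_int state; } lock_t;   /* 0 = free, 1 = held */

static void lock_take(lock_t *l)
{
    int expect = 0;
    /* uncontended fast path: one CAS and we're done */
    if (atomic_compare_exchange_strong_explicit(&l->state, &expect, 1,
            memory_order_acquire, memory_order_relaxed))
        return;
    /* contended slow path: retry (musl would wait on a futex here) */
    for (;;) {
        expect = 0;
        if (atomic_compare_exchange_weak_explicit(&l->state, &expect, 1,
                memory_order_acquire, memory_order_relaxed))
            return;
    }
}

static void lock_drop(lock_t *l)
{
    atomic_store_explicit(&l->state, 0, memory_order_release);
}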
Carpe diem!