How are inverted page tables implemented?

AndrewAPrice
Member
Posts: 2298
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

How are inverted page tables implemented?

Post by AndrewAPrice »

I was reading through the Microsoft documentation on memory management and it mentions Inverted Page Tables. Is it practical to implement IPT on x86?

How do you tell the CPU that this virtual address maps to this physical address? The documentation I read online goes into the hashing and lookup mechanisms, but says nothing about how to implement it on x86.

You'd still need the processor's paging structures, but there would be only one set? I'm imagining that in long mode, you'd have a single PML4 that'd get cleared on context switches, so every memory address would cause a page fault, and you'd rebuild the paging structures each time slice?
My OS is Perception.
sj95126
Member
Posts: 151
Joined: Tue Aug 11, 2020 12:14 pm

Re: How are inverted page tables implemented?

Post by sj95126 »

Whether to use inverted page tables or not is entirely an issue of whether the CPU requires it. Certain architectures (SPARC, POWER, Itanium) use them instead of traditional page tables. The article you reference mentions IPTs in the context of porting Windows to the IA-64 (Itanium) architecture. Itanium uses IPTs; it wasn't Microsoft's choice.

If you wanted to add something IPT-like to your OS, it would be for your own convenience and would have to be implemented entirely in software. You'd still have to use the regular paging mechanism on x86; the CPU has no concept of an IPT and there's no way to point it at one.
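
For what it's worth, a software "IPT-like" structure on x86 usually ends up being a reverse map: an array indexed by physical frame number that records which address space and virtual page currently owns each frame. A rough sketch (all the names here are made up, and this sits beside the real page tables rather than replacing them):

Code:

#include <stdint.h>

#define MAX_FRAMES (1u << 20)   /* e.g. 4 GiB of RAM in 4 KiB frames */

struct frame_owner {
    uint32_t pid;               /* owning address space, 0 = frame is free */
    uint64_t vpn;               /* virtual page number within that space   */
};

static struct frame_owner reverse_map[MAX_FRAMES];

/* Record ownership whenever the real x86 page tables are updated. */
void rmap_set(uint64_t pfn, uint32_t pid, uint64_t vpn)
{
    reverse_map[pfn].pid = pid;
    reverse_map[pfn].vpn = vpn;
}

/* "Who owns physical frame pfn?" - the question an inverted page table
 * answers in hardware elsewhere, answered here purely in software.
 * Useful e.g. for picking a frame to evict without scanning every
 * process's page tables. */
struct frame_owner rmap_lookup(uint64_t pfn)
{
    return reverse_map[pfn];
}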
AndrewAPrice
Member
Posts: 2298
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: How are inverted page tables implemented?

Post by AndrewAPrice »

sj95126 wrote:Whether to use inverted page tables or not is entirely an issue of whether the CPU requires it.
Thanks. I was curious about the space-savings argument for the paging structures. If it's not practical on x86, so be it.
My OS is Perception.
sj95126
Member
Posts: 151
Joined: Tue Aug 11, 2020 12:14 pm

Re: How are inverted page tables implemented?

Post by sj95126 »

As a general answer, yes, it does save memory. There is only a single system-wide page table (or one per memory domain), as opposed to one per process, and it takes up less space because it is sized to cover each physical frame of memory rather than a potentially huge linear address space. The tradeoff is that lookups can take longer: you're still performing a linear-to-physical translation, but the table is indexed by physical frame, so you have to search for the matching virtual address. Hashing improves that considerably. On the plus side, you don't have to change page tables during task switches, so there's less of a TLB hit.
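
To make the lookup side concrete, here's roughly what a hashed inverted table looks like in software; the hash function, table sizes and chaining scheme are all illustrative, not any particular architecture's format:

Code:

#include <stdint.h>

#define NFRAMES   (1u << 20)            /* one entry per physical frame     */
#define NBUCKETS  (1u << 20)            /* hash anchor table, power of two  */

struct ipt_entry {
    uint32_t pid;                       /* address space the page belongs to   */
    uint64_t vpn;                       /* virtual page number                 */
    int32_t  next;                      /* next frame in this bucket, -1 = end */
};

static struct ipt_entry ipt[NFRAMES];   /* indexed by physical frame number */
static int32_t bucket[NBUCKETS];        /* first frame for each hash value  */

static uint32_t ipt_hash(uint32_t pid, uint64_t vpn)
{
    return (uint32_t)((vpn * 0x9E3779B97F4A7C15ull) ^ pid) & (NBUCKETS - 1);
}

void ipt_init(void)
{
    for (uint32_t i = 0; i < NBUCKETS; i++)
        bucket[i] = -1;                 /* all buckets start empty */
}

/* Translate (pid, vpn) to a physical frame number, or -1 for "not mapped".
 * The table is sized by physical memory, not virtual space, which is where
 * the space saving comes from; the price is this search. */
int32_t ipt_translate(uint32_t pid, uint64_t vpn)
{
    for (int32_t pfn = bucket[ipt_hash(pid, vpn)]; pfn != -1; pfn = ipt[pfn].next) {
        if (ipt[pfn].pid == pid && ipt[pfn].vpn == vpn)
            return pfn;
    }
    return -1;                          /* would raise a page fault */
}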
AndrewAPrice
Member
Posts: 2298
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: How are inverted page tables implemented?

Post by AndrewAPrice »

sj95126 wrote:On the plus side, you don't have to change page tables during task switches, so there's less of a TLB hit.
This would require architectural support? Trying to implement IPT on x86 would require clearing the TLB between task switches?
My OS is Perception.
sj95126
Member
Posts: 151
Joined: Tue Aug 11, 2020 12:14 pm

Re: How are inverted page tables implemented?

Post by sj95126 »

AndrewAPrice wrote:Trying to implement IPT on x86 would require clearing the TLB between task switches?
No, exactly the opposite. With IPT, there's a single page table shared by everything. If x86 used IPT, you wouldn't have to change CR3 during a task switch, because you don't need to change page tables, ever.

Now, a task switch might result in a high number of TLB evictions in a short period of time, but that's possible regardless of the paging type.
AndrewAPrice
Member
Posts: 2298
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: How are inverted page tables implemented?

Post by AndrewAPrice »

sj95126 wrote:If x86 used IPT, you wouldn't have to change CR3 during a task switch, because you don't need to change page tables, ever.
How would you implement this in an x86 OS? If you never changed CR3, or invalidated the entire PML4 (long mode x86) as I suggested in the OP, this would imply every task on the system shares the same address space?

How do I let the processor know that "0x1234000" no longer refers to PID 4's 0x1234000 but PID 5's 0x1234000? Or is there just a single flat address space (and would there still be memory protection between processes)?
My OS is Perception.
nullplan
Member
Posts: 1760
Joined: Wed Aug 30, 2017 8:24 am

Re: How are inverted page tables implemented?

Post by nullplan »

AndrewAPrice wrote:How would you implement this in an x86 OS?
You don't. x86 does not support IPT. As has been pointed out previously.

I can tell you how it works on 32-bit PowerPC: the address you generate in the CPU registers when you access memory is called the Effective Address. This is first translated to a Virtual Address, which is then translated to the corresponding Physical Address. The Effective-to-Virtual translation replaces the leading four bits of the address with twenty-four bits from the corresponding Segment Register (a Segment in PowerPC parlance is a naturally aligned 256 MB chunk of effective memory; there are sixteen Segment Registers).

The translation from Virtual to Physical Address does not matter to this discussion. The point is that the effective address gets augmented with 24 bits of the operating system's choosing before translation to physical memory is attempted. So what you can do is prepend the process ID to the virtual addresses generated in user space. That way, every process is separated from the others by 4 GB of virtual address space, so they cannot interfere with each other, and you never move the hashed page table, because entries for different processes can freely coexist in the same table. This is not possible on x86; the closest you can get there is the PCID feature.
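
Just to illustrate the bit-shuffling in C (this is not real PowerPC code, and the PID-to-VSID scheme is made up): the top 4 bits of the 32-bit effective address select one of the 16 segment registers, and the 24-bit VSID held there replaces those 4 bits, giving the 52-bit virtual address that goes to the hashed page table.

Code:

#include <stdint.h>

/* Model of the 16 segment registers; each holds a 24-bit VSID. */
static uint32_t segment_regs[16];

/* On a task switch, reload the VSIDs from the process ID instead of
 * switching page tables. (Illustrative scheme: PID in the upper bits,
 * segment number in the low 4 bits, so each process gets 16 contiguous
 * 256 MB segments, i.e. its own 4 GB slice of the 2^52 virtual space.) */
void load_segments_for(uint32_t pid)
{
    for (uint32_t seg = 0; seg < 16; seg++)
        segment_regs[seg] = ((pid << 4) | seg) & 0x00FFFFFF;
}

/* Effective (32-bit) address -> virtual (52-bit) address, as the MMU
 * forms it before probing the hashed page table. */
uint64_t effective_to_virtual(uint32_t ea)
{
    uint32_t vsid = segment_regs[ea >> 28];              /* top 4 bits pick a segment */
    return ((uint64_t)vsid << 28) | (ea & 0x0FFFFFFF);   /* VSID replaces them        */
}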
Carpe diem!
rdos
Member
Posts: 3269
Joined: Wed Oct 01, 2008 1:55 pm

Re: How are inverted page tables implemented?

Post by rdos »

I think it is possible to "segment" long mode into 64K areas of 4 GB each, using a 16-bit process ID as the upper 16 bits of the 48-bit address. However, the problem is that there is no enforced separation between the address spaces. Another issue is that each process then only has 4 GB of linear address space, which is no better than 32-bit mode with real segmentation.
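
Just the arithmetic of that layout, as a sketch (it gives no protection by itself, exactly as noted, and it ignores canonical-address sign extension for PIDs with the top bit set):

Code:

#include <stdint.h>

/* Hypothetical layout: bits 47..32 of the 48-bit address carry a 16-bit
 * process ID, leaving each process a 4 GiB window in bits 31..0. Every
 * process shares one set of page tables; nothing stops a process from
 * forming an address with some other PID in the top bits. */
static inline uint64_t proc_address(uint16_t pid, uint32_t offset)
{
    return ((uint64_t)pid << 32) | offset;
}

static inline uint16_t pid_of(uint64_t linear)
{
    return (uint16_t)(linear >> 32);
}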
thewrongchristian
Member
Posts: 422
Joined: Tue Apr 03, 2018 2:44 am

Re: How are inverted page tables implemented?

Post by thewrongchristian »

AndrewAPrice wrote:I was reading through the Microsoft documentation on memory management and it mentions Inverted Page Tables. Is it practical to implement IPT on x86?

How do you tell the CPU that this virtual address maps to this physical address? The documentation I read online goes into the hashing and lookup mechanisms, but says nothing about how to implement it on x86.

You'd still need the processor's paging structures, but there would be only one set? I'm imagining that in long mode, you'd have a single PML4 that'd get cleared on context switches, so every memory address would cause a page fault, and you'd rebuild the paging structures each time slice?
As others have pointed out, inverted page tables are a feature of the hardware architecture; they're not something you can implement on x86.

A big benefit is being able to bound the size of the table regardless of how much virtual address space is in use.

A big problem is that, because the table is indexed by physical page number, you can't map a VPN directly to a PPN; instead you have to hash the VPN and walk the PPN entries that match the hash, so a hardware walk is a bit more involved. But, much like the multiple accesses required for a hierarchical page table walk (as used on x86), this extra latency is mostly hidden by the TLB.

A good comparison of various processors can be found here:

https://drum.lib.umd.edu/handle/1903/7465

It's originally an IEEE publication from 1998, so it predates amd64, for example, but it does include the Alpha MMU.

It also includes PA-RISC's (IMO whacky) 96-bit VA MMU.