
Re: Old vs New, are we really improving?

Posted: Sun Jan 20, 2013 6:33 pm
by Brendan
Hi,
OSwhatever wrote:The question is how virtual memory scales with larger memories. As memory becomes larger we would need up to 5 levels of page tables, which increases the time spent on a TLB miss and also the overall complexity. Inverted page tables lose their advantage as memory size goes up, and the regular hierarchical page table becomes more attractive again. As I see it, page tables feel like a flash translation layer: a workaround to hide the drawbacks of a technology.
If the size of virtual memory increases but the size of pages, page tables, etc. stays the same, then we would need more levels of paging structures (e.g. 6 levels for full 64-bit virtual addresses rather than the 4 levels for 48-bit virtual addresses we've got now). However, if the size of virtual memory and the size of pages, page tables, etc. increase together, then we might need fewer levels of paging structures.
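As a back-of-the-envelope check for the "structure sizes stay the same" case (assuming 8-byte entries and that each paging structure stays one page in size, which is just my assumption for this sketch), the number of levels works out like this:

#include <stdio.h>

/* Sketch: how many levels of paging structures are needed to cover
   "va_bits" of virtual address space, assuming "page_size" byte pages,
   "entry_size" byte entries, and each structure packed into one page. */
static int levels_needed(int va_bits, int page_size, int entry_size)
{
    int offset_bits = 0;
    while ((1 << offset_bits) < page_size)
        offset_bits++;                      /* 4096 -> 12 bits of offset   */

    int index_bits = 0;
    while ((1 << index_bits) < page_size / entry_size)
        index_bits++;                       /* 512 entries -> 9 bits/level */

    int levels = 0;
    int bits_covered = offset_bits;
    while (bits_covered < va_bits) {        /* add levels until the whole  */
        bits_covered += index_bits;         /*  virtual address is covered */
        levels++;
    }
    return levels;
}

int main(void)
{
    printf("48-bit VA: %d levels\n", levels_needed(48, 4096, 8));  /* 4 */
    printf("57-bit VA: %d levels\n", levels_needed(57, 4096, 8));  /* 5 */
    printf("64-bit VA: %d levels\n", levels_needed(64, 4096, 8));  /* 6 */
    return 0;
}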

For example, imagine that virtual addresses are 64-bit, a page is 4096 bytes, a page table is 512 MiB and a page directory is 512 MiB. In this case (assuming 8-byte paging structure entries) you end up with the following fields, sketched in code after the list:
  • Bits 0 to 11 - offset in page
  • Bits 12 to 37 - index in page table for page table entry
  • Bits 38 to 63 - index in page directory for page directory entry
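Pulling those fields out of a virtual address would look something like this (illustration only; no real hardware uses this layout):

#include <stdint.h>

/* Hypothetical 2-level layout for full 64-bit virtual addresses:
   4096-byte pages, 512 MiB page tables and a 512 MiB page directory
   (8-byte entries, so 2^26 entries per structure). */
static void split_va_2level(uint64_t va,
                            uint64_t *pd_index, uint64_t *pt_index,
                            uint64_t *page_offset)
{
    *page_offset = va & 0xFFF;               /* bits 0 to 11            */
    *pt_index    = (va >> 12) & 0x3FFFFFF;   /* bits 12 to 37 (26 bits) */
    *pd_index    = (va >> 38) & 0x3FFFFFF;   /* bits 38 to 63 (26 bits) */
}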
More realistic would be "3 and a bit" levels, with 4096 byte pages, 512 KiB page tables, 512 KiB page directories and 512 KiB page directory pointer tables; where 16 MSRs are used to store the addresses of up to 16 page directory pointer tables (and CR3 isn't used). In this case you'd end up with the following fields (again sketched in code after the list):
  • Bits 0 to 11 - offset in page
  • Bits 27 to 12 - index in page table for page table entry
  • Bits 43 to 28 - index in page directory for page directory entry
  • Bits 59 to 44 - index in PDPT for PDPT entry
  • Bits 60 to 63 - selects which PDPT to use
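And the corresponding field extraction for the "3 and a bit" scheme (again purely hypothetical; the 16 PDPT addresses would live in MSRs rather than CR3):

#include <stdint.h>

/* Hypothetical "3 and a bit" layout: 4096-byte pages and 512 KiB paging
   structures (8-byte entries, 65536 entries each), with the top 4 bits
   of the virtual address selecting one of 16 page directory pointer
   tables. */
static void split_va_3bit(uint64_t va,
                          unsigned *which_pdpt, uint64_t *pdpt_index,
                          uint64_t *pd_index, uint64_t *pt_index,
                          uint64_t *page_offset)
{
    *page_offset = va & 0xFFF;             /* bits 0 to 11            */
    *pt_index    = (va >> 12) & 0xFFFF;    /* bits 12 to 27 (16 bits) */
    *pd_index    = (va >> 28) & 0xFFFF;    /* bits 28 to 43 (16 bits) */
    *pdpt_index  = (va >> 44) & 0xFFFF;    /* bits 44 to 59 (16 bits) */
    *which_pdpt  = (unsigned)(va >> 60);   /* bits 60 to 63 select which
                                              of the 16 PDPTs to use  */
}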
OSwhatever wrote:Virtual memory does give us a lot of useful features; however, I'm not sure this is the way to go in the future. Per-object protection does reduce development times and reduce bugs, so why not use it all the way with OS support?
Because protection checks have overhead, and unnecessary protection checks (e.g. when you're not debugging) have unnecessary overhead. The way of the future is likely to be paging and nothing else in hardware; with tools that implement checks in software used for debugging (e.g. virtual machines like Valgrind, or compilers for "optionally managed languages" that allow checks to be inserted or omitted via a "-debug" command line argument).
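To show what I mean by checks that can be inserted or omitted, here's the idea in C (the macro name and the -DDEBUG_CHECKS switch are made up for this example, not any particular tool's flag):

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* A bounds check that exists in debug builds and compiles to nothing
   otherwise (build with -DDEBUG_CHECKS to get the checked version). */
#ifdef DEBUG_CHECKS
#define CHECK_INDEX(i, len)                                         \
    do {                                                            \
        if ((i) >= (len)) {                                         \
            fprintf(stderr, "index %zu out of range %zu\n",         \
                    (size_t)(i), (size_t)(len));                    \
            abort();                                                \
        }                                                           \
    } while (0)
#else
#define CHECK_INDEX(i, len) do { } while (0)   /* zero overhead in release */
#endif

static int get(const int *array, size_t len, size_t i)
{
    CHECK_INDEX(i, len);    /* present only when debugging */
    return array[i];
}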
OSwhatever wrote:Virtual memory is so entrenched now that we lack a lot of tools, both SW and HW.
Paging is so much better than segmentation that the tools (SW and HW) we already had have been deprecated rather than improved.

Note: "Virtual memory" applies to anything where software doesn't use physical addresses directly and the application's virtual addresses are translated in some way. If an OS uses segmentation and doesn't use paging at all, then the OS uses virtual memory (an application's virtual addresses are "translated in some way" by adding a segment base to the offset). For an example (from the wikipedia page about virtual memory) "In 1961, the Burroughs Corporation independently released the first commercial computer with virtual memory, the B5000, with segmentation rather than paging."


Cheers,

Brendan