mapping kernel in user address space vs not mapping it
- iocoder
- Member
- Posts: 208
- Joined: Sun Oct 18, 2009 5:47 pm
- Libera.chat IRC: iocoder
- Location: Alexandria, Egypt | Ottawa, Canada
- Contact:
mapping kernel in user address space vs not mapping it
Hello
It is a good idea to map your kernel into every user process [like the higher half kernel model], but it might also be a good implementation not to map the kernel in user space, so the process does not care where the kernel is mapped... and the whole 4GB address space is available to a 32-bit process.
I'd just like to know your opinions... which implementation do you think is better: mapping the kernel like the higher half kernel model, or making the whole 4GB available? And why?
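For context, the two options boil down to where (or whether) the kernel appears in each process's page tables. A minimal 32-bit sketch with illustrative constants (KERNEL_BASE and is_user_range are hypothetical names, not taken from any particular kernel):
Code: Select all
/* Option 1: higher-half kernel -- the kernel is mapped at the top of every
 * process's address space, so user code only gets 0 .. KERNEL_BASE. */
#define KERNEL_BASE  0xC0000000u   /* classic 3GB user / 1GB kernel split */

/* Validating a pointer handed in by user code is then a simple range test: */
static inline int is_user_range(unsigned long addr, unsigned long len)
{
    return addr < KERNEL_BASE && len <= KERNEL_BASE - addr;
}

/* Option 2: the kernel keeps its own page directory and is not mapped into
 * user processes at all; user code may use the full 4GB, but every entry
 * into the kernel must switch address spaces (the cr3 switch discussed
 * below). */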
- iocoder
- Member
- Posts: 208
- Joined: Sun Oct 18, 2009 5:47 pm
- Libera.chat IRC: iocoder
- Location: Alexandria, Egypt | Ottawa, Canada
- Contact:
Re: mapping kernel in user address space vs not mapping it
berkus wrote:You may run into issues
Issues like what?
- Griwes
- Member
- Posts: 374
- Joined: Sat Jul 30, 2011 10:07 am
- Libera.chat IRC: Griwes
- Location: Wrocław/Racibórz, Poland
- Contact:
Re: mapping kernel in user address space vs not mapping it
...like the fact you cannot execute code that isn't currently mapped?
Reaver Project :: Repository :: Ohloh project page
<klange> This is a horror story about what happens when you need a hammer and all you have is the skulls of the damned.
<drake1> as long as the lock is read and modified by atomic operations
- iocoder
- Member
- Posts: 208
- Joined: Sun Oct 18, 2009 5:47 pm
- Libera.chat IRC: iocoder
- Location: Alexandria, Egypt | Ottawa, Canada
- Contact:
Re: mapping kernel in user address space vs not mapping it
Griwes wrote:...like the fact you cannot execute code that isn't currently mapped?
Well, I am still able to map the ISR in user space... and it will change cr3, so I am still able to execute kernel code after a system call...
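To illustrate the idea (this is my sketch, not iocoder's code): a small trampoline mapped at the same address in every process can reload cr3 with the kernel's page directory on entry and restore the caller's directory on exit. The sketch assumes GCC inline assembly and a hypothetical kernel_page_dir variable; a real entry point would also need an assembly stub for register save/restore and iret.
Code: Select all
#include <stdint.h>

/* Physical address of the kernel's page directory, filled in at boot.
 * This variable, the trampoline below, its stack and the IDT all have to
 * live in the small region that stays mapped in every address space.
 * (kernel_page_dir is a hypothetical name, for illustration only.) */
extern uintptr_t kernel_page_dir;

static inline uintptr_t read_cr3(void)
{
    uintptr_t v;
    __asm__ volatile("mov %%cr3, %0" : "=r"(v));
    return v;
}

static inline void write_cr3(uintptr_t v)
{
    /* Reloading cr3 flushes the non-global TLB entries. */
    __asm__ volatile("mov %0, %%cr3" : : "r"(v) : "memory");
}

/* Called from the interrupt/syscall stub that the IDT points at. */
void syscall_trampoline(void)
{
    uintptr_t user_cr3 = read_cr3();  /* remember the caller's address space */
    write_cr3(kernel_page_dir);       /* switch to the kernel's mappings */

    /* ... dispatch the system call here ... */

    write_cr3(user_cr3);              /* switch back before returning */
}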
- Griwes
- Member
- Posts: 374
- Joined: Sat Jul 30, 2011 10:07 am
- Libera.chat IRC: Griwes
- Location: Wrocław/Racibórz, Poland
- Contact:
Re: mapping kernel in user address space vs not mapping it
Right, but then you introduce the need to do a cr3 switch on every syscall. Anyway, most people nowadays already have x86-64 processors, so that's not a big problem.
Reaver Project :: Repository :: Ohloh project page
<klange> This is a horror story about what happens when you need a hammer and all you have is the skulls of the damned.
<drake1> as long as the lock is read and modified by atomic operations
Re: mapping kernel in user address space vs not mapping it
Most hobby kernels do not have a huge code/data size; I expect a basic kernel alone to have a footprint of less than 128MiB,
so it does not matter much to squeeze the few MiB of kernel code out of the address space.
Then the question is whether you want a separate address space for global resources like:
* those used by devices and the display (e.g. in a micro-kernel design)
* those used by a cache server
* other global stuff and IPC shared resources
It all depends on your design choice. For me, my kernel sits in the top 256MiB and I have no need to squeeze anything more out of the address space.
For 32-bit Windows I believe it's 2GiB user and 2GiB kernel/system, with optional support for a 3+1 split via the /3GB boot parameter. (information source: http://msdn.microsoft.com/en-us/windows ... e/gg487508)
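For comparison, a top-256MiB kernel split on 32-bit can be expressed with a handful of constants. This is only a sketch with invented names and zone sizes, not bluemoon's actual layout:
Code: Select all
/* The kernel owns the top 256MiB of the 32-bit address space. */
#define KSPACE_BASE   0xF0000000u                  /* 4GiB - 256MiB    */

/* Carve that region into zones for global resources. */
#define KIMAGE_BASE   (KSPACE_BASE)                /* kernel code/data */
#define KHEAP_BASE    (KSPACE_BASE + 0x01000000u)  /* kernel heap      */
#define KMMIO_BASE    (KSPACE_BASE + 0x08000000u)  /* devices, display */
#define KIPC_BASE     (KSPACE_BASE + 0x0C000000u)  /* IPC shared pages */

/* Everything below KSPACE_BASE (3.75GiB) belongs to the process. */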
Re: mapping kernel in user address space vs not mapping it
You at least need the interrupt and exception handlers mapped all the time, and those should reside in the kernel and should not be accessible from applications.
I use 512MB for my kernel (leaving 3.5 GB for applications). Most of the space is for kernel "malloc", and filesystem buffers. The kernel & device-drivers only have 16MB reserved (this might actually need to be changed eventually).
-
- Member
- Posts: 141
- Joined: Thu Jun 17, 2010 2:36 am
Re: mapping kernel in user address space vs not mapping it
You would absolutely destroy the performance of your syscalls. Writing to CR3 clears the entire TLB. That means page walks for every page accessed to satisfy the syscall, and once you get back to user land, page walks for every unique page accessed there as well. Some kind of replacement scheme with no CR3 write would require you to flush the entire memory range you want your kernel to occupy, map it, then map the user's pages back in, and flush again. You'd also have to have some rudimentary memory manager mapped into every process (along with the IDT/ISRs).
- AndrewAPrice
- Member
- Posts: 2300
- Joined: Mon Jun 05, 2006 11:00 pm
- Location: USA (and Australia)
Re: mapping kernel in user address space vs not mapping it
Do you really need a whole 1GB of your virtual address space dedicated to the kernel?
For a monolithic kernel that keeps frame buffers, hundreds of high-resolution textures, etc. allocated in kernel memory, 1GB will be tight.
For most hobby kernels (and microkernels), I'd question their efficiency if they're using more than 50 MB of RAM. (Unless you had a reason to use more - e.g. some awesome caching system.)
My OS is Perception.
Re: mapping kernel in user address space vs not mapping it
MessiahAndrw wrote:Do you really need a whole 1GB of your virtual address space dedicated to the kernel?
For a monolithic kernel that keeps frame buffers, hundreds of high-resolution textures, etc. allocated in kernel memory, 1GB will be tight.
For most hobby kernels (and microkernels), I'd question their efficiency if they're using more than 50 MB of RAM. (Unless you had a reason to use more - e.g. some awesome caching system.)
We are talking about address space, not physically allocated RAM.
50 MiB of address space is tight for isolating different zones within the kernel.
Re: mapping kernel in user address space vs not mapping it
MessiahAndrw wrote:Do you really need a whole 1GB of your virtual address space dedicated to the kernel?
For a monolithic kernel that keeps frame buffers, hundreds of high-resolution textures, etc. allocated in kernel memory, 1GB will be tight.
For most hobby kernels (and microkernels), I'd question their efficiency if they're using more than 50 MB of RAM. (Unless you had a reason to use more - e.g. some awesome caching system.)
Even if the kernel doesn't typically use 1GB (or 512MB as in my case), it must reserve enough address space (not physical memory) to work efficiently with the disc (which needs linear address space for buffers) and any other objects it manages in kernel space. A typical RDOS configuration has about 310MB of free linear memory for page-based allocations and 120MB for byte-based allocations. It also reserves a few other areas, like the LFB (2x4MB), page table mirrors and the like. The code itself can use up to 16MB.
- AndrewAPrice
- Member
- Posts: 2300
- Joined: Mon Jun 05, 2006 11:00 pm
- Location: USA (and Australia)
Re: mapping kernel in user address space vs not mapping it
bluemoon wrote:We are talking about address space, not physically allocated RAM.
50 MiB of address space is tight for isolating different zones within the kernel.
This is true, but I've always focused on microkernels and on trying to give my programs as much virtual address space as possible.
Now back to the OP's question. You could try to fit the majority of your kernel's code inside a single page directory entry (4 MB on x86). Since this is a single entry that never changes and exists in every process, you never have to worry about updating other processes' page maps.
Use the remaining space inside the kernel's 4 MB region to cache enough data to perform the majority of context switches and IPC calls. Store the rest of the data in pages that are unmapped upon context switching back to the process.
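As a sketch of how that works when a process is created (the names here are illustrative, not MessiahAndrw's code): the single kernel entry is copied into every new page directory once, and never touched again.
Code: Select all
#include <stdint.h>

#define PD_ENTRIES        1024
#define KERNEL_PDE_INDEX  1023   /* top 4MB: 0xFFC00000 .. 0xFFFFFFFF */

/* Page directory that maps the kernel's 4MB region; built once at boot.
 * (kernel_page_dir is a hypothetical name, for illustration.) */
extern uint32_t kernel_page_dir[PD_ENTRIES];

/* Copy the single kernel entry into a fresh page directory.  Because the
 * entry points at a page table that never changes, no existing process's
 * directory ever has to be updated afterwards. */
void init_process_page_dir(uint32_t *new_pd)
{
    for (int i = 0; i < PD_ENTRIES; i++)
        new_pd[i] = 0;                   /* user slots start empty */
    new_pd[KERNEL_PDE_INDEX] = kernel_page_dir[KERNEL_PDE_INDEX];
}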
My OS is Perception.
Re: mapping kernel in user address space vs not mapping it
I've just restructured my kernel's address layout, and I'm really happy with the mind-boggling amount of address space. If you really have some memory-hungry application, run it on a 64-bit OS, and don't waste time squeezing a few MB out of your 32-bit kernel.
Code: Select all
// Physical Address of kernel (1MB + 4096 ELF Header)
// ----------------------------------------------
#define KADDR_KERNEL_PMA (0x00100000 + 4096)
// Reserved: -0.5GB ~ TOP
// ----------------------------------------------
// MMU Page Allocator (-1GB ~ -0.5GB)
// for 256GB of memory there are 67108864 x 4K pages, which requires 512 MiB
// ----------------------------------------------
#define KADDR_MMU_STACK (0xFFFFFFFFC0000000)
// Kernel Zone (-2GB ~ -1GB)
// ----------------------------------------------
#define KADDR_ZERO_VMA (0xFFFFFFFF80000000)
#define KADDR_KERNEL_VMA (KADDR_ZERO_VMA + KADDR_KERNEL_PMA)
#define KADDR_BOOTDATA (KADDR_ZERO_VMA + 0x0600)
#define KADDR_PMA(x) (((uintptr_t)(x)) - KADDR_ZERO_VMA)
// Global Resource (-3GB ~ -2GB)
// Frame buffer, DMA Buffers, MMIO
// ----------------------------------------------
#define KADDR_GLOBAL_RESOURCE (0xFFFFFFFF40000000)
// Kernel Modules (-4GB ~ -3GB)
// ----------------------------------------------
#define KADDR_DRIVER_VMA (0xFFFFFFFF00000000)
// MMU recursive mapping (slot 510: -1024GB ~ -512GB)
// ----------------------------------------------
#define MMU_RECURSIVE_SLOT (510UL)
#define KADDR_MMU_PT (0xFFFF000000000000UL + (MMU_RECURSIVE_SLOT<<39))
#define KADDR_MMU_PD (KADDR_MMU_PT + (MMU_RECURSIVE_SLOT<<30))
#define KADDR_MMU_PDPT (KADDR_MMU_PD + (MMU_RECURSIVE_SLOT<<21))
#define KADDR_MMU_PML4 (KADDR_MMU_PDPT + (MMU_RECURSIVE_SLOT<<12))
// User-space address layout (1MB ~ 256GB)
// ----------------------------------------------
#define APPADDR_PROCESS_HEADER (0x00001000)
// APPADDR_PROCESS_START is obsoleted, use vaddr proposed by elf header, with validations.
// #define APPADDR_PROCESS_START (0x00100000)
#define APPADDR_PROCESS_STACK (0x7FC00000) // 2GiB-4MiB
// For spawning process
#define KADDR_CLONEPD (APPADDR_PROCESS_STACK + 4096)
#define KADDR_NEWPROCESSINFO (KADDR_CLONEPD + 4096)
// Testing
// ----------------------------------------------
#define KADDR_INITRD (0xFFFFFFFF80C00000)
#define KADDR_DISPLAY (0xFFFFFFFF8F000000)
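A side note on the recursive-mapping defines above: with MMU_RECURSIVE_SLOT = 510, the page-table entry that maps any virtual address can be located through KADDR_MMU_PT. A minimal sketch assuming the header above; PTE_FOR is my own illustrative helper, not part of the posted layout:
Code: Select all
#include <stdint.h>

/* Shifting the virtual address right by 9 turns its PML4/PDPT/PD/PT indices
 * into the corresponding indices inside the recursive window, and the mask
 * keeps bits 38..3 (36 index bits scaled to 8-byte entries). */
static inline uint64_t *PTE_FOR(const void *v)
{
    uintptr_t a = (uintptr_t)v;
    return (uint64_t *)(KADDR_MMU_PT + ((a >> 9) & 0x0000007FFFFFFFF8UL));
}

/* Example: uint64_t *pte = PTE_FOR((void *)KADDR_KERNEL_VMA); */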