"NT disallows overcommiting memory"
"NT disallows overcommiting memory"
I've heard people say that Windows NT disallows overcommitting memory (unlike Linux, which allows some overcommitting). I'm confused about what that means, though. If Windows doesn't allow overcommitting memory, when does it swap pages to the disk? I always thought that swapping occurs whenever a program tries to commit virtual memory but there isn't enough physical memory. Correct me if I'm wrong.
Thanks
Re: "NT disallows overcommiting memory"
Memory overcommitting is a feature that hypervisors use. It has nothing to do with swapping; in fact, it's designed to limit the amount of swapping done on a server.
Re: "NT disallows overcommiting memory"
iansjack wrote:https://itectec.com/superuser/will-microsoft-windows-10-overcommit-memory/

Thanks
Re: "NT disallows overcommiting memory"
iansjack wrote:https://itectec.com/superuser/will-microsoft-windows-10-overcommit-memory/

That link is web-scraper garbage; I suggest replacing it with the actual source.
Re: "NT disallows overcommiting memory"
angods wrote:If Windows doesn't allow overcommitting memory, when does it swap pages to the disk? ...

When the number of free pages hits some threshold. It then either pages out standby pages to their backing files or trims working sets. Also, committing virtual memory by itself can hardly cause paging out, since committed memory is never all actually allocated (that is, the mapping to physical memory is established only when pages are first touched).

PS: Why would one ever allow committing an amount of memory that one wouldn't be able to back with any physical storage, even second-level storage?
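To make the commit-vs-mapping distinction concrete, here's a minimal Win32 sketch (my own illustration, not from this thread): reserving address space costs no commit charge, committing counts against the commit limit (RAM plus pagefile), and physical pages appear only on first touch.

Code: Select all
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T size = 64 * 1024 * 1024; /* 64 MiB */

    /* Reserve only: address space is claimed, but there is no commit
       charge and no physical backing. */
    void *reserved = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);

    /* Commit: charged against the system commit limit (RAM + pagefile),
       so this call can fail even though nothing has been touched yet.
       This is the point at which NT refuses to overcommit. */
    char *committed = VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE,
                                   PAGE_READWRITE);
    if (committed == NULL) {
        fprintf(stderr, "commit failed: commit limit exhausted\n");
        return 1;
    }

    /* Physical pages are assigned only here, on first touch
       (demand-zero paging). */
    committed[0] = 1;

    VirtualFree(committed, 0, MEM_RELEASE);
    if (reserved != NULL)
        VirtualFree(reserved, 0, MEM_RELEASE);
    return 0;
}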
Re: "NT disallows overcommiting memory"
Because hypervisors tend to allocate more memory than they ever use; overcommitting thus allows lower-powered servers to run virtual machines more easily.
Re: "NT disallows overcommiting memory"
nexos wrote:Because hypervisors tend to allocate more memory than they ever use; overcommitting thus allows lower-powered servers to run virtual machines more easily.

I can see virtual machine monitors overcommitting secondary storage, because mass storage is often under-utilised.
But main memory for virtual machines? It makes no sense to overcommit that.
Re: "NT disallows overcommiting memory"
But main memory for virtual machines? It makes no sense to overcommit that.

What do you mean? It could be very useful if someone wants to run a virtual machine with 2GB of RAM on a 4GB system with 2GB of swap, while the server itself may even have to run multiple virtual machines. Then, because the VMs will likely have uneven memory usage patterns and not use most of their memory at once, they can all run happily.
Of course, that example is kind of lame; it would more likely be useful on big datacenter servers running hundreds of VMs on 128GB of memory.
Re: "NT disallows overcommiting memory"
nexos wrote:What do you mean? It could be very useful if someone wants to run a virtual machine with 2GB of RAM on a 4GB system with 2GB of swap ... it would more likely be useful on big datacenter servers running hundreds of VMs on 128GB of memory.

On the contrary, your datacenter example is precisely an example of what I would not recommend.
Most OSes tend to maximise RAM usage, as RAM is an expensive resource that is wasted if it's not used. Hence, file caches expand to use the remaining unused memory. If that file cache is instead overcommitted on the underlying hypervisor machine, the memory the guest OS thinks is being put to good use caching file data is instead simply being cached (and copied) at two levels. Very sub-optimal.
If you have a host with 128GB of RAM, you're best off running the VMs with less memory and not overcommitting, leaving it to the guest OSes to utilise that memory as they see fit.
Re: "NT disallows overcommiting memory"
Overcommitting memory is certainly not a hypervisor-focused feature. It is a core memory-management feature that allows virtual mappings to exceed the size of physical memory. For an actual use case, consider AddressSanitizer, which uses multi-TiB virtual memory mappings (for its shadow memory) that are only partially backed by physical RAM.
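A rough POSIX sketch of that kind of mapping (an illustration only, not ASan's actual implementation; MAP_NORESERVE as used here is Linux behaviour):

Code: Select all
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* 2 TiB of virtual address space -- far more than physical RAM.
       MAP_NORESERVE (Linux) asks the kernel not to reserve swap up
       front; a strict-overcommit system may refuse this mapping. */
    size_t size = (size_t)1 << 41;

    char *shadow = mmap(NULL, size, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (shadow == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Only the pages actually touched consume physical memory. */
    memset(shadow, 0xAA, 4096);            /* one page at the start */
    memset(shadow + size / 2, 0xAA, 4096); /* one page in the middle */

    munmap(shadow, size);
    return 0;
}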
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].
Re: "NT disallows overcommiting memory"
angods wrote:I've heard people say that Windows NT disallows overcommitting memory (unlike Linux, which allows some overcommitting). ... Correct me if I'm wrong.

Overcommitting means that when a program makes a system call to allocate memory, the OS does not return a failure status to the program even if the total amount of memory that all the programs on the system have asked to allocate exceeds the total amount of RAM plus swap on the system.

The OS can do this because even when a program has allocated memory and the OS has returned a pointer to the allocated memory, that pointer just points to a location within the program's virtual address space, and the OS need not actually back that location with an actual page of RAM (or swap) until the program tries to access the page.

This is useful because programs often allocate memory that they never touch, so not assigning a physical page to a virtual address until the program actually touches it prevents programs that allocate more than they need from tying up memory that other programs might need. It's controversial because the OS is telling programs that it has memory for them before it's sure that it can make good on that promise.
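A small Linux-specific sketch of this behaviour (assumptions: the default overcommit policy, and /proc/self/statm for the resident size): the huge allocation succeeds immediately, and resident memory grows only as pages are touched.

Code: Select all
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Resident set size in KiB, read from /proc/self/statm (Linux). */
static long resident_kib(void)
{
    long total, resident;
    FILE *f = fopen("/proc/self/statm", "r");
    if (f == NULL)
        return -1;
    if (fscanf(f, "%ld %ld", &total, &resident) != 2)
        resident = -1;
    fclose(f);
    return resident < 0 ? -1 : resident * (sysconf(_SC_PAGESIZE) / 1024);
}

int main(void)
{
    /* Ask for 32 GiB -- likely more than RAM + swap. Under Linux's
       default (heuristic) overcommit policy this usually succeeds;
       a strict-commit system such as NT would refuse it here. */
    size_t size = (size_t)32 * 1024 * 1024 * 1024;

    char *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "allocation refused (strict overcommit?)\n");
        return 1;
    }
    printf("after malloc:   resident %ld KiB\n", resident_kib());

    /* Touch 1 GiB of it; only now are physical pages assigned. */
    for (size_t i = 0; i < (size_t)1 << 30; i += 4096)
        p[i] = 1;
    printf("after touching: resident %ld KiB\n", resident_kib());

    free(p);
    return 0;
}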
Re: "NT disallows overcommiting memory"
linguofreak wrote:Overcommitting means that when a program makes a system call to allocate memory, the OS does not return a failure status ... before it's sure that it can make good on that promise.

Yes. I recall Linux 2.0 allowed practically unlimited overcommitting, with the excuse that it's impossible to predict how memory use will change in a multitasking operating system. I think they meant something like this: if process A tries to allocate more memory than is available, it shouldn't be prevented from doing so, because memory may be freed by other processes before process A actually uses its allocation. In practice, I saw a lot of processes killed by segmentation faults on my 4MB 486 with 8MB of swap. I naturally bought myself a much more powerful machine, but then an OOM killer was implemented which, in its early form, always killed the X server. (The X server itself never segfaulted...)
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
Re: "NT disallows overcommiting memory"
eekee wrote: Yes. I recall Linux 2.0 allowed practically unlimited overcommitting with the excuse that it's impossible to predict how memory use will change in a multitasking operating system... In practice, I saw a lot of processes killed by segmentation faults on my 4MB 486 with 8MB of swap.

Ouch! By the time Linux 2.0 came out (mid-1996), I'd graduated to a Pentium Pro with 32MB of RAM. Luckily it was my work-experience year at uni, so I was actually earning, and picked up the Pentium Pro cheap at a computer fair. However, I needed a new motherboard to host it, and the only one I could find was ATX-based, so I needed a new case as well. That cheap PPro turned out quite expensive.
But wow, it was fast! I think I'd come from one of these UMC monstrosities, which was slow and incompatible with the earlier Linux 1.2-based distros I'd tried.