RDOS (was Rewrite from Scratch)

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
rdos
Member
Posts: 3297
Joined: Wed Oct 01, 2008 1:55 pm

RDOS (was Rewrite from Scratch)

Post by rdos »

Brendan wrote: Now compare "far better product than they had before the rewrite" to "my 32-bit protected mode kernel grew larger than 64 KiB, so I had to shift the scheduler out of the kernel".
You seem to have an obsession with bitness and memory models. In an x86 OS, it clearly doesn't matter which default bitness the kernel uses. That is kind of the nice thing about IA32: you don't need to become fixated on 16-bit, 32-bit segmented or flat memory models. It works perfectly well to mix them whatever way you like. In fact, 16-bit code is smaller (and thus has better cache locality) than 32-bit and 64-bit code, and thus actually executes faster (fewer cache misses). About the only reason to change bitness in RDOS device-drivers is to be able to use C, which works for 32-bit but not 16-bit drivers.

For me, a good product is a stable product. Bitness doesn't matter a thing, and the same goes for a production release: whether it is elegant, or uses the latest memory models and processor modes, doesn't matter a bit if it crashes randomly. Speed is also much less important than stability. It is nice if an OS uses the best algorithms to achieve the best performance, but if it isn't stable, that feature isn't worth anything.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Rewrite from Scratch

Post by Brendan »

Hi,
rdos wrote:For me, a good product is a stable product. Bitness doesn't matter a thing, and the same goes for a production release: whether it is elegant, or uses the latest memory models and processor modes, doesn't matter a bit if it crashes randomly. Speed is also much less important than stability. It is nice if an OS uses the best algorithms to achieve the best performance, but if it isn't stable, that feature isn't worth anything.
For me, a good product has a clean design that doesn't look like a dog ate it. If you've got a good/clean design it's easy to improve/fix stability and performance problems; but if you've only got stable code it's very hard to fix a horrendously bad design.

If your OS could talk, it'd beg for a quick release from the anguish of its own existence. It is a far stronger argument for "rewrite is better than incremental change" than anything I've said and everything I'll ever say.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance

Re: Rewrite from Scratch

Post by Combuster »

rdos wrote:
Brendan wrote: Now compare "far better product than they had before the rewrite" to "my 32-bit protected mode kernel grew larger than 64 KiB, so I had to shift the scheduler out of the kernel".
You seem to have an obsession with bitness and memory models.
Your excuse is laughable since it's much closer to hypocrisy than the truth. The entire issue and its fix reek of fixing symptoms of unmaintainable crud by making it more unmaintainable crud, and the only reason I haven't posted a big red sign on it previously is that I wanted to let you realize I'm not the only one who thinks so. I mean, you already use a fully segment-aware compiler, hence it should perfectly support far calls in code segments.
Brendan wrote:If your OS could talk, it'd beg for a quick release from the anguish of its own existence.
QFT.
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
rdos
Member
Posts: 3297
Joined: Wed Oct 01, 2008 1:55 pm

Re: Rewrite from Scratch

Post by rdos »

Combuster wrote:I mean, you already use a fully segment-aware compiler, hence it should perfectly support far calls in code segments.
It does, in device-drivers, but not in the startup code (which I call the "kernel"). The kernel still uses the DOS executable format (made for real mode), and thus doesn't support segment relocations of any sort. That is historical baggage built into the boot-loader, which expects to see this format. Because it is in the boot-loader, there is not much I can do about it, since I cannot easily update the boot-loader of existing installations. About the only solution is to move everything out of the kernel so it becomes a stub, which I have started to do.

Besides, I don't use multiple code segments in device-drivers either. I use the compact memory model in both 16-bit and 32-bit code, which has a single code segment and multiple data segments (one of them being the default data segment). Thus, procedures are near and pointers are far by default, except for accesses to dgroup, which use DS. This is most convenient since I give the linker both the protected mode code segment and data segment of the device-driver, so it needs no load-time relocations.
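
To illustrate (a made-up sketch, not actual RDOS code): under this model, a 32-bit driver routine can pick up a 48-bit (16:32) far pointer parameter with a single segment register load, and everything else stays near:

Code: Select all

    ; hypothetical driver fragment under the compact memory model
        les edi,[esp+4]        ; load ES:EDI from the caller's 6-byte far pointer
        mov eax,[es:edi]       ; read the caller's buffer through the far pointer
        mov [local_count],eax  ; dgroup data is reached through DS as usual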
rdos
Member
Posts: 3297
Joined: Wed Oct 01, 2008 1:55 pm

Re: Rewrite from Scratch

Post by rdos »

Brendan wrote: For me, a good product has a clean design that doesn't look like a dog ate it. If you've got a good/clean design it's easy to improve/fix stability and performance problems; but if you've only got stable code it's very hard to fix a horrendously bad design.
In my experience, badly designed code is always unstable, at least when it is modified a lot, so I think your claims are unwarranted. RDOS is a well-designed OS that uses a mixed segmented memory model, and it is stable. It was intended to use a mixed memory model from the start. Besides, most device-drivers could easily be changed to 32-bit format, but there is no sense in doing so, as they would become larger and would still operate exactly the same way. The point is that regardless of default bitness, syscalls are defined with 48-bit pointers, which support any IA32 memory model.

And long mode is not supported on the typical target system, so there is no sense in rewriting anything for a platform that is not even used today.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Rewrite from Scratch

Post by Brendan »

Hi,

Ok, what I said was harsh and wasn't very constructive. I still think it was justified - when working with a project for a very long time it's probably natural to become accustomed to the project's eccentricities and start having trouble noticing how bad the overall design has become.

There are things about rdos that are admirable (managing to get it used in commercial products, the perseverance it would've taken, etc). These things have nothing to do with the OS's design though.

In an attempt to be more constructive, I've created a partial list of design flaws.

16-bit Kernel Code

For a 32-bit or 64-bit CPU, 16-bit instructions have false dependencies on the register's previous value. For example, "mov ax,1234" depends on the previous value of EAX/RAX. This means that the CPU can stall waiting for the previous value of EAX/RAX, which limits the CPU's ability to do "out of order" processing and harms performance. Fixing this problem in 16-bit code requires size override prefixes. Consider this example:

Code: Select all

    mov ax,[foo]        ;Depends on previous value of EAX/RAX
    shl ax,1            ;Depends on previous instruction
    mov [bar],ax        ;Depends on previous instruction
    mov ax,1234         ;Depends on "shl ax,1"
    sub ax,bx           ;Depends on previous instruction
    mov [somewhere],ax  ;Depends on previous instruction
This example code could be improved by doing:

Code: Select all

    movzx eax,word [foo]  ;Depends on nothing
    shl eax,1             ;Depends on previous instruction
    mov [bar],ax          ;Depends on previous instruction
    mov eax,1234          ;Depends on nothing
    sub ax,bx             ;Depends on previous instruction
    mov [somewhere],ax    ;Depends on previous instruction
For a modern CPU, (with register renaming) this allows the CPU to do instructions in parallel. It may effectively become:

Code: Select all

    movzx eax_v1,word [foo],     mov eax_v2,1234   ;2 instructions in parallel
    shl eax_v1,1,                sub ax_v2,bx      ;2 instructions in parallel
    mov [bar],ax_v1
    mov [somewhere],ax_v2                          ;Writes occur in program order
For 64-bit CPUs, AMD avoided this problem by making sure that modifying the lowest half of a 64-bit register causes the higher half of the register to be zeroed. For example, "mov eax,[foo]" or "mov eax,1234" does not depend on the previous value of RAX because the CPU zeros the highest 32-bits of RAX (the CPU effectively does "movzx rax,dword [foo]" and "movzx rax,dword 1234" automatically).

Next; indexing in 16-bit code is more limited/awkward. For example, you can't do "mov dx,[ax*2+bx]" or "mov ax,[sp+4]" (but can do "mov edx,[eax*2+ebx]" or "mov eax,[esp+4]"). These restrictions mean that you end up with less efficient code to do the same thing. This problem can be avoided a bit by using address size override prefixes.

The decoders in modern CPUs are tuned for simple instructions. Instructions with prefixes (not just size override prefixes) are less simple and more likely to reduce decoder efficiency. Exact behaviour depends on the specific CPU. Examples include "instructions with multiple prefixes can only be decoded by the first decoder" (Pentium M), and "For pre-decode, prefixes that change the default length of an instruction have a 3 cycle (Sandy Bridge) or 6 cycle penalty (Nehalem)".
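
As a rough illustration of the cost (made-up labels, not taken from any real kernel): in a 16-bit code segment both 32-bit operands and 32-bit addressing forms need override prefixes, while the prefix-free forms are stuck with the old BX/BP + SI/DI addressing:

Code: Select all

    bits 16
        mov dx,[eax*2+ebx]     ; needs a 67h address-size prefix for the scaled index
        mov eax,[foo]          ; needs a 66h operand-size prefix for the 32-bit register
        mov ax,[bp+si]         ; no prefix, but only BX/BP plus SI/DI indexing is available

    bits 32
        mov dx,[eax*2+ebx]     ; only the 16-bit destination needs a 66h prefix
        mov eax,[foo]          ; no prefix at all
        mov eax,[esp+4]        ; stack-relative indexing works directly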

For code size (and instruction fetch), in general (for small pieces of code) 32-bit code may be slightly smaller or slightly larger than the equivalent 16-bit code, and for large pieces of code it averages out to "irrelevant". Given that the code size is irrelevant, 16-bit code just means you get worse performance without size overrides and worse performance with size overrides.


Segmentation

For modern CPUs; segmentation is a disaster. Segment register loads are very expensive due to the need to do some checks (e.g. is it beyond the GDT limit), then fetch the descriptor (including any TLB misses, etc), then do more protection checks. Using different segment registers for different pieces of data also means that you end up using lots of segment override prefixes, which increase code size and cause inefficiency in a modern CPU's "tuned for simple instructions" decoder.

For memory management; segmentation is a disaster. When segmentation and paging are both used together (which is necessary to avoid serious physical address space fragmentation problems) you end up with 2 memory managers - one to manage segments and one to manage pages; where one of them is redundant.

For programmers and tools (compilers, etc); segmentation is a disaster. It's much easier to work with one contiguous space than it is to juggle many small individual areas.

The "advantage" of segmentation is an illusion. Because it fails to catch all "unintended" accesses (and introduces the new possibility of "right offset, wrong segment" bugs) it does nothing more than create a false sense security.


Physical Memory Management

Bitmaps are slow, make it harder to avoid scalability problems, and make it harder to support things like page colouring and NUMA optimisations. Double-free detection only matters if your virtual memory management is buggy and therefore shouldn't matter at all. As far as I can tell, the main reason you like bitmaps is that segmentation sucks and makes better approaches hard.
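
For contrast, here is a minimal sketch (hypothetical labels, single CPU, no locking) of the kind of alternative being alluded to: a free-page stack allocates in O(1), while a bitmap has to scan for a set bit:

Code: Select all

    ; free-page stack: each free frame stores the address of the next free frame
    alloc_page:                        ; returns a frame address in EAX, or 0 if none
        mov eax,[free_list_head]
        test eax,eax
        jz .done
        mov edx,[eax]                  ; next free frame (frames assumed accessible)
        mov [free_list_head],edx
    .done:
        ret

    ; bitmap: scan until a set (free) bit is found - cost grows with memory size
    alloc_page_bitmap:
        mov edi,page_bitmap
        mov ecx,BITMAP_DWORDS
    .scan:
        bsf eax,dword [edi]
        jnz .found
        add edi,4
        loop .scan
        xor eax,eax                    ; nothing free
        ret
    .found:
        btr dword [edi],eax            ; mark the frame as allocated
        ; combine EDI and EAX into a frame address here
        ret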


Virtual Memory Management

This was a huge "failure to abstract". It has probably improved a lot since you started attempting to add support for PAE, but it's very likely still bad. The "many segments" way of accessing paging structures is awful. When you start attempting to support 64-bit it's going to fail miserably (doing half of virtual memory management in protected mode and half in long mode, and frequently switching from protected mode to long mode and destroying all TLBs, etc., highlights a phenomenal abundance of "stupid").
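
For reference, the usual flat-memory way of reaching the paging structures is the recursive (self-referencing) mapping trick; a minimal 32-bit non-PAE sketch with made-up symbol names:

Code: Select all

    ; point the last page-directory entry at the page directory itself
        mov eax,[page_dir_phys]
        or  eax,0x03                   ; present + writable
        mov [page_dir_virt+1023*4],eax

    ; every page table now appears in one fixed 4 MiB window:
    ;   the PTE for linear address L is dword [0xFFC00000 + (L >> 12)*4]
    ;   the PDE for linear address L is dword [0xFFFFF000 + (L >> 22)*4]
    ; so the paging structures are reached through ordinary flat pointers,
    ; with no extra segments and no mode switches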


Kernel API

Not supporting sane error handling is a severe mistake. It's likely to cause a lot of unnecessary overhead and unsolvable user-space race conditions (e.g. file IO failures despite pre-checks, where the cause of the error vanishes, because some other process did something at the wrong time); such that writing robust applications is a nightmare.

Also; syscalls should never have used pointers of any kind.


Boot Code

The old boot code (which you seem to refuse to update in a "better than superficial" way) has outlived its usefulness. The "memory map" it provided was always bad. The entire interface between the boot code and the kernel probably needs to be redesigned to remove flaws and make it extensible (and maybe also to prepare for UEFI).


Summary

This is just a beginning. It's like a large crate of 1010 oranges - if you try 10 oranges and find that 7 are bad you just assume that 700 of the oranges that you haven't tried are also bad. If I had all the details for RDOS and had enough time to go through all of the source code, I expect I could write many pages of problems.

Of course the right time to start a rewrite would've been before SMP was added. You could've looked around to see if there were other causes of "scope creep" coming, and realised that you'd want to add ACPI and power management, and PAE and 64-bit, and maybe UEFI (and maybe ARM?) in the near future; and then assessed the original design to see how many old design flaws could also be fixed. You could've come to the conclusion that a new "RDOS 2" (with limited backward compatibility or no backward compatibility) was justified and that very little of the "RDOS 1" code would be useful for "RDOS 2".

You think "incremental change" works, but (even though you've already rewritten most of the kernel at least once already) most of the old problems are still there. Most of the old problems will never be fixed.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
rdos
Member
Posts: 3297
Joined: Wed Oct 01, 2008 1:55 pm

Re: Rewrite from Scratch

Post by rdos »

Brendan wrote: Ok, what I said was harsh and wasn't very constructive. I still think it was justified - when working with a project for a very long time it's probably natural to become accustomed to the project's eccentricities and start having trouble noticing how bad the overall design has become.
Certainly, but many of the things that you call flaws are design-features in my opinion.
Brendan wrote:There are things about rdos that are admirable (managing to get it used in commercial products, the perseverance it would've taken, etc). These things have nothing to do with the OS's design though.
I think they have. Some of the same design features have also been implemented in the terminal software, like the no-error-code concept and the isolation concept (mostly with an OO design).
Brendan wrote:16-bit Kernel Code

For a 32-bit or 64-bit CPU, 16-bit instructions have false dependencies on the register's previous value. For example, "mov ax,1234" depends on the previous value of EAX/RAX. This means that the CPU can stall waiting for the previous value of EAX/RAX, which limits the CPU's ability to do "out of order" processing and harms performance. Fixing this problem in 16-bit code requires size override prefixes. Consider this example:

Code: Select all

    mov ax,[foo]        ;Depends on previous value of EAX/RAX
    shl ax,1            ;Depends on previous instruction
    mov [bar],ax        ;Depends on previous instruction
    mov ax,1234         ;Depends on "shl ax,1"
    sub ax,bx           ;Depends on previous instruction
    mov [somewhere],ax  ;Depends on previous instruction
This example code could be improved by doing:

Code: Select all

    movzx eax,word [foo]  ;Depends on nothing
    shl eax,1             ;Depends on previous instruction
    mov [bar],ax          ;Depends on previous instruction
    mov eax,1234          ;Depends on nothing
    sub ax,bx             ;Depends on previous instruction
    mov [somewhere],ax    ;Depends on previous instruction
For a modern CPU, (with register renaming) this allows the CPU to do instructions in parallel. It may effectively become:

Code: Select all

    movzx eax_v1,word [foo],     mov eax_v2,1234   ;2 instructions in parallel
    shl eax_v1,1,                sub ax_v2,bx      ;2 instructions in parallel
    mov [bar],ax_v1
    mov [somewhere],ax_v2                          ;Writes occur in program order
For 64-bit CPUs, AMD avoided this problem by making sure that modifying the lowest half of a 64-bit register causes the higher half of the register to be zeroed. For example, "mov eax,[foo]" or "mov eax,1234" does not depend on the previous value of RAX because the CPU zeros the highest 32-bits of RAX (the CPU effectively does "movzx rax,dword [foo]" and "movzx rax,dword 1234" automatically).
Yes, I know this is a problem, but it's not possible to fix in any easy way, incremental or not. OTOH, it is not a problem on the current hardware platform that RDOS installations run on, as this is not an "optimized CPU".
Brendan wrote: Segmentation

For modern CPUs; segmentation is a disaster. Segment register loads are very expensive due to the need to do some checks (e.g. is it beyond the GDT limit), then fetch the descriptor (including any TLB misses, etc), then do more protection checks. Using different segment registers for different pieces of data also means that you end up using lots of segment override prefixes, which increase code size and cause inefficiency in a modern CPU's "tuned for simple instructions" decoder.

For memory management; segmentation is a disaster. When segmentation and paging are both used together (which is necessary to avoid serious physical address space fragmentation problems) you end up with 2 memory managers - one to manage segments and one to manage pages; where one of them is redundant.

For programmers and tools (compilers, etc); segmentation is a disaster. It's much easier to work with one contiguous space than it is to juggle many small individual areas.

The "advantage" of segmentation is an illusion. Because it fails to catch all "unintended" accesses (and introduces the new possibility of "right offset, wrong segment" bugs) it does nothing more than create a false sense security.
First, device-drivers do not work in multiple-segment environments. They typically work with one code segment and one data segment, and are passed 48-bit pointers. That's why it was rather easy to port ACPI and FreeType to the new device-driver model. C code doesn't need to be aware of the segmentation operating in the background, and thus new device-drivers will work in long mode (after recompilation).

Second, applications don't use segmentation; they run with a flat memory model. I might some day support the compact memory model for applications as well, mostly as a debugging tool, but this is currently not functional.
Brendan wrote: Physical Memory Management

Bitmaps are slow, make it harder to avoid scalability problems, and make it harder to support things like page colouring and NUMA optimisations. Double-free detection only matters if your virtual memory management is buggy and therefore shouldn't matter at all. As far as I can tell, the main reason you like bitmaps is that segmentation sucks and makes better approaches hard.
The actual implementation might change if it becomes problematic. NUMA is not a problem though, as it is easy to assign ranges for special uses (I already do this for ISA DMA, 32-bit and 64-bit addresses).
Brendan wrote: Virtual Memory Management

This was a huge "failure to abstract". It has probably improved a lot since you started attempting to add support for PAE, but it's very likely still bad. The "many segments" way of accessing paging structures is awful. When you start attempting to support 64-bit it's going to fail miserably (doing half of virtual memory management in protected mode and half in long mode, and frequently switching from protected mode to long mode and destroying all TLBs, etc., highlights a phenomenal abundance of "stupid").
I doubt that. The CR3 reload is needed anyway (destroys TLBs), and the additional mode-switch will not be noticeable.
Brendan wrote: Kernel API

Not supporting sane error handling is a severe mistake. It's likely to cause a lot of unnecessary overhead and unsolvable user-space race conditions (e.g. file IO failures despite pre-checks, where the cause of the error vanishes, because some other process did something at the wrong time); such that writing robust applications is a nightmare.
Strongly disagree. Using error-codes in the traditional sense is a disaster, and I know it is because I've seen it in action. There is not a chance that a kernel written from scratch would have ordinary error-returns with error-codes.
Brendan wrote: Also; syscalls should never have used pointers of any kind.
How else do you pass buffers?
Brendan wrote: Boot Code

The old boot code (which you seem to refuse to update in a "better than superficial" way) has outlived its usefulness. The "memory map" it provided was always bad. The entire interface between the boot code and the kernel probably needs to be redesigned to remove flaws and make it extensible (and maybe also to prepare for UEFI).
I'm free to modify GRUB and UEFI boot-loaders any way I want. It is just the native bootloader that cannot be changed (at least not in running installations).
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Rewrite from Scratch

Post by Brendan »

Hi,
rdos wrote:
Brendan wrote: Ok, what I said was harsh and wasn't very constructive. I still think it was justified - when working with a project for a very long time it's probably natural to become accustomed to the project's eccentricities and start having trouble noticing how bad the overall design has become.
Certainly, but many of the things that you call flaws are design-features in my opinion.
I think you mean that many of the things that many people call flaws are design-features in your opinion and nobody else's. ;)
rdos wrote:
Brendan wrote:16-bit Kernel Code
Yes, I know this is a problem, but it's not possible to fix in any easy way, incremental or not. OTOH, it is not a problem on the current hardware platform that RDOS installations run on, as this is not an "optimized CPU".
If I understand it properly; it'd be easy to switch the kernel to "segmented 32-bit code" as it's mostly just a matter of telling the assembler "bits 32" instead of "bits 16". This wouldn't be a true fix though, as there'd be a lot of code doing 16-bit operations that would (eventually) need to be changed; but it'd be a good start and you'd be able to fix the inefficiencies it causes when you're rewriting pieces over and over and over for other reasons.
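
One detail worth spelling out (a hypothetical sketch, not RDOS's actual descriptors): "bits 32" only tells the assembler what to expect; the default operand and address size is really set by the D bit in the code segment descriptor, so the two have to be changed together:

Code: Select all

    ; hypothetical GDT entry for a segmented 32-bit kernel code segment;
    ; base and limit would stay whatever the kernel segment already uses
    kernel_code32:
        dw 0xFFFF              ; limit 15:0
        dw 0x0000              ; base 15:0
        db 0x00                ; base 23:16
        db 0x9A                ; present, ring 0, code, readable
        db 0xCF                ; G=1, D=1 -> 32-bit default operand/address size
        db 0x00                ; base 31:24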

For "rewrite from scratch" this problem is solved automatically - the instant you create an empty "RDOS 2" directory it won't contain any 16-bit code.
rdos wrote:
Brendan wrote:Segmentation
First, device-drivers do not work in multiple-segment environments. They typically work with one code segment and one data segment, and are passed 48-bit pointers. That's why it was rather easy to port ACPI and FreeType to the new device-driver model. C code doesn't need to be aware of the segmentation operating in the background, and thus new device-drivers will work in long mode (after recompilation).

Second, applications don't use segmentation; they run with a flat memory model. I might some day support the compact memory model for applications as well, mostly as a debugging tool, but this is currently not functional.
So the only thing that actually uses segmentation is the kernel (because drivers and applications don't)? In that case I fail to see why you'd hesitate to rip it out of the kernel too.

rdos wrote:
Brendan wrote: Physical Memory Management

Bitmaps are slow, make it harder to avoid scalability problems, and make it harder to support things like page colouring and NUMA optimisations. Double-free detection only matters if your virtual memory management is buggy and therefore shouldn't matter at all. As far as I can tell, the main reason you like bitmaps is that segmentation sucks and makes better approaches hard.
The actual implementation might change if it becomes problematic. NUMA is not a problem though, as it is easy to assign ranges for special uses (I already do this for ISA DMA, 32-bit and 64-bit addresses).
rdos wrote:
Brendan wrote:Virtual Memory Management

This was a huge "failure to abstract". It has probably improved a lot since you started attempting to add support for PAE, but it's very likely still bad. The "many segments" way of accessing paging structures is awful. When you start attempting to support 64-bit it's going to fail miserably (doing half of virtual memory management in protected mode and half in long mode, and frequently switching from protected mode to long mode and destroying all TLBs, etc., highlights a phenomenal abundance of "stupid").
I doubt that. The CR3 reload is needed anyway (destroys TLBs), and the additional mode-switch will not be noticeable.
If a good 64-bit kernel never needs to destroy all TLBs when allocating/freeing pages, and your schizophrenic "frequently changing CPU modes" kernel is constantly destroying all TLBs; then on a range from 10 to 10 (where 10 is "a lot") how much would your OS suck in comparison?

Note: For estimates, I'd expect to be able to allocate or free a page in less than about 150 cycles (including any/all TLB misses caused by invalidation and "multi-CPU TLB shootdown" IPI); while I'd expect your method to have an overhead of several thousand cycles (including hundreds of TLB misses that occur afterwards due to destroying the TLB, and ignoring the cost of playing musical chairs with segment registers, etc).
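
For reference, the difference being estimated is roughly that between invalidating one stale translation and throwing all of them away; a minimal sketch (assuming long mode, with RDI holding the virtual address of the page that was just unmapped):

Code: Select all

    ; targeted invalidation: only the one affected TLB entry is discarded
        invlpg [rdi]

    ; full flush: rewriting CR3 discards every non-global TLB entry, so all
    ; the translations the process was using must be refilled afterwards
        mov rax,cr3
        mov cr3,rax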
rdos wrote:
Brendan wrote:Kernel API

Not supporting sane error handling is a severe mistake. It's likely to cause a lot of unnecessary overhead and unsolvable user-space race conditions (e.g. file IO failures despite pre-checks, where the cause of the error vanishes, because some other process did something at the wrong time); such that writing robust applications is a nightmare.
Strongly disagree. Using error-codes in the traditional sense is a disaster, and I know it is because I've seen it in action. There is not a chance that a kernel written from scratch would have ordinary error-returns with error-codes.
I think this was adequately covered in this topic (which can be summarised as 6 pages of different people telling you it's silly for a wide variety of reasons).
rdos wrote:
Brendan wrote:Also; syscalls should never have used pointers of any kind.
How else do you pass buffers?
I meant the syscall itself, not the parameters to syscalls. I was mostly referring to the strange "patch the syscall in the general protection fault handler" thing you do.

rdos wrote:
Brendan wrote:Boot Code
I'm free to modify GRUB and UEFI boot-loaders any way I want. It is just the native bootloader that cannot be changed (at least not in running installations).
Surely the interface between the boot code and the kernel is the same in all 3 cases, and the kernel gets the same information passed to it in the same way, regardless of what the boot code (or the boot loader or the firmware) was. If you can't change the native boot code and the others need to be compatible, then you can't change the others either.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
rdos
Member
Posts: 3297
Joined: Wed Oct 01, 2008 1:55 pm

Re: Rewrite from Scratch

Post by rdos »

Brendan wrote: If a good 64-bit kernel never needs to destroy all TLBs when allocating/freeing pages, and your schizophrenic "frequently changing CPU modes" kernel is constantly destroying all TLBs; then on a range from 10 to 10 (where 10 is "a lot") how much would your OS suck in comparison?
You misunderstand. The scheduler only needs to change operating mode when it switches between processes, and when it does that it will reload CR3 anyway because the new process has a different CR3. The mode-switching code is certainly not needed when allocating/freeing pages.
Brendan wrote: I meant the syscall itself, not the parameters to syscalls. I was mostly referring to the strange "patch the syscall in the general protection fault handler" thing you do.
This is defined in the syscall interface. It can be redefined if needed, and then the software is recompiled and the new interface is available, although I doubt I will do that, as I find the patch interface quite adequate, especially since the code can be patched in different ways depending on operating mode and available devices. On low-end CPUs the syscalls will typically be patched to call gates, while on modern CPUs they will be patched to SYSENTER or SYSCALL instructions (depending on operating mode).
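
To illustrate the general idea only (a made-up sketch, not the actual RDOS mechanism or encoding): an application can be linked with a call site that faults on first use, and the #GP handler can then rewrite it in place with whatever entry method the running CPU supports:

Code: Select all

    ; hypothetical patchable call site: a 7-byte far call to a null selector,
    ; so the first execution raises #GP and identifies itself via the placeholder
    do_gettime:
        db 0x9A                ; far call opcode
        dd GETTIME_NUMBER      ; placeholder offset naming the requested call (made up)
        dw 0                   ; null selector -> #GP on first use
        ret

    ; the fault handler then overwrites those 7 bytes with, for example, a far
    ; call through a call gate on old CPUs, or a stub ending in SYSENTER/SYSCALL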

Brendan wrote: Surely the interface between the boot code and the kernel is the same in all 3 cases, and the kernel gets the same information passed to it in the same way, regardless of what the boot code (or the boot loader or the firmware) was. If you can't change the native boot code and the others need to be compatible, then you can't change the others either.
Not so. Additional parameters can be added for GRUB and UEFI. As an example, it is possible to add the memory pointer, which I discussed elsewhere.