Are Microkernels really better?
Re: Are Microkernels really better?
Pure monolithic kernels are silly, because they essentially force all drivers to reside in the kernel, which means they require complex module loading systems and other mental gymnastics to create a reasonable, workable system.
Pure microkernels are silly, because they force everything to reside outside the kernel, which results in unnecessary overheads.
Pure exokernels are silly, because they are basically application hypervisors instead of an actual OS that provides services that users and applications want.
The best design for an OS is a hybrid approach that mitigates the weaknesses of each pure design. It's best to have a monolithic/exokernel hybrid that has all fundamental features (file system, scheduling, memory management, etc) inside the kernel, and all other features (such as specific device drivers) available to use directly in userspace.
The only drivers that need to reside in the kernel are ones that are required for the basic services that the kernel provides to the user and applications, which are mouse, keyboard, display, speaker, and network support. Outside of those, all other drivers can (and should) reside in userspace.
Re: Are Microkernels really better?
Qbyte wrote: Pure monolithic kernels are silly, because they essentially force all drivers to reside in the kernel, which means they require complex module loading systems and other mental gymnastics to create a reasonable, workable system.

Why? I have a monolithic kernel, and I don't have module loading. All drivers are part of the kernel, linked at build time. The only dynamic part is that their initialization functions are put in a list with the other initialization functions by the linker. Module loading would require some kind of symbol export ABI, and then basically a linker inside the kernel. Linux has that, and they can keep it, but I want no part of it. If you want to configure the kernel for your hardware, you can elect to simply not build some of these drivers.
Qbyte wrote: Pure microkernels are silly, because they force everything to reside outside the kernel, which results in unnecessary overheads.

Weasel word alarm! Precisely how much overhead is necessary? Necessary for what? To attain the kind of modularity microkernels have, this is absolutely necessary.
Qbyte wrote: Pure exokernels are silly, because they are basically application hypervisors instead of an actual OS that provides services that users and applications want.

And then those services can be implemented on top of those kernels. I fail to see the problem.
Qbyte wrote: The best design for an OS is a hybrid approach that mitigates the weaknesses of each pure design.

Golden mean fallacy detected! "Hybrid" doesn't actually mean anything. It means you are not strictly on any of these paths, but then, those paths are barely defined at all. The cleanest definition of "microkernel" versus "monolithic kernel" I have ever heard came from Andy Tanenbaum, who used it to critique Linux. He defined a microkernel to be any kernel that has the VFS running outside the kernel binary. That is a hard criterion, you either have the VFS in the kernel or you don't, so there is no room for any hybrid. The only possible third category here is made up of exokernels, where there is not necessarily a VFS at all.
Note that you can still implement any of these ideas badly. At work, I work with an operating system called OS-9. Now OS-9 is a microkernel. The kernel itself only cares about memory management, task management, and the first-level exception handlers. Half of the system calls are handled in it (namely those having to do with memory or task management), and the rest are only packaged and sent off to an external program called "ioman". There are many external programs that together form the operating system, with the interfaces defined only by convention, and this means that you can never rely on anything in an OS-9 system.

The VFS is organized in drives, but what those drives are called and what they point to is way up in the air. There is a drive called /dd, for "default drive", and this is supposed to point to the main storage device, but on our machines it points to a RAM disk. You can also never know what the network cards are called. There is a standard way for a PIC driver to register itself, but all other drivers only have convention to go on to figure out how to register their interrupt handlers.

And finally, all drivers have a bit set to make them run in system state. And if they crash? Then it is a hardware exception in system state, and that causes a system shutdown without sync. This system is very far removed from Minix. This system state thing also means there is no memory protection in these drivers. Because hey, who needs that? Except everyone who makes mistakes...
Carpe diem!
Re: Are Microkernels really better?
nullplan wrote: At work, I work with an operating system called OS-9.

Wow. I am blown away. OS-9 is still alive? I used to play with Basic09 on OS-9 back in the 80s on a CoCo 3.
I am curious what you do for work... Are you using this on the 6809 (original OS-9) or the 68000 (OS-9/68K or OS-9000)? The more details you can provide the more excited the nerd in me will be.
To this day I still am fascinated by OS-9 and its design... But it's been a while since I've looked at it.
Lack of memory protection on the 6809 was too bad, but OS-9 was still quite usable. Well, for playing games on a home computer, anyway.
Re: Are Microkernels really better?
nullplan wrote: Why? I have a monolithic kernel, and I don't have module loading. I have all drivers part of the kernel linked at build time. The only dynamic part is that their initialization function is put in a list with the other initialization functions by the linker. Module loading would require some kind of symbol export ABI, and then basically a linker inside the kernel. Linux has that, and they can keep it, but I want no part of it. If you want to configure the kernel for your hardware, you can elect to simply not build some of these drivers.

By "module loading" I'm not referring strictly to the implementation used by Linux, but rather to any method of extending the kernel by placing driver code into it. This is a defining characteristic of pure monolithic kernels, whose doctrine states that executing driver code is the responsibility of the kernel.
nullplan wrote: Weasel word alarm! Precisely how much overhead is necessary? Necessary for what? To attain the kind of modularity microkernels have, this is absolutely necessary.

I'm saying that the alleged "modularity" of the microkernel file system philosophy results in no tangible benefit to the user or system. Therefore, it does nothing but introduce a performance and complexity penalty. Whatever fancy file system you conjure up, it will almost always be better and faster in the kernel than as a server. And since every non-trivial system needs a file system, you might as well implement it as part of the kernel.
nullplan wrote: And then those services can be implemented on top of those kernels. I fail to see the problem.

The problem is that when you do that, you end up looking like a microkernel again, so we arrive back at the question of why not implement the important services in the kernel, given the major benefits that brings. Exokernels have some powerful ideas that are worth incorporating into an OS, but a pure one isn't really well suited to a general-purpose OS.
nullplan wrote: Golden mean fallacy detected! "Hybrid" doesn't actually mean anything. It means you are not strictly on any of these paths, but then, those paths are barely defined at all. The cleanest definition of "microkernel" versus "monolithic kernel" I have ever heard came from Andy Tanenbaum, who used it to critique Linux. He defined a microkernel to be any kernel that has the VFS running outside the kernel binary. That is a hard criterion, you either have the VFS in the kernel or you don't, so there is no room for any hybrid. The only possible third category here is made up of exokernels, where there is not necessarily a VFS at all.

That fallacy is when a designer settles on a middle ground thinking that it represents the best compromise between two opposing ideas, which is not at all what I'm advocating for. I'm saying to use an idea from one philosophy where it works best, and to use one from another where it doesn't. Free yourself from the constraints of a rigid ideology.
And "hybrid" does mean something, it means that a kernel has mixed and matched important details from different OS architectures so as to create an architecture with the benefits of each in their respective areas. An OS can still primarily be monolithic, micro, etc, but with important exceptions in certain areas.
I think Tanenbaum's definition is bad as it completely revolves around a single narrow aspect of an OS, while in reality there are other important factors at play. For example, if a kernel is entirely monolithic (all os services and drivers are in kernel space) except that it uses an external VFS, then that makes it a microkernel? That seems quite ludicrous to me, as those other services being in kernel space is diametrically opposed to the generally understood microkernel philosophy. It also doesn't really specify a solid definition of a monolithic kernel other than "not being a microkernel, which is an OS with a VFS outside the kernel" so it creates a false dichotomy of "there are only microkernels and non-microkernels".
Re: Are Microkernels really better?
Qbyte wrote: Pure monolithic kernels are silly, because they essentially force all drivers to reside in the kernel, which means they require complex module loading systems and other mental gymnastics to create a reasonable, workable system.

Pure microkernels are silly, because they force everything to reside outside the kernel, which results in unnecessary overheads.

Pure exokernels are silly, because they are basically application hypervisors instead of an actual OS that provides services that users and applications want.

The best design for an OS is a hybrid approach that mitigates the weaknesses of each pure design. It's best to have a monolithic/exokernel hybrid that has all fundamental features (file system, scheduling, memory management, etc) inside the kernel, and all other features (such as specific device drivers) available to use directly in userspace.

The only drivers that need to reside in the kernel are ones that are required for the basic services that the kernel provides to the user and applications, which are mouse, keyboard, display, speaker, and network support. Outside of those, all other drivers can (and should) reside in userspace.

Yes, OS developers shouldn't pay so much attention to what nomenclature of kernel it is, but to what goals, purposes, and requirements it is supposed to fulfill. As a hobbyist you might not have strict requirements or goals, but you might have an interesting architecture that you want to pursue, which doesn't even have to be the fastest.
Both Linux and Windows are hybrid kernels, and they have changed over time. Typically, the trend has been that when computers have become fast enough, functionality has been moved out of the kernel into user space. The benefits have been, for example, stability and the ability to restart the service/driver, and the performance penalty was not bad enough to rule it out. For example, much of the graphics functionality in Windows that was previously in the kernel for performance reasons today runs in user space.
During your OS development you are likely to encounter revelations like "I can run this in user space instead", and you will perhaps move it. Will you move it or will you keep it as it is because you are afraid some god from another dimension will come and punish you for violating some OS paradigm?
You should read about different kernels and their approaches, and you will likely come up with your own based on that. With OS development, the permutations of what you can do are endless.
Re: Are Microkernels really better?
Any sufficiently complex piece of software (and that definitely includes an OS) is going to need to be a mix of paradigms, depending on what each component needs; otherwise things are going to suffer badly where a paradigm stops being suitable (also see: all the projects that try to shoehorn OOP onto even the parts that obviously don't benefit from objects at all, and basically end up with glorified singletons). An actual OS will probably still lean towards one of the archetypes overall, though.
Also wait a minute:
nullplan wrote: Why? I have a monolithic kernel, and I don't have module loading. I have all drivers part of the kernel linked at build time. The only dynamic part is that their initialization function is put in a list with the other initialization functions by the linker. Module loading would require some kind of symbol export ABI, and then basically a linker inside the kernel. Linux has that, and they can keep it, but I want no part of it. If you want to configure the kernel for your hardware, you can elect to simply not build some of these drivers.

So you need to rebuild the kernel (and possibly reinstall it) if anything in the computer ever changes? (e.g. replaced the video card or whatever) That… doesn't sound very user friendly to me, unless you mean it's using already-built object binaries and that the rebuilding is part of driver install/removal (which requires a reboot for changes to take effect, but that sounds like a good idea anyway).
Re: Are Microkernels really better?
kzinti wrote: Wow. I am blown away. OS-9 is still alive? I used to play with Basic09 on OS-9 back in the 80s on a CoCo 3.

It certainly is. Currently in the custody of MicroWare; here's their website: https://microware.com/
kzinti wrote: I am curious what you do for work...

Embedded development. I'd rather not tell the precise market, since it is a small one, and so that would quickly be identifying information.
kzinti wrote: Are you using this on the 6809 (original OS-9) or the 68000 (OS-9/68K or OS-9000)? The more details you can provide the more excited the nerd in me will be.

My company started using OS-9000 on the M68k, but stopped using that about 10 years before I came into the company. OS-9 has by now been ported to every 32-bit architecture under the sun, and we are currently using it on PowerPC (a Freescale E300 system).

Memory protection does exist, and is implemented in the form of identity mapping (OS-9 does not use virtual memory, which is a shame, since I've had memory requests fail due to fragmentation before), but that is only active in user mode, or "Problem State", as the PowerPC books call it (I do love that IBM identified the user as being a problem). In system state, paging is turned off entirely, and all system state processes have access to all memory. Well, almost all memory. There is a way to protect memory from system state corruption by setting things in the cache table, but I have no idea how that works.
OS-9 works with files called "memory modules". Such a module can be a program (user state or system state), or it can just be a data container. On startup, the bootloader loads a boot file into memory, but that boot file is simply a bunch of those modules concatenated together. The first must be the kernel, and another module "init", a data module, contains initialization data. Mainly stuff that would be in a device tree, if this were Linux. The kernel then starts a bunch of programs, all named in the init module, and those do things like initializing multitasking by setting up the DEC register, or setting up the IPIC. Finally, the "ioman" is started, and then another bunch of programs is started, ending with "sysgo". That program will start to look for a drive to boot on, and load the shell from there and execute the startup script.
All devices (network cards, serial lines, drives, everything) are described by "descriptors". Those are little modules naming a driver and containing parameters for the driver. In the case of drives, they also name the driver for the file system. This design means it is easy to put the code to mount the boot drive into the boot file, but horrible to work with hotplug devices. The Internet stack is provided entirely by out-of-kernel modules. Opening a socket means opening the device /tcp. This might be cool, but the TCP stack is incredibly simplistic and fails to even implement Nagle's algorithm.
kzinti wrote: To this day I still am fascinated by OS-9 and its design... But it's been a while since I've looked at it.

It's **** and Managarm does it better. Next generation, we are switching to Linux. But with our luck, we'll continue to have to support OS-9 until I retire.
Carpe diem!
Re: Are Microkernels really better?
Qbyte wrote: This is a defining characteristic of pure monolithic kernels, whose doctrine states that executing driver code is the responsibility of the kernel.

Certainly so. However, I recently extended Linux with a driver for a device on the far end of an i2c link. The i2c driver was in the kernel, of course, but the Anybus driver was solely in user space. Super user, but user space nonetheless.
Qbyte wrote: I'm saying that the alleged "modularity" of the microkernel file system philosophy results in no tangible benefit to the user or system.

I'm sure Andy Tanenbaum would disagree. Here he is at Embedded World, showing off his big red button: https://www.youtube.com/watch?v=vlOsy0PZZyc
The red button injects faults into the system and can cause some to crash. However, since the kernel is so small, it is unlikely to be hit, and all other modules can just be restarted. Not having the system crash at random for no reason (maybe due to a fault in RAM or something) is definitely a benefit to the user. And the system.
Qbyte wrote: Whatever fancy file system you conjure up, it will almost always be better and faster in the kernel than as a server.

Faster, yes; better is a value judgment. Microkernel developers sacrifice speed for safety, and they know that (or else they fool themselves). But on the other hand, it isn't that much speed, and it is not like you need all that power for web browsing anyway. If you give up 10% of your performance, but in return your system doesn't randomly crash anymore, I think most people would be willing to pay that price.
Qbyte wrote: The problem is that when you do that, you end up looking like a microkernel again, so we arrive back at the question of why not implement the important services in the kernel because of the major benefits that brings. Exokernels have some powerful ideas that are worth incorporating into an OS, but a pure one isn't really well suited for a general purpose OS.

Point taken; however, a pure kernel is not a complete OS anyway. And who said anything about general purpose? If you want general purpose, you stick to one of the dozen or so established OSes out there. No need for any new development. Exokernels are an idea to fill a niche where currently no OS exists.
Qbyte wrote: I think Tanenbaum's definition is bad as it completely revolves around a single narrow aspect of an OS,

True, but at least it is deterministic. You can always tell where any OS falls with that definition. No need for wishy-washy "hybrids", or "My OS is X, but...".
Sik wrote: So you need to rebuild the kernel (and possibly reinstall it) if anything in the computer ever changes? (e.g. replaced the video card or whatever) That… doesn't sound very user friendly to me, unless you mean it's using already-built object binaries and that the rebuilding is part of driver install/removal (which requires a reboot for changes to take effect but that sounds like a good idea anyway).

First of all, my OS is very friendly, it is just picky about its friends. It may have picked up some of the eccentricities of its creator, yes. Secondly, no, not necessarily. You can build the kernel with all drivers included. It's not like that would result in gigabytes at the end. Not yet, anyway. But yes, if you have a kernel specially tailored to your hardware, and you change the hardware, you have to rebuild.

Think Linux without modules. You can build a kernel with all modules built in, but that will take forever to build/load/initialize. You can build one specially made for your hardware, but that will fail to boot on any other hardware. You can go somewhere in the middle, and that will boot on more configurations, but not all supported ones. My thought was that you are going to change your hardware less often than you are going to boot, so the added speed on boot-up is probably worth the hassle when switching configurations. It's basically the same reasoning why a lot of projects from suckless.org have their configuration built in.
Also, you act as if reinstalling the kernel were some major feat. The kernel is merely an ELF binary. The only difference is that it has a load address of 0xffff_ffff_8000_0000. Installing it is as easy as a single "cp", followed by a reboot. Maybe I will add kexec functionality to make the reboot go quicker.
Carpe diem!
Re: Are Microkernels really better?
My opinion on this debate is that the important features should be inside the kernel, and the less important ones should be outside, which would make it an "almost microkernel". For example, I'd never put the memory manager outside the kernel because, in order to restart it in case of a crash, I'd still need to allocate memory. As for the VFS (which was mentioned earlier by @Qbyte), who would locate the VFS executable if the VFS had crashed? Could I fetch it from a centralized online drivers repository if networking is available? Possibly for the VFS and for all actual filesystem and device drivers, though not for the memory manager, as the problem is elsewhere.

But, apart from the VFS (which is always necessary for the correct functioning of the OS), I wouldn't put tens of actual filesystem drivers inside the kernel. The user probably won't need most of them and, if they do, the necessary driver can be dynamically started (there is, however, the question of how to efficiently determine which one to start). The user for the most part will need only one or two "standard" filesystems for local disk storage, FAT for external USB and SD drives and for the UEFI System Partition if it exists, and ISO-9660 for CDs. As a matter of principle, I still wouldn't put these three or four inside the kernel.
As for device drivers, I think I'd put almost everything in userspace. However, things like timers and interrupts would be definitely inside the kernel. Things like PCI, xHCI, AHCI and other such generic controllers, well, I don't know. Maybe some of them would be inside the kernel. And, although I can't classify it as a driver, I'd have console text rendering to the framebuffer inside the kernel as well for reasons relating to early kernel logging (though the graphics driver itself will be external if more than VBE or GOP is supported).
Despite being a new member of these forums, I've been reading them for years, and also I have developed a minimal OS a couple of years ago and thus I know some parts of the OS theory. But I think I'll postpone restarting the OS, because I'd like to start with a more well-thought design and goals this time, and also because currently I have other priorities (including other projects). That said, I think my opinions are still valid (and not necessarily correct or wrong, there are no such adjectives here).
But to answer the original question, I think that bloated userspace software is more of a concern than context switches. And there are ways to bring context switches down to a minimum. For example, I recall reading about batch syscalls, an idea that was very strongly advocated for by Brendan (yes, the known one). Basically, the userspace program creates an array of single syscalls for the kernel, and then the kernel does them all before returning to userspace.

Of course, there is the question of what to do if only the first two out of five syscalls succeed and the third fails. Should it return and let the userspace program know that only two syscalls succeeded? I think yes, because the userspace program can revert the results of the first two syscalls using the same array as in the final clean-up, just with the starting index moved to 5 - 2 = 3 (assuming the things are cleaned up in reverse order and that each syscall used to allocate a resource has a respective syscall used to free the same resource; in other cases the clean-up logic might be a little more complex).

I guess the same mechanism could be applied to driver commands too, possibly with more significant results. Additionally, because you'll need drivers developed specifically for your OS anyway, batch driver commands can be used while still adhering to POSIX (if you consider it a must; myself, I'm undecided at this point).
However, I also recall that Brendan himself posted about Red Hat patenting the method years after Brendan suggested it here for the first time (US patent 9038075). The patent is absurd because there has been "prior art" in these forums, and very possibly elsewhere. But to quote Brendan himself:
Note: I am not claiming to be the only person to have thought of this idea; and (because it really is quite obvious) multiple people could have thought of it before I did. I'm mostly only warning people that following my suggestion might not be wise in hindsight.
The general concept of sending a list of steps (rather than sending one step at a time) probably dates back to early hand-writing and cooking recipes (if not before); and would've been very obvious (sending "10 steps to bake a cake" as 10 separate pieces of paper would be very counter-intuitive) even to people that weren't "trained in the art of programming" back then.

Interestingly, HUAWEI patented the exact same thing in 2017 (US patent 10379906). But maybe both patents could be challenged if enough prior art is found? Maybe there isn't enough of it here, because Brendan had the ability to edit everyone's posts (including, of course, his own; what I'm writing at this point is itself an edit to what I posted previously). For example, on Usenet, can posts be retroactively edited? AFAIK no, so that may possibly constitute "prior art". Same for IRC, on the condition that there are reliable logs (as opposed to fabricated ones, created solely to challenge the patents).
Last edited by testjz on Mon Aug 31, 2020 2:51 pm, edited 2 times in total.
Re: Are Microkernels really better?
To me, there are five main kernel types:

Pure monolithic, which is where everything - kernel, scheduler, and drivers - is one giant binary. Although this would be easy to develop and have no third-party code running in kernel mode, it is inflexible. To install a driver requires recompiling the whole kernel! Or at least relinking.

Modular monolithic, which is where the kernel and drivers run in kernel mode, but drivers are stored in modules and are loaded and linked into the kernel at runtime. This is more flexible, but it generally has third-party code running in kernel mode, so it is less secure. Linux and *BSD are this.

Monolithic hybrid, which is where the kernel and drivers run in kernel mode, but the kernel is structured like a microkernel. It may have the ability to run user-mode drivers, and have more things in user mode, but most drivers will run in kernel mode. Windows NT, ReactOS, and XNU (macOS) are this. Arguably, this is just marketing, as Linus Torvalds said.

Micro hybrid, which is where non-third-party drivers (bus controllers, VFS, driver management, etc.) run in kernel mode, while third-party drivers run in user mode. This would be more secure, as third-party code runs outside the kernel, but still isn't as stable as a pure microkernel, as a fair amount of stuff runs in kernel mode. Can't think of any OSes like this.

A microkernel, which is where most things run in user mode, like the VFS, bus controllers, and driver management, along with third-party stuff. In some cases, memory management and process management would also go in user mode (I think running process management and memory management in user mode is overkill). More stable and more secure, but slower. Minix, Hurd, Managarm, and QNX are this.

Of course, there are also exokernels, megalithic kernels, and others, but those are really the five main types. I think that high-performance systems should use monolithic hybrid, and high-security systems should use microkernel. Micro hybrid needs a bit more research.
Re: Are Microkernels really better?
nullplan wrote: Why? I have a monolithic kernel, and I don't have module loading. I have all drivers part of the kernel linked at build time. The only dynamic part is that their initialization function is put in a list with the other initialization functions by the linker. Module loading would require some kind of symbol export ABI, and then basically a linker inside the kernel. Linux has that, and they can keep it, but I want no part of it. If you want to configure the kernel for your hardware, you can elect to simply not build some of these drivers.
That's fine if you don't plan on supporting a lot of hardware, or if you aren't distributing kernel binaries and expect every user to build their own kernel.
But if I run:

$ du -BM /lib/modules/`uname -r`/kernel/drivers

on Ubuntu, the final total I get is 173 megabytes. If you want to support a broad range of hardware, and will be distributing a default kernel to all users that supports all of that hardware, your kernel size is going to be on that order of magnitude. Heck, even if you ignore the modules and just take vmlinuz itself, a modern Linux kernel weighs about 10 MB.
Re: Are Microkernels really better?
nexos wrote: Micro hybrid, which is where non third party drivers (bus controllers, VFS, driver management, etc.) run in kernel mode, while third party drivers run in user mode. This would be more secure, as third party code runs outside the kernel, but still isn't as stable as a pure microkernel, as a fair amount of stuff runs in kernel mode. Can't think of any OSes like this.
I think the design I suggested could be classified as such if I decide to put inside the kernel the drivers for generic controllers (PCI and newer, xHCI and older, AHCI, etc).

nullplan wrote: Why? I have a monolithic kernel, and I don't have module loading. I have all drivers part of the kernel linked at build time. The only dynamic part is that their initialization function is put in a list with the other initialization functions by the linker. Module loading would require some kind of symbol export ABI, and then basically a linker inside the kernel. Linux has that, and they can keep it, but I want no part of it. If you want to configure the kernel for your hardware, you can elect to simply not build some of these drivers.

linguofreak wrote: That's fine if you don't plan on supporting a lot of hardware, or if you aren't distributing kernel binaries and expect every user to build their own kernel. But if I run:

$ du -BM /lib/modules/`uname -r`/kernel/drivers

on Ubuntu, the final total I get is 173 megabytes. If you want to support a broad range of hardware, and will be distributing a default kernel to all users that supports all of that hardware, your kernel size is going to be on that order of magnitude. Heck, even if you ignore the modules and just take vmlinuz itself, a modern Linux kernel weighs about 10 MB.

You could always, during OS installation, have the option of building the entire system from source and optimizing it for the current CPU, bus, and peripheral configuration. But then you have a problem if you downgrade the CPU (some instructions will not be understood), or modify the buses and/or peripherals (the kernel will not have drivers for the new ones). I think you would need at least loadable modules for flexibility in hardware configurations. Dynamically loadable modules would be better but, if it is a basic device, the OS may still fail to boot before being able to dynamically load modules. The same applies to userspace drivers.
The user should do this: put the module for the new hardware in the initial ramdisk but don't remove the old one, replace the hardware, and try to boot to make sure the new module loads correctly. If it does, remove the module for the old hardware and you are OK. If not, put the old hardware back in, boot, and try again with the modules.
Re: Are Microkernels really better?
"Monolithic" implies that the kernel is a SINGLE image, thus - "monolithic", if that qualifier term is supposed to mean something. Neither Windows, nor ReactOS and, I believe, OS X as well, aren't single image kernels. Windows literally constructs itself at load time from numerous modules. and even unloads portions of them after and loads new ones at run time. It's definitely a modular kernel. there is no just one image. Linux on the other hand IS monolithic and IS NOT modular, since its "modules" are just object files, left to be linked into a SINGLE kernel image later. it is NOT modular, rather, delayed linked monolithic.nexos wrote: Modular monolithic, which is where kernel and drivers run in kernel mode, but drivers are stored in modules and are loaded and linked into the kernel at runtime. This is more flexible, but it has third party code running in kernel mode generally, so it is less secure. Linux and *BSD are this.
Monolithic hybrid, which is where kernel and drivers run in kernel mode, but the kernel is structured like a microkernel. May have the ability to run user mode drivers, and have more things in user mode, but most drivers will run in kernel mode. Windows NT, ReactOS, and XNU (macOS) are this. Arguably, this is just marketing as Linus Torvalds said.
Torvalds maybe "said" that because he sh1ts everything what is not like his UNIX clone and maybe for linux not looking like a retarted clone, not bringing anything new. what that "just marketing" means? it's BS, because those are two different organization strategies and they do have visible consequenses for kernels that employ them.
the main idea of microkernels is transferring drivers into user mode processes and thus having address protection from each other and from the kernel mode microkernel, it implies modularity, but IN NO WAY it makes every modular based kernels, that don't make drivers user processes, "monolithic". that is just stupid idea of UNIX zealots, failing to realize and admit, that the REAL modular design is better in many ways.
On topic, I won't bring anything new. It's a so useless debate, that only rant of lamers about Secure Boot could win over it by its uselessness. anyone just makes their hobby the way as they want, that's all "pros and cons". I am not a fan of microkernels, I don't see any advantages of them. I like modularity, but, as said above, it's NOT tight to "microkernelness". again, nothing new - say, your super "protected" user space driver for storage stack fails its operation, how this fail is "safer" and less "fatal", than if the same happens with kernel mode driver of the same functionality in a hybrid kernel?
Re: Are Microkernels really better?
nexos wrote: To me, there are 5 main kernel types:
Pure monolithic, ....
Modular monolithic, ...
Monolithic hybrid, which is where kernel and drivers run in kernel mode, but the kernel is structured like a microkernel. May have the ability to run user mode drivers, and have more things in user mode, but most drivers will run in kernel mode. Windows NT, ReactOS, and XNU (macOS) are this. Arguably, this is just marketing as Linus Torvalds said.
This sounds about right. I've agonised about this in my own design, but I think some things are just best in the kernel (paging, VFS) and some things can be moved to user space as necessary (specific file systems; most stuff can be satisfied from the kernel VFS cache anyway).
Device drivers are the bit I'm torn on at the moment. Some drivers can probably live happily in user space - basically anything that is fire-and-forget (such as GFX).
But more request/response devices might start being bottlenecked by their synchronous nature, and switching contexts multiple times per request starts adding significant overhead. Perhaps that can be mitigated by batching where possible (such as submitting multiple read requests at once).
I've just been doing some interrupt work, basically making interrupts work with shared level-triggered interrupts, and I can't see how such things are meant to work in a microkernel. Sure, you can model interrupts as user messages, but while a shared interrupt is being processed, you have to block further level-triggered interrupts from occurring until the current interrupt is done. And if you're handing off messages to potentially fault-tolerant user-space drivers, an interrupt might be disabled for however long it takes to spot a fault and restart the driver, which could be on the order of several seconds (an eternity).
nexos wrote: Micro hybrid, which is where non third party drivers (bus controllers, VFS, driver management, etc.) run in kernel mode, while third party drivers run in user mode. This would be more secure, as third party code runs outside the kernel, but still isn't as stable as a pure microkernel, as a fair amount of stuff runs in kernel mode. Can't think of any OSes like this.
I think this is where my kernel will end up. Some core drivers will run in kernel mode - basically things that are performance critical, or that need to handle potentially shared interrupts as highlighted above.
So, in kernel space, I intend to keep:
- VFS, caching, paging
- Some core drivers - console (keyboard, framebuffer, so the user can get control), USB, AHCI, SCSI host controllers.
- Some bootstrap filesystem support. Currently USTAR and FAT, perhaps something like squashfs in the future.
Then other drivers can move to user space:
- Device mappers (partitioning schemes, RAID, volume management.)
- Non-bootstrap filesystems.
- GFX
- USB devices
- Line disciplines
- Network stack.
But, I'd probably lump this and monolithic-hybrid together, as by your definitions, the only difference is the scale of user drivers, which is more a policy decision than a mechanism one. For me, the discriminator between kernel/user mode might well end up being interrupt requirements.
nexos wrote: A microkernel, ....
Re: Are Microkernels really better?
zaval wrote: Windows literally constructs itself at load time from numerous modules, and even unloads portions of them afterwards and loads new ones at run time. It's definitely a modular kernel; there is no single image. Linux on the other hand IS monolithic and IS NOT modular, since its "modules" are just object files, left to be linked into a SINGLE kernel image later. It is NOT modular; rather, delayed-linked monolithic.
There is really no difference between delayed-linked monolithic and modular. Linux is definitely modular. Not as modular as Windows NT, but still modular.