Hi, I'm new here and I'm interested in kernel design. I have some questions:
First, is the "hybrid" terminology a genuine combination of monolithic and microkernel designs, or is it just marketing, as some people say?
Second, the Linux kernel now has some user-space mechanisms like inotify, FUSE, I/O drivers in user space, etc. So is Linux becoming a hybrid kernel?
Third, can a full microkernel, with message passing to IPC servers, outperform a monolithic kernel in performance or development time?
Thanks in advance.
question about kernel architectures and linux
Re: question about kernel architectures and linux
asdx wrote:
First, is the "hybrid" terminology a genuine combination of monolithic and microkernel designs, or is it just marketing, as some people say?

It's mostly marketing. The point of a microkernel is that very little code runs at the processor's highest privilege level. When dynamically loadable modules were implemented in the monolithic kernel, they called it hybrid. However, the loadable modules still run in kernel space, defeating the safety idea behind microkernels. The "hybrid" kernel of OS X is supposedly structured like a microkernel, but in practice all of its code runs in kernel space.
(Of course it's handy to be able to load drivers at run-time, but that alone gives no additional safety; the safety would come from moving the modules to user space.)
If parts of what is normally inside the kernel are moved outside, then it can rightfully be called hybrid, but I have yet to see that.
asdx wrote:
Can a full microkernel, with message passing to IPC servers, outperform a monolithic kernel in performance or development time?

Of course. A fast microkernel will be faster than a slow monolithic one.
Hi,
Development time is a completely separate issue. In my experience, in the early stages a microkernel is slower to develop because you need to spend more time designing (and documenting) the interfaces between separate pieces. However, in the long term it can be faster to develop an OS with a microkernel because it's easier to debug device drivers - you get feedback from the extra protection and exception handlers, you might be able to use some extra kernel features (like I/O port, IRQ and IPC logging), and you might even be able to use normal debugging tools (like GDB). Depending on your specific situation there's possibly other benefits, like being able to run 32-bit device driver binaries on a 64-bit kernel (without rewriting or recompiling them as 64-bit code), being able to start and stop device drivers to test them without rebooting, etc.
IMHO, while performance and development time are important, they aren't the most important issue. The most important issue is trust: do you trust an anonymous "binary blob" of code from companies like NVidia, Sony, etc (and/or any random hacker that's pretending to be NVidia, Sony, etc) enough to give the binary blob full access to everything? If you don't trust anonymous binary blobs, what are you going to do about them?
A good microkernel uses the principle of least privilege to minimize the amount of damage an anonymous binary blob can do, so that the binary blob doesn't need to be trusted much.
A monolithic OS must trust the anonymous binary blobs. Open source monolithic OSs tend to avoid binary blobs as much as possible because of the risks (or because of the lack of protection) - they try to force people into using open source instead. This is probably not a good idea because some companies have a lot of cash invested in their own intellectual property and are reluctant to make open source drivers, and some companies licence intellectual property from others and don't have a choice. In the end you could waste a massive amount of time reverse engineering something that a company would have been happy to provide as closed source (or under NDA, or under licence). Note: in addition to the technical reasons, there are also ideological and economic reasons for the "everything must be open source" mentality that I'd rather avoid.
For a closed source monolithic OS you're mostly screwed and need to rely on "brute force" methods - validation testing, digitally signed device drivers, anti-virus software, etc. IMHO none of these methods will work well for an OS with a small market share.
Cheers,
Brendan
Craze Frog wrote:
When they implemented dynamically loadable modules in the monolithic kernel they call it hybrid.

Brynet-Inc wrote:
Who does? Most people call that "Modular Monolithic"... not hybrid.

Agreed. Hybrid is where you have some things running in kernel space and other things running in user space (e.g. disk and network drivers in kernel space, with video drivers in user space).
asdx wrote:
Can a full microkernel, with message passing to IPC servers, outperform a monolithic kernel in performance or development time?

Craze Frog wrote:
Of course. A fast microkernel will be faster than a slow monolithic one.

Agreed, but the performance penalty can be significant - for the microkernel to perform better than a monolithic kernel you'd need an extremely well optimized microkernel and a very poorly implemented monolithic kernel.
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Yes, in theory a microkernel is more secure, because an error in a driver / server cannot affect the kernel itself.
However, in practice this potential security is usually not achieved.
Why not?
Consider a microkernel with the disk driver / server in a user-space process. The server chokes on a bug and dies. Unless the system was designed to recover gracefully (restarting the driver / server, rolling back the operation underway, reattempting the operation - which might just fail all over again), the data is lost, and chances are good you now have a "stable" kernel running that cannot access the disk storage anymore. Phew, I was so lucky this is a microkernel system. Where's the reset button?
A microkernel system can be more secure / stable than a monolithic one. It is not safer / more stable per se.
Every good solution is obvious once you've found it.
Brendan wrote:
Hybrid is where you have some things running in kernel space and other things running in user space (e.g. disk and network drivers in kernel space, with video drivers in user space).

That is actually 100% true, but people don't use the term like that. Unless I've totally misunderstood, both Mac OS X and Haiku are called hybrid, yet they run video drivers, as well as the other drivers you mentioned, in the kernel.