Microkernels are easier to build than Monolithic kernels

eekee
Member
Posts: 872
Joined: Mon May 22, 2017 5:56 am
Location: Kerbin
Discord: eekee

Re: Microkernels are easier to build than Monolithic kernels

Post by eekee »

It seems I've been dreaming of the best of both worlds: restartable services and everything thoroughly tested. It's a dream. :) It might be achievable in a single-developer or small-team OS. I chose interactive Forth partly because it makes simple testing so easy, but the other day I caught myself not testing a definition because it wasn't easy to test interactively.

nexos wrote:The idea that microkernels are "slow" is based on first-generation microkernels (MINIX and Mach, mainly). MINIX is a teaching OS, not really production quality. Mach is a failure-by-design in a lot of ways. Don't get me wrong, Mach has clever ideas but is implemented very poorly. This was due to Mach's poor spatial locality and its large, complicated IPC mechanism.
Smol correction: MINIX version 3 is not intended to be a teaching OS. It's aiming for full production quality with all services restartable. I last heard about it years ago, when they'd recently had to accept that they needed to use the MMU and still seemed to have a lot of work to do to adapt. That may have been 8-10 years ago. There can't have been a major release since then, or I'm sure we'd all have heard, so it doesn't seem to be progressing much faster than GNU Hurd.

Mach has poor performance? That's interesting to me, because I've recently been comparing Mach, in the form of OS X 10.5, with Windows 10 & 11. Different generations, I know, and it's only a subjective comparison, but there's this one use case where old OS X was clearly faster than new Windows. It's Eagle Mode, a zoomable UI in which the file manager reads the contents of files to display them as you zoom in and out through the directory structure. If I'm not mistaken, it uses POSIX-ish open/close/read/write; I don't see any signs of mmap. It was easily usable as my normal file manager in OS X (and Linux and FreeBSD too), but on Windows it stutters so much that it's uncomfortable in Win11 and barely usable in Win10. (Or perhaps: uncomfortable on exFAT, barely usable on NTFS.) I gather it's well known amongst Windows programmers that "sequential IO" (as they call it) is poorly optimized, but Apple must have done something interesting to Mach to put it in the same league as Linux in this use case. :)

(I'm not praising Eagle Mode here; it should have been designed with more asynchronous internal interfaces. It's just interesting to see what it reveals.)
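
For concreteness, the access pattern involved is essentially just the following; a rough sketch of plain sequential POSIX reads, not Eagle Mode's actual code:

Code:

/* The kind of access pattern I mean: plain sequential POSIX reads,
 * no mmap in sight. */
#include <fcntl.h>
#include <unistd.h>

/* Read a file front to back in fixed-size chunks, as a previewer would. */
static long preview_file(const char *path)
{
    char buf[64 * 1024];
    long total = 0;
    ssize_t n;
    int fd = open(path, O_RDONLY);

    if (fd < 0)
        return -1;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        total += n;  /* a real previewer would render buf here */
    close(fd);
    return total;
}

It's about as simple as file IO gets, which is why the difference between the two systems surprised me.
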
nexos wrote:Second-generation microkernels (L4) are much faster. E.g., a round-trip syscall on Mach took 500 µs; on L4, 16 µs. I believe a Unix system based on L4's design principles could potentially take the OS world by storm. By implementing small, fast messaging using many optimizations, microkernels could get somewhat close to their monolithic counterparts.
This could be very nice, but the phrase "many optimizations" raises a warning flag in my mind. It suggests complex or counterintuitive code, perhaps with difficult debugging for OS developers.
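
To make the quoted idea a bit more concrete for myself: the one optimization I do understand is L4's "call"-style IPC, where the send and the wait-for-reply happen in a single kernel entry instead of two separate syscalls. A sketch of the shape of it, with made-up names and the kernel faked by a stub:

Code:

/* "Call"-style IPC: one trap into the kernel per round trip instead of
 * a send syscall followed by a separate receive syscall.  Names are
 * made up; the kernel is faked with a stub so this runs standalone. */
#include <stdio.h>
#include <string.h>

typedef struct {
    unsigned long tag;       /* message label */
    unsigned long words[4];  /* small payload; registers in a real kernel */
} ipc_msg_t;

/* Stand-in for the real syscall: a kernel would switch straight to the
 * server thread and come back with its reply in the same kernel entry. */
static long ipc_call(int server, const ipc_msg_t *req, ipc_msg_t *reply)
{
    (void)server;
    memcpy(reply, req, sizeof *reply);  /* pretend the server echoed it */
    reply->tag |= 0x8000;               /* mark it as a reply */
    return 0;
}

int main(void)
{
    ipc_msg_t req = { .tag = 1, .words = { 42 } };
    ipc_msg_t reply;

    if (ipc_call(7, &req, &reply) == 0)
        printf("reply tag %#lx, word0 %lu\n", reply.tag, reply.words[0]);
    return 0;
}

That part at least seems simple enough; it's the register-passed payloads, direct thread switches and so on stacked on top that I worry would get hairy.
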

But speaking of first-generation microkernel Unixes, what about QNX? It was renowned for being quick (and very small). Perhaps they invented similar optimizations and just never published them.
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
thewrongchristian
Member
Posts: 422
Joined: Tue Apr 03, 2018 2:44 am

Re: Microkernels are easier to build than Monolithic kernels

Post by thewrongchristian »

eekee wrote:Mach has poor performance? That's interesting to me, because I've recently been comparing Mach, in the form of OS X 10.5, with Windows 10 & 11. Different generations, I know, and it's only a subjective comparison, but there's this one use case where old OS X was clearly faster than new Windows.
Mach, as used in NeXTSTEP (as well as OSF/1), was based on Mach 2.5, which was basically a hybrid kernel with the BSD kernel ported to Mach primitives. So Mach implemented the threading, memory management, etc., and the BSD syscall interface was implemented in kernel mode in terms of those Mach primitives, instead of the crufty old PDP-11/VAX-like primitives used previously.
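
Roughly, the layering was like this; the names here are simplified stand-ins, not the real Mach or BSD kernel interfaces:

Code:

/* The BSD "personality" still runs in kernel mode, but it builds Unix
 * semantics out of Mach-style tasks/threads/VM rather than poking the
 * hardware directly.  All names are simplified stand-ins. */

typedef int mach_task_t;
typedef int mach_thread_t;

/* The Mach primitives: tasks, threads, virtual memory. */
mach_task_t   mach_task_create(mach_task_t parent, int inherit_memory);
mach_thread_t mach_thread_create(mach_task_t task);
void          mach_vm_copy_space(mach_task_t from, mach_task_t to);

/* An in-kernel BSD syscall implemented in terms of them: */
long bsd_sys_fork(mach_task_t current)
{
    mach_task_t child = mach_task_create(current, 1 /* inherit memory */);

    mach_vm_copy_space(current, child);
    mach_thread_create(child);
    return (long)child;  /* really a BSD pid bound to the new task */
}
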

So it's not so far removed from Windows NT really, which isn't surprising, as Rick Rashid worked on both.
eekee
Member
Posts: 872
Joined: Mon May 22, 2017 5:56 am
Location: Kerbin
Discord: eekee

Re: Microkernels are easier to build than Monolithic kernels

Post by eekee »

thewrongchristian wrote:
eekee wrote:Mach has poor performance? That's interesting to me, because I've recently been comparing Mach, in the form of OS X 10.5, with Windows 10 & 11. Different generations, I know, and it's only a subjective comparison, but there's this one use case where old OS X was clearly faster than new Windows.
Mach, as used in NeXTSTEP (as well as OSF/1), was based on Mach 2.5, which was basically a hybrid kernel with the BSD kernel ported to Mach primitives. So Mach implemented the threading, memory management, etc., and the BSD syscall interface was implemented in kernel mode in terms of those Mach primitives, instead of the crufty old PDP-11/VAX-like primitives used previously.

So it's not so far removed from Windows NT really, which isn't surprising, as Rick Rashid worked on both.
Interesting. Thanks! That is a hybrid.
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
AndrewAPrice
Member
Posts: 2298
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: Microkernels are easier to build than Monolithic kernels

Post by AndrewAPrice »

I've been thinking about porting my OS to another architecture (ARM, to run on my Raspberry Pi), and because the kernel is a microkernel and limited in scope, it seems the simplest and cleanest option is just to rewrite the kernel from scratch for each architecture. Then, for my system library, the only thing I'd need to change is the system call layer for each architecture, and I could consolidate that code into one place to make it obvious what needs to be ported.
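
Roughly the split I'm imagining for the system library; the names and syscall numbers below are made up for illustration, not my actual code:

Code:

/* Shared part of the system library, identical on every architecture.
 * Names and syscall numbers are made up for illustration. */

/* The single arch-specific entry point, implemented once per port
 * (e.g. in arch/x86_64/syscall.c and arch/arm64/syscall.c): */
long perception_raw_syscall(long num, long arg1, long arg2, long arg3);

/* Hypothetical syscall numbers, shared across ports: */
enum {
    SYS_SEND_MESSAGE = 1,
    SYS_YIELD        = 2,
};

/* Everything above the raw stub is portable C: */
long send_message(long pid, long msg)
{
    return perception_raw_syscall(SYS_SEND_MESSAGE, pid, msg, 0);
}

void yield(void)
{
    perception_raw_syscall(SYS_YIELD, 0, 0, 0);
}

Then porting the library really is just rewriting that one raw stub.
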
My OS is Perception.
nullplan
Member
Posts: 1760
Joined: Wed Aug 30, 2017 8:24 am

Re: Microkernels are easier to build than Monolithic kernels

Post by nullplan »

AndrewAPrice wrote:I've been thinking about porting my OS to another architecture (ARM, to run on my Raspberry Pi), and because the kernel is a microkernel and limited in scope, it seems the simplest and cleanest option is just to rewrite the kernel from scratch for each architecture.
Wat? But the kernel is supposed to present the same interface to userspace. Even a microkernel needs to do that. And memory management (the one core component that must be in the kernel) isn't all that different between architectures.

Normally, kernels have architecture-specific abstractions to let the core kernel, the main logic, remain unchanged. Then porting to a new architecture is just a matter of finding out all the differences and building abstractions for them. This of course requires the kernel author to separate arch-specific and arch-independent parts from the word go. So when you say that rewriting the kernel is simpler, that tells me you didn't do that, and will continue not to do that, but will instead write everything again.
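
Concretely, the split is just an arch contract that each port fills in, something like this (hypothetical names, only to show the shape):

Code:

/* arch.h: the contract every port must implement.  The arch-independent
 * core only ever calls through these, so it compiles unchanged. */
#include <stdint.h>

struct thread;  /* defined by the arch-independent core */

/* Implemented once per architecture, in arch/x86_64/, arch/arm64/, ... */
void arch_switch_context(struct thread *from, struct thread *to);
int  arch_map_page(uintptr_t virt, uintptr_t phys, unsigned flags);
void arch_enable_interrupts(void);

/* Core kernel code, written exactly once: */
void schedule_to(struct thread *current, struct thread *next)
{
    arch_switch_context(current, next);
}

A new port then means implementing a handful of arch_* functions, not forking the scheduler, the IPC code and everything else.
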

I think it would be better to expend the effort of separating the arch-specific parts out from the arch-independent ones. Because your way would leave you with two independent code bases indefinitely, increasing maintenance burden. If you port to more architectures, you get even more code bases. You will go crazy fixing the same bugs n times over and over again.
Carpe diem!
AndrewAPrice
Member
Posts: 2298
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: Microkernels are easier to build than Monolithic kernels

Post by AndrewAPrice »

nullplan wrote:Then porting to a new architecture is just a matter of finding out all the differences and building abstractions for them. This of course requires the kernel author to separate arch-specific and arch-independent parts from the word go. So you saying that rewriting the kernel is simpler tells me you didn't do that, and will continue to not do that, but instead will write everything again.
I wasn't being entirely literal about the "from scratch" part. There will be reuse, but instead of "take the existing kernel and figure out what can be abstracted", I was thinking of "rewrite for a new architecture, but figure out what can be reused and move it into shared code."

The kernel is an implementation of the system calls, and each port needs to implement those system calls. The system call interface will vary slightly between architectures (the opcode and the available registers), but as long as we implement the same user-space C interface to the system calls, all user code (except code requiring assembly) that compiles against these syscall stubs will be portable.
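
For example, the per-architecture half of such a stub might look like the following; the trap instruction and register assignments are the only things that change per port, and the register choices below are placeholders following common convention rather than my real ABI:

Code:

/* Arch-specific syscall stub.  Register assignments are placeholders
 * following common convention, not my real ABI. */
long perception_raw_syscall(long num, long arg1, long arg2, long arg3)
{
    long ret;
#if defined(__x86_64__)
    __asm__ volatile("syscall"
                     : "=a"(ret)
                     : "a"(num), "D"(arg1), "S"(arg2), "d"(arg3)
                     : "rcx", "r11", "memory");
#elif defined(__aarch64__)
    register long x0 __asm__("x0") = arg1;
    register long x1 __asm__("x1") = arg2;
    register long x2 __asm__("x2") = arg3;
    register long x8 __asm__("x8") = num;
    __asm__ volatile("svc #0"
                     : "+r"(x0)
                     : "r"(x1), "r"(x2), "r"(x8)
                     : "memory");
    ret = x0;
#else
#error "port me: add a syscall stub for this architecture"
#endif
    return ret;
}

Everything that calls this stub stays identical across ports.
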

My micro-"kernel" itself is so small that it feels like it's just a thin abstraction providing messaging and multitasking. My first hunch is that maybe 20% of my code isn't arch-specific, but once I undertake this task it might turn out to be 80%.
My OS is Perception.