nexos wrote:nullplan wrote:(e.g. added security can be defeated with DMA unless an IOMMU is employed).
I think you answered your own problem
One claim of microkernels was always that they are better secured against faulty or malicious drivers (if a driver is faulty enough, the distinction starts to disappear). And now the solution is supposed to be another piece of hardware. Forgive my skepticism. An IOMMU only helps on chipsets that have one, only if you have a driver/infrastructure for it in your kernel, and only within its page granularity. And most IOMMUs I am aware of have 64kB pages.
nexos wrote:nullplan wrote:Microkernels force you to turn any driver interface into a message stream, and to serialize all requests down that stream and deserialize the answers (and the driver is running that same interface in reverse). All of that packaging is unnecessary.
You could do what I am going to do and have the kernel work with an unserialized form. Then you won't have any "extra packaging", and all copying can be deferred with CoW (unless [obviously] the buffer is small enough that CoW would be counterproductive).
What are you talking about? One example of what I was talking about: an application wants to read 20 bytes from a file. It issues the call read(fd, buf, 20).
In a monolithic kernel, that call works its way down the stack, through the FD abstraction layer, the VFS, the file system, and the volume drivers, and finally ends up in the page cache for the drive, where it turns into a request to load page 12345 from the drive and notify the caller. Depending on the implementation, that may be the only time the request has to be serialized in any form. Or, if you skip the I/O scheduler, not even then; you just update the page cache right out of the system call context. Once the page is updated, the appropriate 20 bytes are copied to the user.
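Roughly, every step on that path is a plain (possibly indirect) function call. A sketch, with invented names (fd_lookup, vfs_read, page_cache_get and friends are illustrative, not any real kernel's API), error handling omitted:

    /* Hypothetical monolithic read path: every layer boundary is a call. */
    ssize_t sys_read(int fd, void *buf, size_t len)
    {
        struct file *f = fd_lookup(current_process(), fd); /* FD layer  */
        return vfs_read(f, buf, len);                       /* VFS      */
    }

    ssize_t vfs_read(struct file *f, void *buf, size_t len)
    {
        return f->ops->read(f, buf, len);                   /* FS driver */
    }

    ssize_t examplefs_read(struct file *f, void *buf, size_t len)
    {
        /* The volume driver fills the page cache; this may sleep for the disk. */
        struct page *pg = page_cache_get(f->inode, f->pos / PAGE_SIZE);

        /* A real kernel would use a checked user-copy primitive here. */
        memcpy(buf, page_data(pg) + f->pos % PAGE_SIZE, len);
        f->pos += len;
        return len;
    }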
In a microkernel, the kernel sees the above call. Since read() is an I/O operation, it first has to package that request up and send it to the I/O manager. Which unpacks it and sends it to the VFS. Where it results in a request to the appropriate FS driver. Where it results in at least one request for a page from the appropriate volume driver. Where the request is changed only slightly before being sent to the page cache. It is not always the same request, but each one is a request that results from the original desire of the application to read 20 bytes from a file.
Each time, a message has to be constructed and then sent to another process, because none of these things are in the same process, and nor are they in the kernel. All of the things that in a monolithic kernel are just simple or indirect function calls become remote procedure calls in a microkernel, and each time you have to turn the request into some kind of structure and send it down a message stream.
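For illustration, here is what a single one of those hops might look like, say VFS server to FS server, for the same 20 bytes. The message layout and ipc_call() are invented for the sketch; real microkernels differ in the details, but the packaging step repeats at every layer boundary:

    /* Hypothetical request message and one RPC hop. */
    struct fs_request {
        uint32_t opcode;      /* FS_READ                                */
        uint32_t file_handle; /* the file, in the FS server's namespace */
        uint64_t offset;
        uint32_t length;      /* 20                                     */
    };

    struct fs_reply {
        int32_t  status;
        uint32_t length;
        /* payload follows, or arrives via a shared/granted buffer */
    };

    ssize_t vfs_forward_read(struct vnode *vn, uint64_t off, uint32_t len)
    {
        struct fs_request req = {
            .opcode = FS_READ, .file_handle = vn->remote_handle,
            .offset = off, .length = len,
        };
        struct fs_reply rep;

        /* Serialize, switch to another address space, block, deserialize. */
        if (ipc_call(vn->fs_server_port, &req, sizeof req, &rep, sizeof rep) < 0)
            return -EIO;
        return rep.status < 0 ? rep.status : (ssize_t)rep.length;
    }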
nexos wrote:nullplan wrote:n processes access the same hardware at the same time (serialized by a lock if need be).
You probably shouldn't carry on two different conversations with hardware at once. That is destined to not fare very well.
Depends on the hardware. Obviously, some hardware, like bulk-only USB sticks, only supports one transaction at a time. Others, like UAS drives, support multiple transactions at a time and only require serialization at enqueue and dequeue time.
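By "serialization at enqueue and dequeue time" I mean something like the following sketch (names invented, not a real driver): the lock protects only the command ring, not the whole transaction, so n processes can each have commands in flight at once.

    /* Hypothetical multi-queue device submission. */
    int device_submit(struct device *dev, struct io_request *req)
    {
        spin_lock(&dev->ring_lock);          /* serialize the enqueue only   */
        int tag = tag_alloc(dev);
        if (tag < 0) {
            spin_unlock(&dev->ring_lock);
            return -EBUSY;                   /* all command slots in use     */
        }
        dev->inflight[tag] = req;
        ring_post(dev, tag, req);            /* hand the command to hardware */
        spin_unlock(&dev->ring_lock);
        return 0;   /* the completion interrupt dequeues under the same lock */
    }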
nexos wrote:The m requesters communicating with n server threads "problem" in microkernels isn't a problem when you really think about it. In a monolithic kernel, two threads attempting to read from a file at the same time would only be a problem if they were running on the same CPU. At that, it still wouldn't be a big problem really as only one thread can run at a given moment on a single CPU no matter what. Once you prioritize requests, the problem disappears.
I have no idea what you are on about. There are many problems with m:n communication channels. Balance is only the first issue (is it even an issue? Shouldn't you want to keep the number of active threads low in the name of saving the planet?). How do you ensure non-starvation? Is fairness a goal? If so, how do you achieve it?
In a monolithic kernel, all that is really needed is a good scheduler and a good lock implementation (and optionally, a good I/O scheduler). In a microkernel, you suddenly also need a good m:n message queue implementation.
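To show what I mean, here is a deliberately naive m-producer/n-consumer queue in plain C with pthreads (all names invented). It is "correct", but which of the n server threads wakes up, and whether one of the m requesters can be starved, is left entirely to the scheduler; that is exactly the part a real microkernel has to get right.

    #include <pthread.h>

    #define QSIZE 64

    struct msg_queue {
        void           *slot[QSIZE];
        unsigned        head, tail, count;
        pthread_mutex_t lock;
        pthread_cond_t  not_empty, not_full;
    };

    void mq_send(struct msg_queue *q, void *msg)
    {
        pthread_mutex_lock(&q->lock);
        while (q->count == QSIZE)                    /* queue full: block */
            pthread_cond_wait(&q->not_full, &q->lock);
        q->slot[q->tail] = msg;
        q->tail = (q->tail + 1) % QSIZE;
        q->count++;
        pthread_cond_signal(&q->not_empty);
        pthread_mutex_unlock(&q->lock);
    }

    void *mq_recv(struct msg_queue *q)
    {
        void *msg;
        pthread_mutex_lock(&q->lock);
        while (q->count == 0)                        /* queue empty: block */
            pthread_cond_wait(&q->not_empty, &q->lock);
        msg = q->slot[q->head];
        q->head = (q->head + 1) % QSIZE;
        q->count--;
        pthread_cond_signal(&q->not_full);
        pthread_mutex_unlock(&q->lock);
        return msg;
    }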
nexos wrote:nullplan wrote:It also forces you to repeat a lot of code, because code cannot be shared between drivers (except possibly in shared libraries, but that is a whole other can of worms).
If you don't want shared libs, use static ones.
My opinions on shared libraries are well publicized elsewhere. Besides, they wouldn't help if you actually took advantage of the other benefit of microkernels, that you can build your drivers in different languages, because I guarantee you that Rust and Go and Python have their own thread-pool implementations and do not share code with libcxx.
Indeed, busybox is an attempt at shared libraries at their simplest: the idea is to put all of the code in one binary and next to nothing into its data section (that's why their global variables are dynamically allocated). The result is that all processes running busybox share text, because it is the same binary. A side effect is that the resulting binary is smaller than all of the implemented programs in separate files would be, because of synergy effects. Basically the same thing we have with microkernels and monolithic kernels. Now you attempt to recapture the lost synergy effects in a library.
nexos wrote:And besides, you could still write a driver in Python in a monolithic system. You'd just need an embedded interpreter in the kernel, or you JIT it in the build process. Look at Lua in NetBSD.
That thought did occur to me after sending my earlier reply. You can also write individual drivers in a different compiled language, so long as that language can be made to interface with the main kernel language, because linking object files together is exactly what a linker does. But the main thrust of the point, in my opinion, is less that you have a free choice of language, and more that you can write the code in the manner of a userspace program and take advantage of all the features that offers. Or to say it more laconically: hosted environment, not freestanding.