Exterminate All Operating System Abstractions


Exterminate All Operating System Abstractions

Post by mikegonta »

hotos-exokernel wrote:The defining tragedy of the operating systems community has been the definition of an operating system as software that both multiplexes and abstracts physical resources.
Poor reliability - Poor adaptability - Poor performance - Poor flexibility
hotos-exokernel wrote:We contend that the solution to all of these difficulties is straightforward: eliminate operating system abstractions.
hotos-exokernel wrote:... our proposed structure solves the traditional problems of reliability, efficiency, and extensibility; these points have at their core the simple principle that the most efficient, reliable, and extensible OS abstraction is the one that is not there.
For those without tl;dr syndrome, the complete position paper is here.

Mike Gonta
look and see - many look but few see

https://mikegonta.com

Re: Exterminate All Operating System Abstractions

Post by Brendan »

Hi,

Yeah, this is all extremely stupid. Instead of getting rid of abstractions, an exokernel has the exact same abstractions implemented in a library (that no sane programmer ever bothers to change, and no sane person should ever be expected to change), plus an additional layer of abstractions in the kernel to "securely multiplex".

The end result is twice as many abstractions (and twice as many problems caused by abstractions), with no practical benefits whatsoever (and some vague rhetoric involving theoretical benefits that never occur in practice).


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

Re: Exterminate All Operating System Abstractions

Post by ~ »

Software abstractions are useful in moderation, but they are shameful when used to mask differences in hardware that should not exist in the first place. For example, why use arbitrary I/O ports for sound, video, disk, network, USB, printers/scanners/cameras, clocks and every other device, instead of scientifically assigning them fixed resources and I/O ports as if they were actual device command numbers (and where several devices of a kind can be present, make them selectable or reserve a large range of ports)? Why not assign the same ports to IDE and SATA devices, in addition to other fixed, reserved ones?

What must be completely obliterated is non-standard access to hardware and its programming. All devices of the same kind should be programmed and accessed in the same way on any conceivable machine once a standard is defined for them (curiously, excellent standards have existed almost from the start, like ATA/ATAPI for disks, and the Sound Blaster 16/AWE32/AWE64 and AC'97 for sound). Only a few device classes need a little more standardization to make drivers less necessary: high-resolution 3D-accelerated video, image input/output devices like printers and scanners, and TV cards.

But there is still a lot to fix on the hardware side. All devices of the same kind should have the exact same standard features, no matter the brand (for example, all new sound cards should fully implement the Sound Blaster AWE64, which is SB16 compatible, a fully capable MIDI synthesizer, and AC'97).

Hardware should also initialize itself to a fully usable state on its own, and be simple enough to use that it works under any DOS version or other OS. Failing that, such hardware can at least be modeled in software under existing OSes; standardizing hardware usage down to a common, unchanging denominator is one of the most necessary and lowest abstraction layers, given that hardware device types were never 100% uniform to begin with.
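A minimal sketch of what fixed resources buy, assuming x86 at ring 0, GCC-style inline assembly, and a controller that still decodes the legacy primary ATA channel: the ports the ATA standard fixes are enough to probe for a disk with no vendor driver at all.

/* Sketch: probe the legacy primary ATA channel at its standard-fixed ports.
 * Assumes ring 0 on x86; AHCI-only controllers that don't decode the legacy
 * range won't answer. The full IDENTIFY protocol also zeroes the sector
 * count/LBA registers and checks ERR; this is the bare outline. */
#include <stdint.h>

static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    __asm__ __volatile__("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

static inline void outb(uint16_t port, uint8_t v)
{
    __asm__ __volatile__("outb %0, %1" : : "a"(v), "Nd"(port));
}

#define ATA_PRIMARY_BASE   0x1F0                  /* fixed by the standard's legacy mode */
#define ATA_REG_DRIVE_HEAD (ATA_PRIMARY_BASE + 6)
#define ATA_REG_CMD_STATUS (ATA_PRIMARY_BASE + 7)
#define ATA_CMD_IDENTIFY   0xEC
#define ATA_SR_BSY         0x80
#define ATA_SR_DRQ         0x08

/* Returns 1 if a master device answered IDENTIFY, 0 otherwise. */
int ata_primary_master_present(void)
{
    outb(ATA_REG_DRIVE_HEAD, 0xA0);               /* select the master device */
    outb(ATA_REG_CMD_STATUS, ATA_CMD_IDENTIFY);

    if (inb(ATA_REG_CMD_STATUS) == 0)
        return 0;                                 /* nothing on the channel */

    while (inb(ATA_REG_CMD_STATUS) & ATA_SR_BSY)
        ;                                         /* wait for BSY to clear */

    return (inb(ATA_REG_CMD_STATUS) & ATA_SR_DRQ) != 0;
}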
YouTube:
http://youtube.com/@AltComp126

My x86 emulator/kernel project and software tools/documentation:
http://master.dl.sourceforge.net/projec ... ip?viasf=1

Re: Exterminate All Operating System Abstractions

Post by SpyderTL »

~ wrote:All devices of the same kind should have the exact same standard features, no matter the brand (for example, all new sound cards should fully implement the Sound Blaster AWE64, which is SB16 compatible, a fully capable MIDI synthesizer, and AC'97).
The SoundBlaster AWE64 is not AC'97 compliant, as far as I can tell. And AC'97 doesn't define the method for communicating with an audio device. (I found this out the hard way.)

The problem with defining an interface at the hardware level is that it restricts future development. If the standard defines a 32-bit register that holds a 32-bit memory address, that device will have issues running on a 64-bit architecture, for example.

This is why all interfaces are now defined in software (drivers) -- because software can be updated far more easily than hardware. It allows hardware manufacturers to, say, offer a DirectX 11 driver for a video card that was manufactured when DirectX 9 was the latest version available.
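The 32-bit-register point can be made concrete with a hypothetical descriptor layout (not any real device's): once a standard freezes a 32-bit buffer-address field, a 64-bit machine can only hand the device memory below 4 GiB, and the driver has to bounce-buffer everything else.

/* Hypothetical DMA descriptor, frozen the way a 1990s hardware standard might
 * have frozen it. Nothing here is a real device's register layout. */
#include <stdint.h>
#include <stdio.h>

struct dma_descriptor {
    uint32_t buffer_phys;   /* 32-bit physical address: cannot point above 4 GiB */
    uint32_t length;
};

/* Returns 0 on success, -1 if the device simply cannot reach the buffer. */
int fill_descriptor(struct dma_descriptor *d, uint64_t phys, uint32_t len)
{
    if (phys + len > 0x100000000ULL)
        return -1;          /* driver must copy through a bounce buffer below 4 GiB */
    d->buffer_phys = (uint32_t)phys;
    d->length = len;
    return 0;
}

int main(void)
{
    struct dma_descriptor d;
    /* A buffer at 6 GiB physical: the "standardized" device can't see it,
     * however new the rest of the machine is. Prints -1. */
    printf("%d\n", fill_descriptor(&d, 0x180000000ULL, 4096));
    return 0;
}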

Never mind the fact that getting hardware manufacturers to agree on a single standard is like getting all OS Developers to agree on one OS design... :wink:
Project: OZone
Source: GitHub
Current Task: LIB/OBJ file support
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott

Re: Exterminate All Operating System Abstractions

Post by ~ »

There are several modern cards capable of accepting Sound Blaster commands to varying degrees, alongside their native mode, but of course something (some OS driver) has to initialize the board first.

They could perfectly well be implemented as CPU instruction set extensions, where the I/O port numbers and memory regions take the place of opcode numbers and their operands (with a standard hardware interface). After so many variations of devices that ultimately do the same thing, we really need a fully standard machine whose base never changes except at expansion milestones (mimicking the progression from the 8086 through the 286, 386, 486 and Pentium, with maximum if not full backward compatibility).

Until then, PCs will need a hardware standardization milestone for the most modern hardware. That hasn't truly happened since the earlier Pentiums, apparently both to keep "wild" programmers at bay and because of the dizzying pace at which new technologies are developed and deployed.

For instance, Intel and AMD could define the opcodes and PC ports to drive sound, video, USB, TV, etc...

As for data size, you could set the device mode to run in 8, 16, 32, 64 or more bits as well as other features.

It looks like the more economical option. The CPU has proven a convenient framework, and it seems most reasonable to implement the functions of other devices as subsets of x86 instructions. x86 CPUs and chipsets will probably do that in the future.

Re: Exterminate All Operating System Abstractions

Post by ~ »

SpyderTL wrote:Never mind the fact that getting hardware manufacturers to agree on a single standard is like getting all OS Developers to agree on one OS design... :wink:
When different brands cannot agree on a single design, the foundational Computer Science layers of the industry should scientifically come up with structure and value definitions for any and all devices and software, definitions as optimal and as law-based as those of Physics, Chemistry, Biochemistry, Medicine, Biology, Math and the other exact sciences.

You can see how the people at Intel did exactly that from the beginning. For example, naming CPUs with numbers like 8086, 8087, 8088, 486 and 586 lends itself to identifying the CPU directly from instructions such as CPUID, about as efficiently as conceivably possible.

So the court ruling that bare numbers cannot be protected as CPU names misses the point. The lawyers didn't understand that the numbering is there to make the architecture efficient and directly identifiable at run time, not an attempt to monopolize a number.

We have binary numbers and bitwise optimizations, so why not use them for things like assigning I/O ports with maximum efficiency? The standard VGA is one device whose ports are assigned this way, with a single bit selecting between the Mono and Color register ranges.
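That VGA detail is small enough to show directly. A sketch, assuming x86 at ring 0 and GCC-style inline assembly: bit 0 of the Miscellaneous Output Register (readable at port 0x3CC) selects whether the CRTC registers decode at the color base 0x3D4 or the mono base 0x3B4.

/* Sketch: pick the VGA CRTC index port from the Miscellaneous Output Register.
 * Bit 0 ("I/O Address Select") set -> color range 0x3D4, clear -> mono 0x3B4. */
#include <stdint.h>

static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    __asm__ __volatile__("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

uint16_t vga_crtc_index_port(void)
{
    uint8_t misc = inb(0x3CC);      /* Misc Output Register, read address */
    return (misc & 1) ? 0x3D4 : 0x3B4;
}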


Things like that should keep us seeking the most scientifically and universally efficient way to define exact processes, such as I/O ports and protocols for every imaginable device type and combination, much as chemistry defines its reactions; the consensus here should be built on bitwise and electronic behaviour. Maybe that would even precipitate quantum-computing effects, since everything would rest on physical laws rather than on arbitrary software implementations.

Re: Exterminate All Operating System Abstractions

Post by Rusky »

Brendan wrote:Instead of getting rid of abstractions, an exokernel has the exact same abstractions implemented in a library (that no sane programmer ever bothers to change, and no sane person should ever be expected to change), plus an additional layer of abstractions in the kernel to "securely multiplex".
This is demonstrably false: on today's operating systems, people already bother to change these abstractions, with user-space threading (both stackful and stackless), non-file-based object stores (databases, relational and otherwise; disk images for VMs), user-space network stacks, containers, etc. on top of the kernel's abstractions. This is not because they're insane, but because the kernel's abstractions are not a good fit for every application (which is not to say they're bad, just that they're too specific).
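The stackless user-space threading case fits in a few lines. A minimal illustrative sketch in plain C (no real runtime's API): tasks are explicit state machines round-robined by a trivial loop, a scheduling abstraction built entirely above whatever threads the kernel provides.

/* Minimal stackless "tasks": each task is a function that runs one step and
 * says whether it wants to run again. A loop round-robins them -- cooperative
 * user-space scheduling layered on top of the kernel's own abstractions. */
#include <stdio.h>

struct task {
    int state;                      /* explicit state instead of a saved stack */
    int (*step)(struct task *);     /* returns 0 when the task is finished */
};

static int count_to_three(struct task *t)
{
    printf("task %p: step %d\n", (void *)t, t->state);
    return ++t->state < 3;
}

int main(void)
{
    struct task a = {0, count_to_three}, b = {0, count_to_three};
    struct task *tasks[] = {&a, &b};
    int live = 2;

    while (live > 0) {
        live = 0;
        for (int i = 0; i < 2; i++) {
            if (tasks[i]->step && tasks[i]->step(tasks[i]))
                live++;
            else
                tasks[i]->step = 0; /* retire finished tasks */
        }
    }
    return 0;
}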

So we already have the "twice as many abstractions" that you claim exokernels would introduce, though really it's even more than that because there are many different designs for the user-space abstractions. Further, we're starting to improve the situation by designing new APIs that multiplex hardware without imposing their own policies.

For example, graphics APIs like OpenGL and older versions of Direct3D do too much for their clients - memory management, synchronization, etc. - which leads to lots of pointless heuristics and nonsense like game-specific code in drivers. Newer graphics APIs like Vulkan, Direct3D 12, Metal, and even a newer OpenGL style called AZDO instead hand these responsibilities to the client, so there's less abstraction and fewer conflicting heuristics on both sides. This leads to real performance improvements with more straightforward code in e.g. the recent game DOOM.

Re: Exterminate All Operating System Abstractions

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:Instead of getting rid of abstractions, an exokernel has the exact same abstractions implemented in a library (that no sane programmer ever bothers to change, and no sane person should ever be expected to change), plus an additional layer of abstractions in the kernel to "securely multiplex".
This is demonstrably false: on today's operating systems, people already bother to change these abstractions, with user-space threading (both stackful and stackless), non-file-based object stores (databases, relational and otherwise; disk images for VMs), user-space network stacks, containers, etc. on top of the kernel's abstractions. This is not because they're insane, but because the kernel's abstractions are not a good fit for every application (which is not to say they're bad, just that they're too specific).
All of these things are built on top of abstractions that the kernel provides. For an exo-kernel, they'd be built on top of the "kernel library abstraction" (that is built on top of the extra "securely multiplex abstraction").

Now imagine if each process replaced/ignored the "kernel library abstraction" and did its own thing. 5 different processes try to read 5 different files at the same time (and a sixth process is trying to write to one of those files) - where is the file system cache, and what enforces IO priorities? It can't be in the kernel (it only securely multiplexes!) and it can't be in the "kernel library" because all processes replaced/ignored it.

For everything that requires centralised management (file caches, DNS caches, file systems, IO priorities, ..., figuring out which process should be allowed access to a received network packet, etc) you need something to "centrally manage". You can put it in the kernel (monolithic), or put it in a process of its own (micro-kernel), or put it in a library that can't be changed. You can't put it in a library that can be changed, because then there's chaos (all processes are free to ignore policies that ensure correct management of shared resources); unless you have extremely strict rules like "you can replace the file system abstraction, but it must maintain the global file cache like <this> and must do global IO priorities like <that> to ensure it cooperates with other processes properly", with no way to enforce those rules (and prevent chaos).
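The cache half of this argument can be put as code. A toy sketch with hypothetical structures (no real kernel's): read-after-write consistency holds only because every caller goes through the one shared table; give each process a private copy inside its own library and a writer's update never reaches the other readers.

/* Toy block cache. Correctness depends on ALL processes going through this
 * single shared instance; if every process kept a private copy in its own
 * replaceable library, a writer's dirty block would never reach other readers. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define BLOCK_SIZE  512
#define CACHE_SLOTS 8

struct cache_entry {
    uint64_t block;                 /* which disk block the slot holds */
    int      valid, dirty;
    uint8_t  data[BLOCK_SIZE];
};

static struct cache_entry cache[CACHE_SLOTS];    /* the one shared copy */

static struct cache_entry *lookup(uint64_t block)
{
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].valid && cache[i].block == block)
            return &cache[i];
    return NULL;
}

void cache_write(uint64_t block, const uint8_t *buf)
{
    struct cache_entry *e = lookup(block);
    if (!e) {                       /* naive placement: reuse slot 0 */
        e = &cache[0];
        e->block = block;
        e->valid = 1;
    }
    memcpy(e->data, buf, BLOCK_SIZE);
    e->dirty = 1;                   /* writeback to the disk driver happens later */
}

int cache_read(uint64_t block, uint8_t *buf)
{
    struct cache_entry *e = lookup(block);
    if (!e)
        return 0;                   /* miss: caller must go to the disk driver */
    memcpy(buf, e->data, BLOCK_SIZE);
    return 1;
}

int main(void)
{
    uint8_t wr[BLOCK_SIZE] = "written by the sixth process", rd[BLOCK_SIZE];
    cache_write(42, wr);
    if (cache_read(42, rd))         /* seen only because both calls share cache[] */
        printf("%s\n", rd);
    return 0;
}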
Rusky wrote:So we already have the "twice as many abstractions" that you claim exokernels would introduce, though really it's even more than that because there are many different designs for the user-space abstractions. Further, we're starting to improve the situation by designing new APIs that multiplex hardware without imposing their own policies.

For example, graphics APIs like OpenGL and older versions of Direct3D do too much for their clients - memory management, synchronization, etc. - which leads to lots of pointless heuristics and nonsense like game-specific code in drivers. Newer graphics APIs like Vulkan, Direct3D 12, Metal, and even a newer OpenGL style called AZDO instead hand these responsibilities to the client, so there's less abstraction and fewer conflicting heuristics on both sides. This leads to real performance improvements with more straightforward code in e.g. the recent game DOOM.
Graphics has always been bad (too much communication between game and driver, with too little opportunity for either to optimise). They've been trying to resolve this problem the wrong way (shifting half the driver into the game) for too long already. It will continue to cause problems, like the OS being unable to recover when the game crashes, games that require a specific video card (and break when you install/use a different video card), nothing working when you try to stretch the same game across 2 or more monitors (that use 2 or more video cards) and everything breaking when you expect a multi-tasking OS to actually multi-task (run 2 or more games).


Cheers,

Brendan

Re: Exterminate All Operating System Abstractions

Post by Rusky »

Brendan wrote:All of these things are built on top of abstractions that the kernel provides. For an exo-kernel, they'd be built on top of the "kernel library abstraction" (that is built on top of the extra "securely multiplex abstraction").
No, the reason to switch to an exokernel is specifically to bypass that stuff- they would all be built directly above the secure multiplexing layer.
Brendan wrote:Now imagine if each process replaced/ignored the "kernel library abstraction" and did its own thing. 5 different processes try to read 5 different files at the same time (and a sixth process is trying to write to one of those files) - where is the file system cache, and what enforces IO priorities? It can't be in the kernel (it only securely multiplexes!) and it can't be in the "kernel library" because all processes replaced/ignored it.
It can be in the kernel- that's what secure multiplexing is.
Brendan wrote:For everything that requires centralised management (file caches, DNS caches, file systems, IO priorities, ..., figuring out which process should be allowed access to a received network packet, etc) you need something to "centrally manage".
And if you had ever read any of MIT's actual papers, you'd realize that exokernels do all of those things just fine.
Brendan wrote:the OS being unable to recover when the game crashes
Windows has been able to recover when games crash (including in the driver) for years.
Brendan wrote:games that require a specific video card (and break when you install/use a different video card)
Such as?

Re: Exterminate All Operating System Abstractions

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:All of these things are built on top of abstractions that the kernel provides. For an exo-kernel, they'd be built on top of the "kernel library abstraction" (that is built on top of the extra "securely multiplex abstraction").
No, the reason to switch to an exokernel is specifically to bypass that stuff- they would all be built directly above the secure multiplexing layer.
Sure, the reason is to bypass all this in theory. In practice it doesn't work beyond research toys.
Rusky wrote:
Brendan wrote:Now imagine if each process replaced/ignored the "kernel library abstraction" and did its own thing. 5 different processes try to read 5 different files at the same time (and a sixth process is trying to write to one of those files) - where is the file system cache, and what enforces IO priorities? It can't be in the kernel (it only securely multiplexes!) and it can't be in the "kernel library" because all processes replaced/ignored it.
It can be in the kernel- that's what secure multiplexing is.
Oh, so now an exo-kernel is "secure multiplexing, plus various caches, plus thread priorities, plus IO priorities, plus ...." - essentially identical to a monolithic kernel in every way except name? Yay!
Rusky wrote:
Brendan wrote:For everything that requires centralised management (file caches, DNS caches, file systems, IO priorities, ..., figuring out which process should be allowed access to a received network packet, etc) you need something to "centrally manage".
And if you had ever read any of MIT's actual papers, you'd realize that exokernels do all of those things just fine.
I asked "where" and your answer is "it does". Is that even supposed to make sense?
Rusky wrote:
Brendan wrote:the OS being unable to recover when the game crashes
Windows has been able to recover when games crash (including in the driver) for years.
In some cases, sure. More often a game crashes and you end up with an invisible dialog box behind a DirectX context with no way to interact with it, where even "control+alt+delete" doesn't work.
Rusky wrote:
Brendan wrote:games that require a specific video card (and break when you install/use a different video card)
Such as?
Such as every game that tries to do "fancy" 3D graphics that has been released for PC in the last ~20 years. For one simple example (chosen merely because you mentioned the game already) the latest Doom requires "NVIDIA GTX 670 2GB/AMD Radeon HD 7870 2GB or better". Why? Because the "abstraction" used for graphics is extremely leaky. The result? Multiple different web sites dedicated to helping people figure out if games will run (plus an uncountable number of people asking if something will/won't work on various forums and social media sites); plus an entire "games console" industry that owes its existence to solving the "will the game run" problem by ensuring everyone has the exact same hardware that game developers tested on.


Cheers,

Brendan

Re: Exterminate All Operating System Abstractions

Post by Rusky »

Brendan wrote:Oh, so now an exo-kernel is "secure multiplexing, plus various caches, plus thread priorities, plus IO priorities, plus ...." - essentially identical to a monolithic kernel in every way except name? Yay!
No, secure multiplexing is not a separate thing from priorities - priorities are part of what makes it secure. The difference is that an exokernel only enforces what is necessary for security (e.g. preventing something with low priority from DoSing something with high priority), while letting userspace bypass things that are not (e.g. forcibly swapping out a process's pages to disk when it may have had others that could simply have been discarded).
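That page example is roughly what the exokernel papers call visible revocation: the kernel decides when frames must come back (the secure-multiplexing half), and the library OS decides which of its frames to surrender (the policy half). A sketch of such an interface, with hypothetical names throughout rather than MIT's actual API:

/* Hypothetical revocation interface in the exokernel spirit: the kernel picks
 * WHEN frames must be returned, the library OS picks WHICH ones. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t frame_t;

struct libos_ops {
    /* Called by the kernel when it needs `count` frames back from this
     * environment; the library OS fills `out` with frames it can afford to
     * lose (clean or discardable pages rather than whatever the kernel
     * would have swapped blindly). */
    size_t (*surrender_frames)(frame_t *out, size_t count, void *ctx);
};

/* Kernel side (sketch): enforce the quota, delegate the choice. If the library
 * OS stalls or returns too few, the kernel is still free to reclaim by force --
 * that's the "secure" half of the multiplexing. */
size_t revoke(struct libos_ops *ops, void *ctx, frame_t *out, size_t count)
{
    return ops->surrender_frames(out, count, ctx);
}

/* Example library-OS policy: hand back the highest-numbered frames first. */
static size_t give_up_high_frames(frame_t *out, size_t count, void *ctx)
{
    frame_t top = *(frame_t *)ctx;
    for (size_t i = 0; i < count; i++)
        out[i] = top - i;
    return count;
}

int main(void)
{
    frame_t top = 1000, out[3];
    struct libos_ops ops = { give_up_high_frames };
    size_t n = revoke(&ops, &top, out, 3);
    for (size_t i = 0; i < n; i++)
        printf("returned frame %llu\n", (unsigned long long)out[i]);
    return 0;
}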
Brendan wrote:I asked "where" and your answer is "it does". Is that even supposed to make sense?
I'd already answered "where" with "in the kernel" and "see the MIT papers for more details."
Brendan wrote:In some cases, sure. More often a game crashes and you end up with an invisible dialog box behind a DirectX context with no way to interact with it, where even "control+alt+delete" doesn't work.
Windows hasn't done this for years. Instead, it has restarted crashed video drivers without losing anything but the game whose state was corrupted.
Brendan wrote:Such as every game that tries to do "fancy" 3D graphics that has been released for PC in the last ~20 years. For one simple example (chosen merely because you mentioned the game already) the latest Doom requires "NVIDIA GTX 670 2GB/AMD Radeon HD 7870 2GB or better". Why? Because the "abstraction" used for graphics is extremely leaky. The result? Multiple different web sites dedicated to helping people figure out if games will run (plus an uncountable number of people asking if something will/won't work on various forums and social media sites); plus an entire "games console" industry that owes its existence to solving the "will the game run" problem by ensuring everyone has the exact same hardware that game developers tested on.
This is not at all what you claimed, which was "games that require a specific video card (and break when you install/use a different video card)." This is about requiring enough resources, not a particular piece of hardware- no different from requiring a minimum amount of RAM (which can come from any vendor) or a minimum amount of disk space (which can come from any vendor) or a minimum CPU speed (which can come from any vendor). It doesn't even require a specific GPU architecture the way most programs require a specific CPU architecture!

Re: Exterminate All Operating System Abstractions

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:Oh, so now an exo-kernel is "secure multiplexing, plus various caches, plus thread priorities, plus IO priorities, plus ...." - essentially identical to a monolithic kernel in every way except name? Yay!
No, secure multiplexing is not a separate thing from priorities - priorities are part of what makes it secure. The difference is that an exokernel only enforces what is necessary for security (e.g. preventing something with low priority from DoSing something with high priority), while letting userspace bypass things that are not (e.g. forcibly swapping out a process's pages to disk when it may have had others that could simply have been discarded).
As far as I understand it; for something like a disk driver the kernel would only manage "who owns which blocks" (and possibly provide some way to revoke ownership to prevent denial of service). As soon as you start doing IO priorities in the kernel you end up with queues of pending requests in the kernel too (otherwise the kernel can't choose which request has the highest priority), and you end up with a full-blown disk driver with all the "policy" you'd have in a monolithic kernel.
Rusky wrote:
Brendan wrote:I asked "where" and your answer is "it does". Is that even supposed to make sense?
I'd already answered "where" with "in the kernel" and "see the MIT papers for more details."
In that case; "you're wrong, see the Internet for more details". Can you see how my answer (like yours) is not an answer at all?

If you look at the MIT paper/s you'll see they describe exactly what I've been describing - "secure multiplexing only, with no policy (and no IO priorities and no file system caches, etc) in the kernel at all".
Rusky wrote:
Brendan wrote:In some cases, sure. More often a game crashes and you end up with an invisible dialog box behind a DirectX context with no way to interact with it, where even "control+alt+delete" doesn't work.
Windows hasn't done this for years. Instead, it has restarted crashed video drivers without losing anything but the game whose state was corrupted.
Skyrim (running on Windows 8) did this to me twice this week.

Note: The only way I've found to recover from this (without rebooting) is to sign out (because I can't kill the game any other way, including from the task manager), then sign back in again (and wait for Steam to re-authenticate before restarting Skyrim).
Rusky wrote:
Brendan wrote:Such as every game that tries to do "fancy" 3D graphics that has been released for PC in the last ~20 years. For one simple example (chosen merely because you mentioned the game already) the latest Doom requires "NVIDIA GTX 670 2GB/AMD Radeon HD 7870 2GB or better". Why? Because the "abstraction" used for graphics is extremely leaky. The result? Multiple different web sites dedicated to helping people figure out if games will run (plus an uncountable number of people asking if something will/won't work on various forums and social media sites); plus an entire "games console" industry that owes its existence to solving the "will the game run" problem by ensuring everyone has the exact same hardware that game developers tested on.
This is not at all what you claimed, which was "games that require a specific video card (and break when you install/use a different video card)." This is about requiring enough resources, not a particular piece of hardware- no different from requiring a minimum amount of RAM (which can come from any vendor) or a minimum amount of disk space (which can come from any vendor) or a minimum CPU speed (which can come from any vendor). It doesn't even require a specific GPU architecture the way most programs require a specific CPU architecture!
For the old graphics APIs, newer cards normally do work, but older cards and/or drivers often don't work because of missing features. For the new graphics APIs (Direct3D 12, Vulkan) it's too early to tell; but I do think it's going to get much worse and will cause "video card/driver too new for game" problems within a few years.

In my opinion it's all completely broken. It's like someone being worried if (e.g) Photoshop will work with their scanner, or having to buy a newer printer that's compatible with the latest word processor. These things don't happen because (e.g.) scanners and printers use a "nowhere near as leaky" abstraction; where the worst case is merely worse quality.

Basically; for graphics; it should never be a question of "will it work" and should only ever be a question of "how well does it work" (frame rate, graphics quality). That can not happen because the abstraction isn't abstract.


Cheers,

Brendan

Re: Exterminate All Operating System Abstractions

Post by Rusky »

Brendan wrote:As soon as you start doing IO priorities in the kernel you end up with queues of pending requests in the kernel too (otherwise the kernel can't choose which request has the highest priority), and you end up with a full-blown disk driver with all the "policy" you'd have in a monolithic kernel.
MIT did put an IO scheduler in the kernel. What they didn't put there was anything to decide which blocks to schedule.
Brendan wrote:Basically; for graphics; it should never be a question of "will it work" and should only ever be a question of "how well does it work" (frame rate, graphics quality). That can not happen because the abstraction isn't abstract.
By that logic, programs should never be allowed to rely on CPU-side hardware features either. But go ahead and try to design a graphics API that accepts the sorts of assets we see in AAA games today and produces something at all useful on a fixed-function pipeline from the 90s- I'll wait.

Re: Exterminate All Operating System Abstractions

Post by embryo2 »

Brendan wrote:Now imagine if each process replaced/ignored the "kernel library abstraction" and did its own thing. 5 different processes try to read 5 different files at the same time (and a sixth process is trying to write to one of those files) - where is the file system cache, and what enforces IO priorities? It can't be in the kernel (it only securely multiplexes!) and it can't be in the "kernel library" because all processes replaced/ignored it.
Now imagine Windows with drivers completely independent of the gigabytes of MS code. Next, imagine MS implemented its OS on top of that set of independent drivers. Next, we, the users, look at MS code inefficiencies for our particular tasks and finally decide to implement "kind of a Linux" on top of the same set of independent drivers. Yes, it's not a small task, but it's comparable to any Linux distribution, which was implemented by independent developers. Now you can imagine a world of "kind of a Linux" OSes on top of drivers for every consumer device invented so far (because vendors want to be compatible with MS). Each such "OS", of course, can implement basic stuff in a different manner, but each OS's goal is to implement everything in a consistent manner. So every distribution should work smoothly. But if somebody wants to make his life worse, he is absolutely free to have the mix of components you have described above. And no sane person will then point a finger at any sane distribution, because it's that somebody's fault and not a systematic problem of the new OS world.
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)

Re: Exterminate All Operating System Abstractions

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:As soon as you start doing IO priorities in the kernel you end up with queues of pending requests in the kernel too (otherwise the kernel can't choose which request has the highest priority), and you end up with a full-blown disk driver with all the "policy" you'd have in a monolithic kernel.
MIT did put an IO scheduler in the kernel. What they didn't put there was anything to decide which blocks to schedule.
With an IO scheduler in the kernel; how can 2 different "library OSs" do IO priorities differently? For example; maybe one library wants to switch to a "bandwidth optimised mode" (where it ignores IO priorities and does a pure LOOK algorithm) when there are too many pending requests; and another library is trying to do "soft real-time" and wants to ensure IO for real-time threads always happens before anything else.

Surely you can see that this is impossible to resolve - either:
  • There's a global IO priority policy that can't be changed; and therefore it's impossible for different libraries to do IO priorities differently, which defeats part of the theoretical benefit of having an exo-kernel
  • The libraries are responsible for their own "local IO priorities" scheme; and "global IO priorities" can't be done, which defeats part of the theoretical benefit of having an exo-kernel
The essential problem here is that you have mutually exclusive global requirements - you can't ignore IO priorities (under load) to make one library happy while also guaranteeing IO for real time threads happens first to make another library happy.
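The conflict is easy to see once the two policies are written down as request-ordering rules. A toy sketch (hypothetical request structure, nobody's real scheduler): a LOOK-style elevator orders the shared queue by block number, a soft real-time policy orders it by priority, and the one shared queue can only obey one rule at a time.

/* Two request-ordering policies that cannot both govern one shared disk queue. */
#include <stdlib.h>
#include <stdio.h>

struct io_request {
    unsigned long block;    /* target block number */
    int priority;           /* higher = more urgent (the soft real-time library) */
};

/* Policy A: bandwidth-optimised elevator -- order by block, ignore priority. */
static int cmp_look(const void *a, const void *b)
{
    const struct io_request *x = a, *y = b;
    return (x->block > y->block) - (x->block < y->block);
}

/* Policy B: soft real-time -- highest priority first, block order ignored. */
static int cmp_priority(const void *a, const void *b)
{
    const struct io_request *x = a, *y = b;
    return y->priority - x->priority;
}

int main(void)
{
    struct io_request queue[] = {
        { 900, 0 }, { 100, 9 }, { 500, 1 }, { 110, 0 },
    };
    /* The queue is one global resource: it can be sorted by exactly one of
     * these rules at a time. Swap the comparator and the other library's
     * guarantee (seek order vs. latency for the priority-9 request) is gone. */
    qsort(queue, 4, sizeof queue[0], cmp_look);   /* or cmp_priority */
    for (int i = 0; i < 4; i++)
        printf("block %lu (priority %d)\n", queue[i].block, queue[i].priority);
    return 0;
}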

This same "mutually exclusive global requirements" problem effects everything, not just IO priorities - CPU time scheduling, power management, all caching, swapping, physical memory management, etc.

The "mutually exclusive global requirements" problem means that, in practice, it's impossible to truly achieve the "different libraries are able to improve performance by doing things differently" theoretical benefit of exo-kernels.

Once you realise that "different libraries are able to improve performance by doing things differently" is essentially impossible (beyond superficial lip service) and shift things like IO priorities, etc. into the kernel; what you're left with is just a minor variation of "monolithic kernel", where different libraries support different "OS personalities", like POSIX, or WSL, or Wine.

In that case you end up with "identical to monolithic in every way possible (just with a different kernel API)", with POSIX and/or Win32 and/or whatever implemented as a library in the same way it has been for most monolithic kernels for as long as I can remember.

Does "monolithic with lower level kernel API" make sense? Yes, it does (and it's always annoyed me a little when beginners assume "POSIX = kernel API"), but this does not make it a different kind of kernel.


Cheers,

Brendan