h0bby1 wrote:
But it's also the thing I sort of like with DOS: no memory protection and no multitasking, so there's this feeling of having the whole computer all for your app, and programming the hardware directly, or via the BIOS, which was more the true OS there.
Urgh, you have low standards
BIOS was no true OS.
The IBM PS/2 ABIOS was an attempt to make BIOS drivers work with multi-tasking OSes, but I think it sank without a trace along with the rest of the PS/2 (except the keyboard/mouse interface).
But DOS gave you so much more than just no memory protection. You could also only easily access ~640K of RAM directly, so it limited your applications to the lower 1MB, unless you inserted a protected mode shim (the DOS extender) to give you access to XMS or unreal mode. With a DOS extender, you've just lost your direct hardware access, and again have something in the way. Might as well go the whole hog and have a proper OS.
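For context on where that 1MB figure comes from: a real mode address is formed as segment * 16 + offset, with 16-bit segments and 16-bit offsets, so the highest address you can form is just over 1MB (the sliver above 1MB being the HMA). A trivial back-of-the-envelope check:

[code]
/* Real mode address arithmetic: segment * 16 + offset with 16-bit values
 * tops out at 0x10FFEF, i.e. 1MB plus the (almost) 64K HMA. */
#include <stdio.h>

int main(void)
{
	unsigned segment = 0xFFFF;
	unsigned offset  = 0xFFFF;
	unsigned physical = segment * 16 + offset;

	printf("highest real mode address: 0x%X (%u bytes)\n",
	       physical, physical + 1);	/* 0x10FFEF */
	return 0;
}
[/code]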
h0bby1 wrote:
A bit like a retro game console with barely any OS at all, where your program is pretty much the only thing running on the machine, with full exclusive access to the hardware/memory/CPU etc. except for a few reserved zones.
That can be done with UNIX as well.
Once you map MMIO into user space, you can run hardware at basically full speed.
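As a rough sketch of what that looks like on Linux (assuming a UIO driver is already bound to the device and exposed as /dev/uio0; the register offset here is purely illustrative):

[code]
/* Map a device's MMIO registers into user space via the Linux UIO framework,
 * then access them with plain loads/stores - no syscall per register access. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/dev/uio0", O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Map the device's first MMIO region (UIO map 0). */
	volatile uint32_t * regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
	                                MAP_SHARED, fd, 0);
	if (regs == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* From here on, a register read is just a memory read. */
	uint32_t status = regs[0x10 / sizeof(uint32_t)];
	printf("status register: 0x%08x\n", status);

	munmap((void *)regs, 4096);
	close(fd);
	return 0;
}
[/code]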
The PlayStation 3/4 at least use a modified BSD as their operating system, and nobody would deny the UNIX-based SGI systems handled their graphics hardware optimally. They were amazing machines amongst their contemporaries.
And my X server or Wayland compositor runs in user space with full hardware acceleration.
h0bby1 wrote:
My plan is more along the lines of something very much like COM at a very low level, to have more runtime guarantees on module interfaces, and garbage collection as well at a very low level, in order to bootstrap some high-level-looking environment with garbage collection, type reflection at least at the module interface level, easy serialisation (JSON etc.), and some sort of construct akin to coroutines/FRP to manage asynchronous code, continuations etc. easily, very early in the kernel.
My OS has something similar to COM. I have interfaces, which can be queried by interface identifier using an interface->query() function. It has garbage collection as well, so I don't need the AddRef and Release parts of IUnknown: object lifetime is decided by reachability from the GC roots.
It's all in C, which is kinda painful with the COM stuff, but the GC removes the whole world of pain that is reference counting COM objects.
Still a WIP, but my OS is here:
http://thewrongchristian.org.uk:8082/wi ... ple+Kernel
The GC:
http://thewrongchristian.org.uk:8082/ar ... 7305e45a43
COM is here:
http://thewrongchristian.org.uk:8082/ar ... ccbb2c27cd
But it's mostly macro based, so its use is dotted about the source. For example, this is my binary search tree implementation of my map_t interface:
http://thewrongchristian.org.uk:8082/ar ... 1daa41a572
It's all a bit of a mess, and especially the COM is not yet documented, as I'm still ironing out the kinks.
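To give a flavour of the shape of it without wading through the macros, here's a stripped down sketch (illustrative names only; the real interfaces are macro generated and the real map_t has rather more methods):

[code]
/* Key ideas: an interface is a struct of function pointers, any interface can
 * be queried for another interface by identifier, and with a GC there is no
 * AddRef/Release - the object lives as long as it's reachable from the roots. */
#include <stdio.h>
#include <string.h>

typedef const char * iid_t;

typedef struct interface interface_t;
struct interface {
	/* Return the requested interface on this object, or NULL. */
	void * (*query)(interface_t * self, iid_t iid);
};

/* An example interface */
#define IID_MAP "map"
typedef struct map {
	interface_t interface;
	const char * (*get)(struct map * self, const char * key);
} map_t;

/* A trivial single-entry implementation of map_t */
typedef struct fixed_map {
	map_t map;
	const char * key;
	const char * value;
} fixed_map_t;

static void * fixed_map_query(interface_t * self, iid_t iid)
{
	if (strcmp(iid, IID_MAP) == 0) {
		return self;
	}
	return NULL;
}

static const char * fixed_map_get(map_t * self, const char * key)
{
	fixed_map_t * fmap = (fixed_map_t *)self;
	return (strcmp(key, fmap->key) == 0) ? fmap->value : NULL;
}

int main(void)
{
	fixed_map_t fmap = {
		.map = { .interface = { fixed_map_query }, .get = fixed_map_get },
		.key = "hello", .value = "world",
	};

	/* Given any interface pointer, ask it for the map interface. */
	interface_t * unknown = &fmap.map.interface;
	map_t * map = unknown->query(unknown, IID_MAP);

	if (map) {
		printf("hello -> %s\n", map->get(map, "hello"));
	}

	return 0;
}
[/code]

The query relies on the interface struct being the first member of the object, so the interface pointer and the object pointer are the same address, and because the GC tracks reachability there's no Release to call on the returned pointer.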
h0bby1 wrote:
And full access to the hardware as much as possible, maybe going the microkernel path with everything in userland, managed by top-level applications, instead of a big monolithic kernel trying to manage everything in a centralised way, segregated away from userland and applications, with full exclusive access to the hardware.
Maybe some kind of sandbox feature to be able to run untrusted code.
UNIX can do all that.
h0bby1 wrote:
But that's the one thing where I still find Windows superior to UNIX: the adoption of a component model at a very low level, which allows different versions of a module on the system, with runtime detection to find the one that implements the desired interface.
That's not a UNIX thing. A UNIX kernel could have a COM based driver interface just fine.
h0bby1 wrote:
I guess on Linux they can get away with this by assuming you are going to compile everything that runs on the system, with access to all the sources, so it's header files + compiler magic that make everything link properly, but almost no runtime checks and poor binary compatibility.
It can also easily allow RPC, kind of like DCOM, to expose module interfaces on the network for distributed systems.
Are you talking about driver binary compatibility? Yes, of course that's not a problem for Linux if you're compiling from source (or your distribution does it for you; I can't remember the last time I ran a self-compiled Linux kernel).
But I've just had to retire a perfectly functional computer for Windows 10, because its GPU doesn't have compatible drivers. Runs fine in Linux though.
And of course, Windows is also a proper OS, so your user space is just as isolated from the hardware as it would be in UNIX.
People (I include myself in that) implement UNIX-like OSes because the system call API is relatively small and simple. "Everything is a file" covers the vast majority of use cases you'd need in application software. Where "everything is a file" doesn't work, you can add non-standard extensions, like Linux has for io_uring. But the typical OS hobbyist is unlikely to need the scalability improvements that API brings to I/O, and is more likely to be happy to run a reasonably complete set of shell commands. And other people have already implemented that user space for you, so those interested more in the kernel (like us) can concentrate on fiddling close to the metal.
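As a concrete illustration of the "everything is a file" point: the exact same handful of calls work whether the thing on the other end is a regular file, a device node or a kernel pseudo-file (the paths below are just examples):

[code]
/* The same open/read/close sequence works on very different kinds of "file". */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void dump16(const char * path)
{
	char buf[16];
	int fd = open(path, O_RDONLY);
	if (fd < 0) {
		perror(path);
		return;
	}

	ssize_t n = read(fd, buf, sizeof(buf));
	printf("%s: read %zd bytes\n", path, n);
	close(fd);
}

int main(void)
{
	dump16("/etc/hostname");	/* regular file */
	dump16("/dev/urandom");		/* character device */
	dump16("/proc/self/status");	/* kernel-provided pseudo-file */
	return 0;
}
[/code]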
Don't make the mistake of confusing what UNIX (as an API and philosophy) provides versus how UNIX (the myriad of projects and vendors) is implemented.
UNIX-like OSes cover everything from the lowest of low-level, single address space, no/minimal tasking embedded systems (stuff like VxWorks) to the vast multi-CPU behemoths from the likes of IBM and Oracle.