Reinventing Unix is not my problem

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
Octocontrabass
Member
Posts: 5501
Joined: Mon Mar 25, 2013 7:01 pm

Re: Reinventing Unix is not my problem

Post by Octocontrabass »

Here's the GCC documentation on function attributes.

There are actually four options (a quick sketch follows the list):
  • Protected, which is basically equivalent to exporting the symbol
  • Hidden, which is basically equivalent to not exporting the symbol
  • Default, which is basically equivalent to exporting and then re-importing the symbol so someone else can override it
  • Internal, which is ABI-specific but usually makes pointers to the symbol invalid outside of your library
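In GCC/Clang these correspond to the visibility function attribute; the function names below are made up, and -fvisibility=hidden flips the per-file default:

Code:
/* Symbol visibility via the GCC/Clang function attribute.
   Function names here are hypothetical. */
__attribute__((visibility("default")))   void api_call(void);   /* exported, overridable (interposable) */
__attribute__((visibility("protected"))) void api_fixed(void);  /* exported, but binds within the library */
__attribute__((visibility("hidden")))    void helper(void);     /* not exported from the shared object */
__attribute__((visibility("internal")))  void local_only(void); /* ABI-specific, stricter than hidden */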
Ethin
Member
Posts: 625
Joined: Sun Jun 23, 2019 5:36 pm
Location: North Dakota, United States

Re: Reinventing Unix is not my problem

Post by Ethin »

nexos wrote:
vvaltchev wrote: I'd be curious about that too! Just, if you aim for speed and simplicity, a monolithic kernel is still the way to go, IMHO.
Yeah, I was going to do a microkernel, until I figured out just how complex the boot process was going to be and how slow it would probably be, so I decided to make a hybrid-esque kernel.
A monolithic kernel will always be the fastest, but has anyone actually written a fully functional microkernel using modern processor features, to benchmark how fast it is? I know Google is doing that, and they've achieved some pretty good results, or so I've heard.
rdos
Member
Posts: 3269
Joined: Wed Oct 01, 2008 1:55 pm

Re: Reinventing Unix is not my problem

Post by rdos »

Korona wrote: In general, ELF is a bit more sane than PE with regards to dynamic linking. (Ever tried to build a DLL that uses a non-C ABI, for example by exposing functions that take complicated C++ classes? It's a mess.)
I don't think it's a good idea to export C++ classes from DLLs. Those should be internal only and referenced through handles or similar. This is a situation where encapsulation is extra important: you don't want DLLs to be dependent on versions, just like you don't want syscalls to be.
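To illustrate the handle style, here is a minimal sketch with a hypothetical widget API (not code from any real project):

Code:
/* widget.h -- a flat C ABI exported from the DLL. The C++ class that
   implements it stays internal; clients only see an opaque handle. */
typedef struct widget widget_t;   /* opaque to callers */

#ifdef _WIN32
#  define WIDGET_API __declspec(dllexport)
#else
#  define WIDGET_API __attribute__((visibility("default")))
#endif

WIDGET_API widget_t *widget_create(int size);
WIDGET_API int       widget_resize(widget_t *w, int size);
WIDGET_API void      widget_destroy(widget_t *w);

Because only plain C types cross the DLL boundary, the internal C++ class can change layout between versions without breaking existing clients.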
Skyz
Posts: 11
Joined: Tue Mar 16, 2021 2:00 pm
Libera.chat IRC: Skyz

Re: Reinventing Unix is not my problem

Post by Skyz »

Every OSdever should pick up "The Elements of Computing Systems" and build a spartan 16-bit computer without using the tools from the wiki. It's a non-proprietary, from-scratch approach that builds a complete computer which plays Tetris and is fully extensible.

Look at this project: they implemented the Hack CPU on an FPGA board, and it runs Tetris. Nothing like Unix, but fully a computer; more of a Game Boy.

Also, 9front is a non-Unix system (a Plan 9 fork), and you can adopt the everything-is-a-file principle from it.
Last edited by Skyz on Mon Jul 19, 2021 2:36 pm, edited 3 times in total.

Plan 9 | ReactOS Hybrid Kernel_Preview
http://woodmann.com/fravia/godwin1.htm
rdos
Member
Posts: 3269
Joined: Wed Oct 01, 2008 1:55 pm

Re: Reinventing Unix is not my problem

Post by rdos »

Skyz wrote: Every OSdever should pick up "The Elements of Computing Systems" and build a spartan 16-bit computer without using the tools from the wiki. It's a non-proprietary, from-scratch approach that builds a complete computer which plays Tetris and is fully extensible.

Look at this project: they implemented the Hack CPU on an FPGA board, and it runs Tetris. Nothing like Unix, but fully a computer; more of a Game Boy.

Also, 9front is a non-Unix system (a Plan 9 fork), and you can adopt the everything-is-a-file principle from it.
If you are using an FPGA anyway, I'd go with Verilog rather than a poor CPU emulator inside the FPGA. Actually, that's what I did when I built my ADC, which collects data at up to 750 Msamples/sec on two channels and streams it in real time to my OS over PCIe. OSes emulated on the FPGA simply cannot handle this.

So, yes, I think OS developers should create hardware devices, preferably using FPGAs, so they understand how the device side of modern PCI devices operates.
kurtmweber
Posts: 10
Joined: Tue Aug 18, 2020 6:55 pm

Re: Reinventing Unix is not my problem

Post by kurtmweber »

zaval wrote:
It influenced the design of all the successful operating systems, including NT.
Given David Cutler's dislike of UNIX, that's hardly more than far-fetched wishful thinking. But yeah, since no one really knows what the "UNIX philosophy" is, apart from banging the keyboard in vim/emacs, repeating the "everything is a file" mantra, and calling the administrator account "root", then with a bit of will one can proclaim even OSs that appeared earlier to have been "influenced" by UNIX.
A lot of the so-called "Unix philosophy" is really just ad-hoc decisions and workarounds to conform to the particular strengths and weaknesses of (a) the PDP-7 that UNIX was born on, and (b) the UNIX codebase itself as Ritchie and Thompson iteratively developed it into successive early versions of UNIX. The fork-exec model is an example of this: it has its origins not in some abstract principle about a better way to spawn new processes; it came about because, when R & T decided to add the ability to arbitrarily create new processes to UNIX, it proved to be a very easy way to graft that onto what they had at that point (indeed, they were able to do it in 27 lines of assembly!). Ritchie also points out that it was hardly original to UNIX: Ken Thompson had already encountered it on the Berkeley Time-Sharing System.
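For readers who haven't met it, the model in its modern POSIX form (C, not the original PDP-7 assembly) is just:

Code:
/* The fork-exec model: duplicate the current process, then replace
   the child's image with a new program. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                  /* clone the calling process */
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                      /* child: become "ls -l" */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                /* only reached if exec fails */
        _exit(127);
    }

    int status;                          /* parent: reap the child */
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}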
eekee
Member
Posts: 872
Joined: Mon May 22, 2017 5:56 am
Location: Kerbin
Discord: eekee

Re: Reinventing Unix is not my problem

Post by eekee »

I mostly agree, but they did have a bit of an obsession with simple interfaces. From everything-is-a-file to Go (which has been criticized for being too simple), it was a theme from the beginning to the end of their work. This simplicity has been opposed by other programmers from as early as the Programmer's Workbench system of 1972 to Go, which was beset by requests for generics from launch. Unix was formerly contrasted with MVS, but over the 90s and 00s Unix became just like MVS, having all the features.
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
MarcoAmara
Posts: 2
Joined: Mon Sep 20, 2021 2:00 am

Re: Reinventing Unix is not my problem

Post by MarcoAmara »

spotracite wrote:After looking through these forums and the Wiki, I've come to the conclusion that "Unix" is a very divisive term among this community. On Side A, we have the Unix haters (myself included) who have decided that, for some reason or another, Unix is an architectural dead-end and should be replaced with more modern technology (most often, Windows NT). On Side B, we have the Unix lovers (again, no offense) who have decided that, for some reason or another, Unix is the pinnacle of operating system architecture and should either be expanded on by 'X' feature or is already perfect. Personally, I believe that Unix isn't great, but isn't terrible - some design choices are very weird, some are downright bad, but in general I think that Unix is a good middle-of-the-road system (especially for something written in just a few months).

A specific quote, however, from 1987 on the Usenet, draws my attention: "Those who do not understand Unix are condemned to reinvent it, poorly."

This quote brings up a major issue (in my opinion) about the OSDev community - most hobby operating systems lean towards the Unix-like variety. Why is this? I think it can be argued that it's easy to understand, the simplicity makes its implementation easy, or perhaps POSIX is just appealing because of how well defined it is. However, I have a different argument for this: hobby operating systems, especially beginner operating systems, skew towards poorly reinventing Unix because that is what we teach. Look across the OSDev Wiki - many articles are written from the perspective of someone who has written a Unix-like operating system. Look at the Forum - many posts here ask for advice on architectural designs, and most of the time, Unix is the answer. Heck, look at the tutorials you can use! We've got Xv6, which is a literal descendant of Unix; we've got Minix, which is Unix with a microkernel; we've got JamesM's kernel development tutorials (Roll your own toy UNIX-clone OS!). My point, dear reader, is that our community teaches new developers to start writing a Unix-like operating system. So, to conclude my main argument, it's not by choice that many programmers write a Unix-like operating system, but simply because it's what's taught. Even if you choose not to, unless you really understand what you're avoiding, you'll end up reusing a lot of Unix material.

Now, you may be thinking "Alright, I see what you mean, but why does this matter?" In all honesty, I don't know. I just felt like posting about it since it seemed like an interesting perspective. I should say that I mean no offense to people writing Unix-like operating systems nor any offense to those who criticize Unix. My take is that Unix has gotten bland and frankly used up in terms of cool things, so my project will get boring quick if I use it. I'd be interested to hear why you did (or didn't) write a Unix-like system, and if you didn't, what ideas did you go for?
For beginners: keep pushing. Don't give up. Code, code, and rest; it's not bad, you deserve it.
MarcoAmara
Posts: 2
Joined: Mon Sep 20, 2021 2:00 am

Re: Reinventing Unix is not my problem

Post by MarcoAmara »

The addition of an ioctl interface for everything that is not file-related is annoying at first, but it's great.
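A classic Linux example: a terminal is a file, but "how big is the window?" doesn't fit read()/write(), so it goes through ioctl():

Code:
/* Query the terminal window size with the TIOCGWINSZ ioctl. */
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    struct winsize ws;
    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == -1) {
        perror("ioctl(TIOCGWINSZ)");
        return 1;
    }
    printf("%u rows x %u cols\n", ws.ws_row, ws.ws_col);
    return 0;
}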
h0bby1
Member
Posts: 240
Joined: Wed Aug 21, 2013 7:08 am

Re: Reinventing Unix is not my problem

Post by h0bby1 »

Hello :)

I'm also not a big fan of Unix systems myself. I learned coding from Amiga people, so that's probably why, and I also wonder why all the focus on Unix-likes.

For me the reason is mostly that Unixes always look very dull, and were made to abstract the hardware as much as possible, alongside C, probably in a network/internet environment where conformity to network protocols was more important than hardware specifics.

I'm always under the impression that as long as what you want is managing network protocols and filesystems, the Unix abstraction works fine; when you need to handle a complex set of devices, it becomes more problematic and the file abstraction doesn't really fit that well.

But it's also the thing I sort of like with DOS: no memory protection and no multitasking, so there's this feeling of having the whole computer to your app, and programming the hardware directly, or via the BIOS, which was more the true OS there.

A bit like retro game consoles with barely any OS at all, where your program is pretty much the only thing running on the machine, with full exclusive access to the hardware/memory/CPU etc. except for a few reserved zones.

My plan is more along the lines of something very much like COM at a very low level, to have more runtime guarantees on module interfaces, with garbage collection also at a very low level, in order to bootstrap a high-level-looking environment with garbage collection, type reflection at least at the module-interface level, easy serialization (JSON etc.), and some sort of construction akin to coroutines/FRP to manage asynchronous code, continuations etc. easily, very early in the kernel.

And full access to the hardware as much as possible, maybe going the microkernel path with everything in userland, managed by top-level applications, instead of a big monolithic kernel trying to manage everything in a centralized way, segregated away from userland and applications with full exclusive access to the hardware.

Maybe some kind of sandboxing feature to be able to run untrusted code.

But the one thing where I still find Windows superior to Unix is the adoption of a component model at a very low level, which allows different versions of a module on the system, with runtime detection to find the one that implements the desired interface.

I guess on Linux they can get away without this by assuming you are going to compile everything that runs on the system, with access to all the sources, so it's header files plus compiler magic that makes everything link properly, but with almost no runtime checks and poor binary compatibility.

It can also easily allow RPC, kind of like DCOM, to expose module interfaces on the network for distributed systems.
thewrongchristian
Member
Posts: 422
Joined: Tue Apr 03, 2018 2:44 am

Re: Reinventing Unix is not my problem

Post by thewrongchristian »

h0bby1 wrote: But it's also the thing I sort of like with DOS: no memory protection and no multitasking, so there's this feeling of having the whole computer to your app, and programming the hardware directly, or via the BIOS, which was more the true OS there.
Urgh, you have low standards :)

BIOS was no true OS.

The IBM PS/2 ABIOS was an attempt to make BIOS drivers work with multi-tasking OSes, but I think it sank without a trace along with the rest of the PS/2 (except the keyboard/mouse interface).

But DOS provided so much more than just no memory protection. You could also only easily access ~640K of RAM directly, so it limited your applications to the lower 1MB unless you inserted a protected-mode shim (a DOS extender) to give you access to XMS or unreal mode. And with a DOS extender, you've just lost your direct hardware access and again have something in the way. Might as well go the whole hog and have a proper OS.
h0bby1 wrote: A bit like retro game consoles with barely any OS at all, where your program is pretty much the only thing running on the machine, with full exclusive access to the hardware/memory/CPU etc. except for a few reserved zones.
That can be done with UNIX as well.

Once you map MMIO into user space, you can run hardware at basically full speed.
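On Linux, for example, a user-space driver can map a PCI BAR exposed through sysfs (a sketch only; the device address below is made up):

Code:
/* Map the first BAR of a PCI device into user space and poke it
   directly, with no syscall per register access. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *bar = "/sys/bus/pci/devices/0000:03:00.0/resource0";
    int fd = open(bar, O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 4096;                   /* map one page of the BAR */
    volatile unsigned int *regs =
        mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) { perror("mmap"); return 1; }

    printf("first register: 0x%08x\n", regs[0]);  /* plain MMIO read */

    munmap((void *)regs, len);
    close(fd);
    return 0;
}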

The PlayStation 3/4 at least use a modified BSD as their operating system, and nobody would deny that the UNIX-based SGI systems handled their GFX hardware optimally. They were amazing machines amongst their contemporaries.

And my X server or Wayland runs in user space with full hardware acceleration.
h0bby1 wrote: My plan is more along the lines of something very much like COM at a very low level, to have more runtime guarantees on module interfaces, with garbage collection also at a very low level, in order to bootstrap a high-level-looking environment with garbage collection, type reflection at least at the module-interface level, easy serialization (JSON etc.), and some sort of construction akin to coroutines/FRP to manage asynchronous code, continuations etc. easily, very early in the kernel.
My OS has something similar to COM. I have interfaces, which can be queried by interface identifier using an interface->query() function. It has garbage collection as well, so I don't need the AddRef and Release parts of IUnknown; object life cycle is decided by reachability from the GC roots.

It's all in C, which is kinda painful with the COM stuff, but the GC removes the whole world of pain that is reference counting COM objects.

Still a WIP, but my OS is here:

http://thewrongchristian.org.uk:8082/wi ... ple+Kernel

The GC:
http://thewrongchristian.org.uk:8082/ar ... 7305e45a43

COM is here:
http://thewrongchristian.org.uk:8082/ar ... ccbb2c27cd

But it's mostly macro-based, so its use is dotted about the source. For example, this is my binary search tree implementation of my map_t interface:
http://thewrongchristian.org.uk:8082/ar ... 1daa41a572

It's all a bit of a mess, and especially the COM is not yet documented, as I'm still ironing out the kinks.
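For anyone unfamiliar with the pattern, here is a stripped-down sketch of such an interface-query scheme in C (the names are hypothetical; the real code is in the links above):

Code:
/* Every interface starts with query(), so an object can be asked at
   runtime for any other interface it implements; with GC there is no
   AddRef/Release. All names here are made up for illustration. */
#include <stddef.h>
#include <string.h>

typedef struct interface interface_t;
struct interface {
    interface_t *(*query)(interface_t *self, const char *iid);
};

typedef struct map map_t;
struct map {
    interface_t iface;                   /* embedded base interface */
    void *(*get)(map_t *self, const char *key);
    void  (*put)(map_t *self, const char *key, void *value);
};

/* A concrete type (say, a binary search tree) answers the query by
   returning its embedded interface, or NULL if it is unsupported. */
static interface_t *bst_query(interface_t *self, const char *iid)
{
    return strcmp(iid, "map") == 0 ? self : NULL;
}

Callers then do obj->query(obj, "map") and cast the result to map_t* without ever knowing the concrete type.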
h0bby1 wrote: And full access to the hardware as much as possible, maybe going the microkernel path with everything in userland, managed by top-level applications, instead of a big monolithic kernel trying to manage everything in a centralized way, segregated away from userland and applications with full exclusive access to the hardware.

Maybe some kind of sandboxing feature to be able to run untrusted code.
UNIX can do all that.
h0bby1 wrote: But the one thing where I still find Windows superior to Unix is the adoption of a component model at a very low level, which allows different versions of a module on the system, with runtime detection to find the one that implements the desired interface.
That's not a UNIX thing. A UNIX kernel could have a COM-based driver interface just fine.
h0bby1 wrote: I guess on Linux they can get away without this by assuming you are going to compile everything that runs on the system, with access to all the sources, so it's header files plus compiler magic that makes everything link properly, but with almost no runtime checks and poor binary compatibility.

It can also easily allow RPC, kind of like DCOM, to expose module interfaces on the network for distributed systems.
Are you talking about driver binary compatibility? Yes, of course that's not a problem for Linux if you're compiling from source (or your distribution does it for you; I can't remember the last time I ran a self-compiled Linux kernel).

But I've just had to retire a perfectly functional computer from Windows 10 duty, because its GPU doesn't have compatible drivers. It runs fine in Linux though.

And of course, Windows is also a proper OS, so your user space is just as isolated from the hardware as it would be in UNIX.

People (I include myself in that) implement UNIX-like OSes because the system call API is relatively small and simple. Everything is a file covers the vast majority of use cases you'd need in application software. Where everything is a file doesn't work, you can add non-standard extensions, like Linux has for io_uring. But the typical OS hobbyist is unlikely to need to support the scalability improvements that API brings in I/O, and is more likely to be happy to run a reasonably complete set of shell commands. And other people have already implemented that user space for you, so those interested more in the kernel (like us) can concentrate on fiddling close to the metal.
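As a small illustration (Linux paths assumed), the same handful of calls covers regular files, device nodes, and kernel-synthesized files alike:

Code:
/* "Everything is a file": one code path, three very different objects. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void dump4(const char *path)
{
    unsigned char buf[4];
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror(path); return; }
    if (read(fd, buf, sizeof buf) == (ssize_t)sizeof buf)
        printf("%s: %02x %02x %02x %02x\n",
               path, buf[0], buf[1], buf[2], buf[3]);
    close(fd);
}

int main(void)
{
    dump4("/etc/hostname");      /* regular file on disk */
    dump4("/dev/urandom");       /* character device */
    dump4("/proc/self/stat");    /* file synthesized by the kernel */
    return 0;
}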

Don't make the mistake of confusing what UNIX (as an API and philosophy) provides versus how UNIX (the myriad of projects and vendors) is implemented.

UNIX-like OSes cover everything from the lowest of low-level, single-address-space, no/minimal-tasking embedded systems (stuff like VxWorks) to the vast multi-CPU behemoths from the likes of IBM and Oracle.
h0bby1
Member
Posts: 240
Joined: Wed Aug 21, 2013 7:08 am

Re: Reinventing Unix is not my problem

Post by h0bby1 »

Yes, Unix also had CORBA. From the little I used of SunOS, you could also have different subtrees with different versions of the system, a bit more "meta" than what you have on Linux.

And yes, I guess you could also expose a number of things to userspace with a Unix, but that wouldn't be very Unix-like, no?

I ended up developing my own binary format that can be exported from ELF/DLL and has dynamic linking, and I could easily add something like Haskell dictionary compilation to attach metadata to exported symbols, to have COM-like features directly in the export table.

It can already link freestanding binaries made with GCC or Visual Studio, which is something I wanted in order to avoid trouble with compiler versions and to get a high degree of binary compatibility and runtime checking. I still need to add a real interface system, but I'm contemplating different options for formulating the interface/inheritance/module system.

Maybe ML-style, Emerald, Haskell dictionaries, or something with an advanced type system that can deal well with distributed systems.

I don't like COM exactly as it is, but something in that spirit, with runtime type reflection on interfaces for safe binary linking; a more functional style of typing would look more appealing to me, especially if it can also work with coroutines/FRP.

The goal is to be able to create chains of asynchronous components, potentially chained directly down from hardware IRQs.

I'm not even sure if COM has events or such, but I think CORBA has them.

Something that could export an interrupt endpoint so another component can be chained onto it through a standard interface.