Is OS development under threat of extinction?
This might sound preposterous, but hear me out.
Currently, all the wiki content and various OS tutorials only take you to a certain level of OS development. Past that, it’s basically mandatory that you learn how to RTFM. This isn’t a problem in itself, though.
The current trend in the OS development community makes it seem like the field is rapidly expanding. However, that is simply quantity over quality.
Exhibit A: as of posting, the last post on the “OS Design and Theory” forum was a month ago. The latest post on the “OS Development” forum (where people usually ask for help) was today. Guess what the topic of the post was? It’s an easy guess: paging.
Now obviously, I am not innocent in this regard at all. A couple years ago, I was no better than anyone else who asks for PAE help on this forum.
Why is this a problem? It seems that, day by day, the OS development scene is flooded with more novices than it can accommodate. Most of the senior OS developer types, the “I was there when FSF was born” types, will in the next ten years either retire, die, or cognitively decline.
These senior OS developers have been carrying the entire hobbyist scene on their backs with virtually no compensation or recognition.
The typical OS developer these days follows a typical routine:
1. Do the Bare Bones tutorial.
2. Install Limine.
3. Struggle with basic CPU and I/O configuration.
4. If they make it past that hurdle, write a scheduler, copying UNIX kernel designs.
5. Maybe port a libc.
6. Maybe port GCC.
7. Sit back and do nothing.
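And the sad part is how little code that whole arc represents. As a rough illustration (not anyone's actual project, just the shape of step 1): the entire Bare Bones stage boils down to a freestanding entry point like this, assuming a Multiboot-style loader has already set up 32-bit protected mode and the legacy VGA text buffer sits at 0xB8000.

#include <stdint.h>
#include <stddef.h>

/* The whole "tutorial kernel": print a string via the VGA text buffer. */
void kernel_main(void)
{
    volatile uint16_t *vga = (uint16_t *)0xB8000;
    const char *msg = "Hello, kernel world!";

    for (size_t i = 0; msg[i] != '\0'; i++)
        vga[i] = (uint16_t)msg[i] | (0x07 << 8); /* light grey on black */

    for (;;)
        __asm__ volatile ("hlt"); /* nothing left to do; idle forever */
}

Cross-compile that with a freestanding toolchain, stick a Multiboot header in front, and steps 1 and 2 are done. The treadmill starts at step 3.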
This is why Terry Davis, for all his faults, deserves recognition as a savant OS developer. He wrote the HolyC compiler, then wrote TempleOS completely from scratch (no bootloader or libc), and not only did he do it all alone, but he came up with a unique (if limited) design that was actually reasonably sensible.
My main point is that all of the extremely experienced OS developers are fading away, replaced by a new generation of socially incompetent, narcissistic teenagers who do not actually fully understand the concepts of OS development and will likely not be competent enough to keep the OS development community alive and healthy.
I do not intend to argue this from a position of superiority. I simply argue this from a perspective that I perceive to be full of clarity. I am definitely no senior OS developer, and none of my work measures up to what most moderators on these forums have accomplished. Yet, I worry that, in due time, the knowledge and wisdom required to become an OS developer with the same skill level as a London cab driver will simply disappear alongside the prolifically technical minds who harbor them.
Thoughts?
Re: Is OS development under threat of extinction?
What you said is not preposterous at all and I've been thinking about the exact same thing since the end of the 90s, when I started realizing that architectures are becoming more complex and documentation on the hardware more closed with each cycle. The level of complexity of a computer nowadays, even a small one like a mobile phone, is staggering. Things have evolved, not always for the best. Besides, a lot of the complexity nowadays stems from a (sometimes pathological) need for backwards compatibility. ARM didn't make things easier, as the architecture has adopted (and adapted) many of the decisions (good and bad) of other architectures. Which, in part, makes sense, but it also has drawbacks. The x86-64, which appeared around 2002, if I remember correctly, also didn't make things easier; it just added more complexity on top of an already complex device, and it's all been downhill from there.
Writing a bootloader in under 15 minutes: https://www.youtube.com/watch?v=0E0FKjvTA0M
Re: Is OS development under threat of extinction?
BigBuda wrote:
What you said is not preposterous at all and I've been thinking about the exact same thing since the end of the 90s, when I started realizing that architectures are becoming more complex and documentation on the hardware more closed with each cycle.

A possible lack of documentation, if it becomes reality, would put all OS development expertise in the hands of a select corporate few, ultimately autocratizing all computing for good. That really scares me.
Skylight: https://github.com/austanss/skylight
I make stupid mistakes and my vision is terrible. Not a good combination.
NOTE: Never respond to my posts with "it's too hard".
Re: Is OS development under threat of extinction?
austanss wrote:
A possible lack of documentation, if it becomes reality, would put all OS development expertise in the hands of a select corporate few, ultimately autocratizing all computing for good. That really scares me.

Which is exactly what we've been seeing for a long time. The time to be scared of that future is over. Now is the time to panic, because it has become reality. Fortunately, there are still some ingenious people out there able to somehow reverse engineer a lot. I can't be sure, I don't have statistics or even anecdotal evidence, but I have a deep chilling feeling that most of what is considered public knowledge nowadays was acquired this way.
Writing a bootloader in under 15 minutes: https://www.youtube.com/watch?v=0E0FKjvTA0M
Re: Is OS development under threat of extinction?
I stopped panicking years ago.
Why? Because I kept finding surprising examples of openness. I also have doubts that internally-divided, competitive organizations can maintain authoritarianism for long. In business particularly, authoritarianism is quick to evaporate when it gets in the way of profit. Consider Apple's changes over the years: switching from entirely closed firmware to OpenFirmware, adopting PCI and PCIe, and even switching to a nearly-standard PC platform. Note too the many contributions to open source by the worst of authoritarian companies -- Apple and Microsoft. I've also seen reverse engineering increase the more closed hardware appears.
I'm too short on sleep to write anything in-depth, but I think there are two issues facing us.
1) Rising complexity. This seems to be an inescapable part of increasing power, but is it? And how much power do you need for your OS, anyway? I remember Linux doing everything I wanted with 64MB RAM and a 120MHz Pentium Pro. I can certainly get quite a lot of useful work out of that hardware or even much less, and I can both innovate and enjoy my hobby. This is practically embedded-hardware level these days, and embedded hardware has to be documented for the developers. Of course, I've wanted more since that old Pentium; specifically 3D and video, and I've wished for 4k resolution. The situation with these features is less encouraging, but some video codecs are better than others and hardware support is possible.
2) The glacial but inevitable demise of the PC. This one might not even be real, in the sense that the situation the PC created was never as good as it seemed. The very compatibility we blame for complexity has enabled a bloom of OS development, as much more has been written about the PC than any other platform. Nearly all new OSs get written for the PC platform, but then we find they run only in a handful of emulators and maybe 1 or 2 very specific laptops. The PC's openness has enabled a profusion of peripherals which need different drivers, but even with as central a component as the BIOS, "Booting on one computer is easy; booting on many computers is hard." UEFI doesn't seem to be much better in that regard.
Also remember that the platform which is, for real, emerging as the "one true OS" is POSIX, which is not owned by any one company.
On a related note, how many of us have full control of our Linux systems and still have time for OS development? I've resigned myself to maintaining "ugly boxes" with a black-box OS -- Linux counts as black-box in this context just as much as Windows and Android -- while developing and running more accessible and interesting systems on other hardware and emulators. The ugly boxes can handle all the 3D, video, and most of the high-quality font rendering I need.
I want to target hardware which users can readily and affordably obtain in consistent form. I'm not yet sure which hardware, from boards marketed to hobbyists to a custom PCB supporting one of those ARM+FPGA SoCs. 9front devs made their "aijuboard" around a Xilinx ARM+FPGA SoC which supported SATA, framebuffer graphics, real ethernet and of course USB, and that was 9 years ago now.
tl;dr: We can't have everything we wish for, but the situation's not all bad. There is plenty of opportunity for OS dev.

Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
Re: Is OS development under threat of extinction?
Published on Youtube while I was typing all that:
Corporate Open Source is Dead
I think my point stands, but I thought I ought to link this. Haven't watched it; too tired, but Jeff Geerling seems like a sensible guy.

video description wrote:
Nobody likes being rugpulled. But lately, it's going around like a virus. Why are so many former open source darlings selling out or relicensing? And is there anything you can do to fight back against these anti-open-source practices?
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
Re: Is OS development under threat of extinction?
eekee wrote:
Published on Youtube while I was typing all that:
Corporate Open Source is Dead

I have never heard of this rugpull phenomenon before. Pro tip, guys: never sign a CLA.
Re: Is OS development under threat of extinction?
Oddly enough I was just thinking on similar lines.
In an effort to keep my aging brain sharp and to stave off the winter blahs, I thought I would take a stab at writing a simple OS for x86. Nothing fancy, just something to waste time with.
I've been a programmer for going on 50 years now. I started out on mainframes back when it was possible for a single individual to actually understand pretty much everything there was to know about the system they were working on. I've since written 4 emulators for old Univac mainframes. I even wrote a simple operating system for one of them.
The thing that strikes me most about trying to write anything on bare metal for the x86 is the sheer complexity of it all. Coding for mainframes was simple by comparison. On mainframes much of the complexity of interfacing to peripherals was hidden in the hardware. To do I/O to any device, all you had to do was set up a couple of words in RAM, execute the start I/O instruction and sit back and wait for the interrupt. Once you got the interrupt the I/O was done, either with or without error and the data was sitting there in memory. No polling of devices, no checking of status registers, no acknowledging interrupts. Easy peasy. Granted you had to know what kind of disk drive you were talking to so that you knew how many cylinders/heads/records there were but other than that it was all the same.
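To make the contrast concrete, here is roughly what that flow looked like, as a C caricature -- not any real machine's channel format, and every name here is made up:

#include <stdint.h>

#define IO_READ 1

/* The "couple of words in RAM" describing the transfer. */
struct io_command {
    uint16_t function; /* IO_READ, IO_WRITE, ... */
    uint16_t device;   /* which peripheral */
    uint32_t block;    /* which record/sector */
    void    *buffer;   /* where the data should land */
    uint32_t count;    /* bytes to transfer */
};

extern void start_io(struct io_command *cmd);    /* one instruction on real iron */
extern void wait_for_interrupt(uint16_t device); /* sleep until the channel is done */

void read_block(uint16_t device, uint32_t block, void *buffer)
{
    struct io_command cmd = { IO_READ, device, block, buffer, 512 };
    start_io(&cmd);             /* kick off the transfer */
    wait_for_interrupt(device); /* when this returns, the data is in memory */
}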
On the other hand, writing a disk driver for ATA devices seems to be an exercise in frustration. PATA is easy enough, if inefficient, but get beyond that and it seems to be a lot of complexity just for the sake of complexity.
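For comparison, even the "easy" PATA path looks like this -- a 28-bit LBA PIO read of a single sector from the primary bus, with outb/inb/insw assumed as port-I/O helpers. And this is the rosy version: a real driver also needs IDENTIFY, timeouts, DMA, and per-controller quirks, which is where the real frustration lives.

#include <stdint.h>

extern void outb(uint16_t port, uint8_t val);
extern uint8_t inb(uint16_t port);
extern void insw(uint16_t port, void *buf, uint32_t count);

/* Read one 512-byte sector from the primary-bus master drive. */
int ata_pio_read_sector(uint32_t lba, void *buf)
{
    outb(0x1F6, 0xE0 | ((lba >> 24) & 0x0F)); /* drive 0, LBA mode, LBA bits 24-27 */
    outb(0x1F2, 1);                           /* sector count = 1 */
    outb(0x1F3, lba & 0xFF);                  /* LBA bits 0-7 */
    outb(0x1F4, (lba >> 8) & 0xFF);           /* LBA bits 8-15 */
    outb(0x1F5, (lba >> 16) & 0xFF);          /* LBA bits 16-23 */
    outb(0x1F7, 0x20);                        /* READ SECTORS command */

    uint8_t status;
    do {                                      /* poll: wait for BSY clear, DRQ set */
        status = inb(0x1F7);
        if (status & 0x01)
            return -1;                        /* ERR bit: bail out */
    } while ((status & 0x80) || !(status & 0x08));

    insw(0x1F0, buf, 256);                    /* 256 words = 512 bytes */
    return 0;
}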
Back in the mainframe days, hardware was expensive and programmers were cheap. Now it's the other way around. One wonders why the complexity that programmers have to deal with today can't be hidden in the hardware, the way it used to be. If hardware standards were clearly and concisely written and openly published, there would be no reason why hardware manufacturers couldn't create devices that just dropped into existing hardware chassis and worked with existing OSes, without the need for a driver for every different brand of disk drive or whatever.
Just thinking out loud.
Re: Is OS development under threat of extinction?
sboydlns wrote: ↑Wed Feb 19, 2025 9:19 am
The thing that strikes me most about trying to write anything on bare metal for the x86 is the sheer complexity of it all. Coding for mainframes was simple by comparison. On mainframes much of the complexity of interfacing to peripherals was hidden in the hardware. To do I/O to any device, all you had to do was set up a couple of words in RAM, execute the start I/O instruction and sit back and wait for the interrupt. Once you got the interrupt the I/O was done, either with or without error and the data was sitting there in memory. No polling of devices, no checking of status registers, no acknowledging interrupts. Easy peasy. Granted you had to know what kind of disk drive you were talking to so that you knew how many cylinders/heads/records there were but other than that it was all the same.

Very nice! But the later PC BIOSes offer a very similar feature: set some registers and execute the right INT instructions, and when it returns, the data is sitting there in memory. It's a bit sad you can't do anything else while it's running, but that's probably because of the PC's origins as a cost-reduced system -- the code which the BIOS runs on a PC runs instead on an auxiliary processor on a mainframe. (Arguably, IBM tried too hard to keep the price down.)
Atari's 8-bit home(!) computers had a nicer design which launched in 1979, but their universal serial bus meant every peripheral had to have its own microcontroller (except the cassette). They never did manage to get the costs of peripherals down. PCs didn't get hard drives with on-board microcontrollers for another 7 years. (Interesting that it wasn't IBM who developed them.)
I used to have all this stuff to say about simplicity and why complexity exists, but I can't seem to muster the motivation to write it up today. It's a long rant anyway, and not very satisfying. The worst part is that sometimes the reasons for complexity are good ones. The second-worst part is that sometimes they're bad reasons but they're gonna happen anyway.
The other day, I read about the first 2 Unix ports from people involved with them. The reason both teams had for the ports was that in those days, applications weren't too hard to write and hardware was easy to interface, (very easy, as you just explained,) but the operating systems in between were very difficult to interface to; very complex. There was no science behind this; they were complex for bad reasons. It was especially hard to port programs between OSs. Unix was a rare simple-but-useful OS, and so 2 teams, one at Bell Labs and one in Wollongong in Australia, separately started porting the operating system itself.
Trivia: When Richard Miller (of the Wollongong port) retired, the first thing he did was take Thompson and Ritchie's later OS, Plan 9, and port it to the Raspberry Pi. I love this little fact.

I just realised you can look at all of Plan 9 as a result of the same reasons which led to the Unix port. Portability was particularly desirable because large organizations wanted to run the same software at different locations where they had different computers. Plan 9 is designed to take this to an extreme. You have central file and authentication servers, the filesystem is organized with binaries for all the different architectures you might find in 90s workstations and servers, and users get their own environment no matter which workstation they log in on. And it's all very simply organized.
The flip side to Plan 9 is that writing a virtual file server is not simple. For an OS where everything is virtual files, this is a big flaw. I don't think that was intentional, but the obfuscation of the authentication system apparently was.
I have an idea for an OS which requires a multi-core PC, using 1 core only to interface with the BIOS so it's much like your mainframe hard disk. Knowing PC architecture however, it'll find some way to ruin my plans! XD I have mentioned it on these forums. Someone (probably Octocontrabass) told me what would go wrong but I can't remember what it was. Headache today.
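Very roughly, the shape I have in mind -- every name here is hypothetical, just to show what dedicating a core means: the other cores post requests into a shared-memory mailbox and wait, while the I/O core loops, does the actual (BIOS) work, and flags completion.

#include <stdint.h>
#include <stddef.h>
#include <stdatomic.h>

struct io_request {
    uint32_t lba;    /* disk block to read */
    void    *buffer; /* where the I/O core should put the data */
    atomic_int done; /* I/O core sets this to 1 when finished */
};

/* Single-slot mailbox for simplicity; a real design would want a queue. */
static struct io_request *_Atomic mailbox;

/* Called by the application cores. */
void submit_and_wait(struct io_request *req)
{
    atomic_store(&req->done, 0);
    struct io_request *expected = NULL;
    while (!atomic_compare_exchange_weak(&mailbox, &expected, req))
        expected = NULL;          /* slot busy: retry until it empties */
    while (!atomic_load(&req->done))
        ;                         /* spin; a real OS would block or halt */
}

/* Main loop of the dedicated I/O core -- the only core that ever
   drops back to talk to the BIOS. */
void io_core_loop(void)
{
    for (;;) {
        struct io_request *req = atomic_exchange(&mailbox, NULL);
        if (!req)
            continue;
        /* ...INT 13h-style disk read into req->buffer would go here... */
        atomic_store(&req->done, 1);
    }
}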
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
Re: Is OS development under threat of extinction?
eekee wrote:
Atari's 8-bit home(!) computers had a nicer design which launched in 1979, but their universal serial bus meant every peripheral had to have its own microcontroller (except the cassette). They never did manage to get the costs of peripherals down. PCs didn't get hard drives with on-board microcontrollers for another 7 years. (Interesting that it wasn't IBM who developed them.)

Commodore had a similar interface for the VIC-20 and C64. And, yeah, the peripherals were expensive. The external floppy cost about the same as the system unit.
eekee wrote:
I used to have all this stuff to say about simplicity and why complexity exists, but I can't seem to muster the motivation to write it up today. It's a long rant anyway, and not very satisfying. The worst part is that sometimes the reasons for complexity are good ones. The second-worst part is that sometimes they're bad reasons but they're gonna happen anyway.

Over the years I have usually found that when the complexity of something is starting to run away, it is because I made a bad decision somewhere along the way, and it is worth going back and rethinking my assumptions. The other thing I often found was that something designed by committee was often more complex than it needed to be, because everyone had to add their own $0.02 to the design. Having said that, sometimes complexity is unavoidable.
eekee wrote:
I have an idea for an OS which requires a multi-core PC, using 1 core only to interface with the BIOS so it's much like your mainframe hard disk. Knowing PC architecture however, it'll find some way to ruin my plans! XD I have mentioned it on these forums. Someone (probably Octocontrabass) told me what would go wrong but I can't remember what it was. Headache today.

When Unisys made the decision to stop making their own chips for their 2200-series mainframes and run everything in emulation on commodity hardware, they took a somewhat similar approach. Each 2200 "mainframe" now consists of 3 separate servers: one running the OS on an emulated CPU, one handling I/O, and one for the console and other support functions. It works well, but apparently it took a team of engineers about a decade to make it work fast enough to be better than custom-designed chips. That would have been an interesting project to work on.
I read an interesting article about that project once but I can't find it again.