Objects, rings/privileges, modules, IPC, etc.
Posted: Wed Jan 02, 2008 1:57 pm
Hi all, newbie here!
I've just started some hobby OS development, and have always had a keen interest in how the low-level stuff works. So it's nice to see there's a wealth of information now available as to how to create a basic kernel, and the tools (in my case, gcc/g++, GRUB, vmware...) exist to create and boot a kernel with less pain than used to be involved.
So, I've got my "Hello World" kernel: it happily loads at 0xC0000000 with paging enabled, has a simple GDT loaded (0 = null, 1 = code, 2 = data, the latter two spanning 4 GB), and has the IDT set up and filled with 32 ISRs, so it can handle interrupts...
What now?
Well, I've rewritten what I have so far in a few different ways, finally settling on C++ with some ASM to bring it all to life. I'm keen on object-oriented coding, so I'm aiming to make everything object-oriented.
The more trivial objects are intended to live in headers so they compile inline. This includes things like "Port", a simple wrapper around the port I/O instructions (via inline ASM) that takes a port number in its constructor. It ought to be fairly speedy (no slower than calling a C function from another module).
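To give an idea, the Port wrapper is roughly this (a minimal sketch; the names are just what I happened to pick):

[code]
#include <stdint.h>

// Thin wrapper around the IN/OUT instructions. Because everything lives
// in the header, the constructor and calls should inline away entirely.
class Port {
public:
    explicit Port(uint16_t number) : number_(number) {}

    // Write a byte to the port (compiles down to a single OUT instruction).
    void write(uint8_t value) const {
        asm volatile("outb %0, %1" : : "a"(value), "Nd"(number_));
    }

    // Read a byte from the port (a single IN instruction).
    uint8_t read() const {
        uint8_t value;
        asm volatile("inb %1, %0" : "=a"(value) : "Nd"(number_));
        return value;
    }

private:
    uint16_t number_;
};
[/code]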
This idea can also be extended to the PICs, which could be wrapped in a class that takes the base address of the PIC and lets you program it. Of course, there'd be two of these, to cater for the master and the slave.
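Sketched out, it might look like this (the remap() values follow the standard 8259A initialisation sequence, ICW1 through ICW4; the exact interface is just a guess at this point):

[code]
// Wraps one 8259A PIC. Construct with base 0x20 for the master or 0xA0
// for the slave; the data register sits one port above the command register.
class Pic {
public:
    explicit Pic(uint16_t base) : command_(base), data_(base + 1) {}

    // Remap this PIC's IRQs to start at 'offset' in the IDT.
    // 'cascade' is ICW3: 0x04 for the master (slave wired to IRQ2),
    // 0x02 for the slave (its cascade identity).
    void remap(uint8_t offset, uint8_t cascade) const {
        command_.write(0x11);   // ICW1: begin init, expect ICW4
        data_.write(offset);    // ICW2: IDT vector offset
        data_.write(cascade);   // ICW3: master/slave wiring
        data_.write(0x01);      // ICW4: 8086/88 mode
    }

    // Signal end-of-interrupt so the PIC will deliver further IRQs.
    void sendEoi() const { command_.write(0x20); }

private:
    Port command_;
    Port data_;
};
[/code]

Then something like master.remap(0x20, 0x04) and slave.remap(0x28, 0x02) would move the IRQs clear of the CPU exception vectors.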
(Though I gave up trying to make the GDT, IDT, and even the CPU and memory into objects... they work better as namespaces!)
Anyway, what I'm mainly curious about at this stage is what the different CPU rings/privilege levels can be used for. My understanding is that you can put the most critical code in ring 0, things like drivers and the GUI in rings 1 and 2, and standard apps in ring 3.
A lot of operating systems seem to be designed to be portable. Presumably, by creating a kernel tailored to one particular architecture, you a) can take advantage of non-portable instructions and features, and b) don't have to write so many drivers?
I'm also fairly interested in this microkernel stuff, although I hear they don't perform as well as a monolithic kernel. What's the situation with this?
My main reason for liking the idea of microkernels is the modular aspect, though my understanding so far is that you need some kind of mechanism for communicating between the modules. Could this be implemented as a set of "stubs", whereby an application calls on another module and the request gets automatically piped through to the appropriate place, called as if the app were calling it directly?
Or, I guess each module could be treated as an object, and have instances created... In the case of streams, each instance might have some kind of identifier (network socket, file descriptor, etc.) associated with it, some operations/actions (receive/send) and some memory shared between the "driver" and the application.
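To make that concrete, here's the sort of thing I'm picturing (entirely hypothetical: the Message layout and the send_message() primitive are made up):

[code]
#include <stdint.h>
#include <stddef.h>

// Hypothetical message format: which module owns the resource, which
// instance (socket, file descriptor...) it refers to, and what to do.
struct Message {
    uint32_t module;
    uint32_t handle;
    uint32_t operation;   // e.g. OP_SEND, OP_RECEIVE
    void*    buffer;      // payload, ideally in memory shared with the driver
    size_t   length;
};

enum Operation { OP_SEND = 1, OP_RECEIVE = 2 };

// Placeholder for the kernel IPC primitive; a real one would trap into
// the kernel, deliver the message, and block until the module replies.
static int send_message(const Message&) { return 0; }

// The stub: to the application it looks like a direct interface, but
// every call gets packaged up and piped to the owning module.
class StreamStub {
public:
    StreamStub(uint32_t module, uint32_t handle)
        : module_(module), handle_(handle) {}

    int send(void* buf, size_t len) const {
        Message msg = { module_, handle_, OP_SEND, buf, len };
        return send_message(msg);
    }

    int receive(void* buf, size_t len) const {
        Message msg = { module_, handle_, OP_RECEIVE, buf, len };
        return send_message(msg);
    }

private:
    uint32_t module_;
    uint32_t handle_;
};
[/code]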
I'm guessing that sharing memory between two processes would involve some tweaking of the GDT? I don't know... enlighten me!
As for process scheduling... is it possible/wise to adjust the PIT so that the tick rate varies based on the priority of the running process? Obviously I'd need to compensate for any internal counters that are based on time, and it may be a stupid idea... Alternatively, can the tick rate be adjusted in real time at all?
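For what it's worth, reprogramming the PIT's rate looks simple enough using the Port wrapper from above (0x36 selects channel 0, lobyte/hibyte access, mode 3, per the 8253/8254 datasheet); whether varying it per process is sane is another question:

[code]
// Set the PIT channel 0 tick rate. 1193182 Hz is the PIT's input clock.
void setTickRate(uint32_t hz) {
    Port command(0x43);
    Port channel0(0x40);

    uint16_t divisor = static_cast<uint16_t>(1193182 / hz);
    command.write(0x36);                   // channel 0, lobyte/hibyte, mode 3
    channel0.write(divisor & 0xFF);        // low byte of divisor
    channel0.write((divisor >> 8) & 0xFF); // high byte
}
[/code]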
Finally, what sort of issues do I need to be aware of in the case of SMP and the APIC? My understanding is that recent systems have something like 32 IRQs... and would interrupts be occurring all over the place on both processors?
Forgive me for being totally clueless with regard to the possibilities, I'm just curious as to what can be achieved...