Hi,
Ready4Dis wrote:Installing an OS is no problem at all, even from a CD. Since the bootable image is treated as a floppy disk (well, similar), you have access to any files on that drive. From there you simply load whatever other drivers you want, i don't see how that is any different than how loading it another way would go. Generic booting is just as easy this way, so I don't see the argument there.
You're right - I forgot this would only be used while you're still in real mode (and you'd have real mode file system code during boot and separate protected mode file system code used after boot).
Ready4Dis wrote:I see no point in having more than 1 kernel, my kernel doesn't do much really, so design differences are key here I think. It wouldn't take much at all for my boot loader to be able to load more than 1 kernel, just store a list of kernel locations, and load whichever is deemed 'best'. Pretty basic. However, as said, my kernel is pretty minimal, my user apps and drivers do most of the work.
For me, the boot code detects some things (CPU features, physical memory, NUMA domains, etc) and then decides what is needed. For example, it can check if all CPUs support long mode and then use the 64-bit linear memory manager. If long mode isn't supported it can then check if 36-bit addressing is needed (and if PAE is supported) and then use the "36-bit" linear memory manager. Otherwise it'd use the 32-bit "plain paging" linear memory manager.
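As a rough sketch of what that selection step might look like (the enum/function names and the total_physical_memory parameter are made up for illustration; the feature checks use GCC/Clang's <cpuid.h>; and a real version would repeat the long mode check on every CPU, not just the boot CPU, before choosing the 64-bit memory manager):

Code:
/* Hedged sketch: decide which linear memory manager module to use at boot.
 * Names are hypothetical; CPUID checks are done via GCC/Clang's <cpuid.h>. */
#include <stdint.h>
#include <cpuid.h>

enum mm_module { MM_PLAIN_32BIT, MM_PAE_36BIT, MM_LONG_MODE_64BIT };

static int cpu_has_long_mode(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 0x80000001: EDX bit 29 = long mode (LM) */
    if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
        return 0;
    return (edx >> 29) & 1;
}

static int cpu_has_pae(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1: EDX bit 6 = PAE */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 0;
    return (edx >> 6) & 1;
}

enum mm_module select_linear_memory_manager(uint64_t total_physical_memory)
{
    if (cpu_has_long_mode())
        return MM_LONG_MODE_64BIT;      /* 64-bit linear memory manager */
    if (total_physical_memory > 0xFFFFFFFFull && cpu_has_pae())
        return MM_PAE_36BIT;            /* "36-bit" (PAE) linear memory manager */
    return MM_PLAIN_32BIT;              /* plain 32-bit "plain paging" manager */
}

The same sort of check is used to pick between the other modules too (single-CPU vs. SMP vs. NUMA scheduler, etc).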
My previous OS had 6 kernels (single-CPU, SMP and NUMA; with and without PAE), but that was when the OS was "32-bit only". I've split the kernel into modules now (several linear memory managers, several schedulers, etc).
The idea is to automatically select the best kernel (or the best kernel modules) without the user doing anything. For a generic OS installation CD all kernels (or kernel modules) would be present for the boot code to choose from, but when the OS is installed you'd only install the kernel (or kernel modules) that are needed for the specific computer.
Ready4Dis wrote:I don't even see the point of putting the GUI as part of your image, what if I have a specific driver for my video card, you're telling me it's better to add that to the image and load that before my fs/io drivers? That just doesn't make much sense, read it off the disk, so when I want to do updates, it's just a matter of replacing a few files rather than re-packing a kernel image. Lets be honest, the speed difference between you loading a generic vesa driver from your image, then the fs and i/o drivers, then the specific video card drivers is going to slow down your boot process rather than speed it up as claimed. What good is a GUI if you haven't loaded anything from disk yet to display, unless you're loading a humungoid OS image from the disk which has all the images, icons, text, backgrounds, etc built in.
The GUI was only an example - I was thinking more of getting the GUI's log-in prompt up, so the user can start typing in their username and password without waiting for disk drives to become ready (which takes Linux more than 10 seconds on the computer I'm using).
In a more general sense, the idea is to do as much as possible in parallel during boot, instead of doing nothing while waiting for the disk drivers, then doing nothing while waiting for file system checks, then doing nothing while waiting for the keyboard driver, then doing nothing while waiting for the network cards, then doing nothing while waiting for DHCP, etc. Part of this is not needing to wait for disk drives and file systems to start before anything else can start. However, even for a "wait then wait then wait" style serial boot it's probably not a bad idea to load everything you need from contiguous sectors to minimize the number of disk seeks (including reading directory information from disk).
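To illustrate the general idea, here's a user-space analogy only (pthreads standing in for kernel threads, sleep() standing in for the hardware delays, and the driver names and delays are just placeholders, not real driver code):

Code:
/* Hedged analogy of "start everything at once": each driver's init runs as
 * its own thread, so the hardware delays overlap instead of adding up. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *init_disk(void *arg)     { (void)arg; sleep(5); puts("disk ready");     return NULL; }
static void *init_network(void *arg)  { (void)arg; sleep(3); puts("network ready");  return NULL; }
static void *init_keyboard(void *arg) { (void)arg; sleep(1); puts("keyboard ready"); return NULL; }

int main(void)
{
    void *(*inits[])(void *) = { init_disk, init_network, init_keyboard };
    pthread_t tid[3];

    /* A serial boot would call each init function in turn (5+3+1 = 9 s);
     * starting them all at once takes about as long as the slowest (5 s). */
    for (int i = 0; i < 3; i++)
        pthread_create(&tid[i], NULL, inits[i], NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(tid[i], NULL);

    puts("everything ready");
    return 0;
}

Of course some things still depend on other things (e.g. DHCP can't start until the ethernet card's driver has started), so in practice it ends up being more like a dependency graph than a flat list of things to start.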
I should also point out that there are differences between OSs. For some OSs there's a large number of small delays and a small number of large delays during boot. For other OSs the hardware initialization is "half-assed" and a lot of these delays are avoided.
For a simple example, some OSs will detect if a PS/2 controller is present, then test the PS/2 controller, then configure the PS/2 controller, then detect which PS/2 devices are present, then test the PS/2 devices, then configure the PS/2 devices, then start using the PS/2 devices. Other OSs will assume a PS/2 controller is present, then assume the PS/2 controller is working correctly, then assume the PS/2 controller is configured, then assume the first PS/2 device is a keyboard, then assume the keyboard works correctly, then assume the keyboard is configured, then start using the keyboard. When something like the keyboard's BAT (Basic Assurance Test) is meant to take between 500 ms and 750 ms it's fairly obvious that the "half-assed" code will be much much faster (no hardware delays at all), but it's also very fragile and inflexible.
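For the keyboard part, the "careful" version might look something like this rough freestanding (ring 0) sketch. The port numbers and reply bytes (0x60/0x64 for the PS/2 controller's data/status ports, 0xFF for "reset and self-test", 0xFA for ACK, 0xAA for "BAT passed") are standard PS/2 keyboard details, but the timeout loops are arbitrary placeholders; real code would use a proper timer and would detect/test/configure the PS/2 controller itself first. The "half-assed" version just skips all of this and starts reading scancodes.

Code:
#include <stdint.h>

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ __volatile__("outb %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint8_t inb(uint16_t port)
{
    uint8_t val;
    __asm__ __volatile__("inb %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

#define PS2_DATA    0x60
#define PS2_STATUS  0x64

/* Wait (with a crude placeholder timeout) for a byte from the keyboard. */
static int ps2_read_byte(uint8_t *out)
{
    for (int i = 0; i < 1000000; i++) {
        if (inb(PS2_STATUS) & 0x01) {       /* status bit 0: output buffer full */
            *out = inb(PS2_DATA);
            return 0;
        }
    }
    return -1;                              /* timed out */
}

int keyboard_reset_and_test(void)
{
    uint8_t reply;

    /* Wait until the controller will accept a byte (status bit 1: input buffer full). */
    for (int i = 0; i < 1000000; i++) {
        if (!(inb(PS2_STATUS) & 0x02))
            break;
    }
    outb(PS2_DATA, 0xFF);                           /* keyboard "reset and self-test" */

    if (ps2_read_byte(&reply) != 0 || reply != 0xFA)
        return -1;                                  /* no keyboard, or no ACK */
    if (ps2_read_byte(&reply) != 0 || reply != 0xAA)
        return -1;                                  /* BAT failed (0xFC/0xFD) or timed out */
    return 0;                                       /* keyboard present and working */
}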
Let's assume the OS's code isn't "half-assed", and it takes 10 seconds to initialize the SCSI controller and drives, 4 seconds to set up networking (ethernet device driver, DHCP, etc), 3 seconds for the video driver to check for faulty display memory, and 1 second to initialize the keyboard. Would you wait 10 seconds for the drives, then 4 seconds for networking, then 3 seconds testing for faulty display memory, then 1 second waiting for PS/2 devices; so that it takes 18 seconds before the user can do anything? Alternatively, would you start the SCSI, network, keyboard and video drivers at the same time, so that the user can start logging in after 3 seconds (as soon as both the keyboard and video drivers have finished) instead of 18 seconds, and so that everything is ready after 10 seconds (instead of 18 seconds)? What if the OS has a distributed file system (and redundancy) and can access files on a remote computer and doesn't need to wait for the disk drives, so that everything is ready after 4 seconds (instead of 18 seconds)?
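Just to make the arithmetic explicit (purely illustrative, using the numbers assumed above):

Code:
#include <stdio.h>

static int max2(int a, int b) { return a > b ? a : b; }

int main(void)
{
    int scsi = 10, net = 4, video = 3, kbd = 1;                  /* seconds */

    printf("serial boot:    %d s\n", scsi + net + video + kbd);  /* 10+4+3+1 = 18 s */
    printf("parallel boot:  %d s\n",
           max2(max2(scsi, net), max2(video, kbd)));             /* slowest driver = 10 s */
    printf("log-in ready:   %d s\n", max2(video, kbd));          /* keyboard + video = 3 s */
    return 0;
}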
Cheers,
Brendan