Re: Theoretical: Why the original 386 design was bad
Posted: Mon Jan 24, 2011 6:27 pm
I think that x86 and 80386 design is very good (can be improved) and that ARM / RISC design is very bad
bontanu wrote:I think that x86 and 80386 design is very good (can be improved) and that ARM / RISC design is very bad

I think you're wrong.
JamesM wrote:Given we're talking about user mode, do you mean LDM/STM with supervisor registers? LDM/STM with user regs in user mode is kind of expected.

AlexExtreme wrote:No, there are supervisor-mode LDM/STM instructions that load from/store to specifically the user-mode copies of registers; those instructions are marked as unpredictable when executed from user/system modes. Sorry, wasn't clear what I was referring to there.

Ahh, the LDM r0, {r1,r2,r3}^ with the hat?
JamesM wrote:Ahh, the LDM r0, {r1,r2,r3}^ with the hat?

That's the one.
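For anyone who hasn't met the hat before, here is a minimal sketch of how it gets used (assuming classic 32-bit ARM, pre-ARMv8, and GCC inline assembly; the function names are mine, not any real API):

    /* Minimal sketch, assuming classic 32-bit ARM (pre-ARMv8) and GCC.
     * In a privileged mode, the ^ suffix on LDM/STM selects the *user-mode*
     * banked copies of the listed registers. Writeback (!) must not be
     * combined with ^. Function names are illustrative only. */
    static inline void save_user_sp_lr(unsigned long buf[2])
    {
        /* Runs in SVC mode, but stores the user-mode sp and lr. */
        __asm__ volatile("stmia %0, {sp, lr}^" : : "r"(buf) : "memory");
    }

    static inline void load_user_sp_lr(const unsigned long buf[2])
    {
        /* Reloads the user-mode sp and lr without leaving SVC mode. */
        __asm__ volatile("ldmia %0, {sp, lr}^" : : "r"(buf) : "memory");
    }

The point is that the supervisor can spill and reload the user's registers without ever switching modes, which is exactly what a context switch wants.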
tom wrote:Well it is a bit sad Intel couldn't foresee that future processors would use more than 1 megabyte of RAM. Moore's law was first declared in 1965 apparently, so Intel should have realised memory requirements would also significantly increase in future.

They knew memory requirements would increase. They just didn't know they'd be maintaining x86 compatibility 30 years later.
tom9876543 wrote:Intel could have got rid of all 286 protections. Paging supports User / Supervisor levels. The 286 had an IO Permission Bitmap, but I will suggest it's not required if IN/OUT is restricted to supervisor-level code.

As has been established... 286 backwards compatibility was a design requirement.
tom9876543 wrote:Getting rid of 286 compatibility would not have been the end of the world for operating system developers, because 286 applications could still run in their own address space under v8086 mode.

No they couldn't. 286 apps depend upon the specifics of the 286...
tom9876543 wrote:Benefits of getting rid of 286 compatibility:
- I counted 18 single byte opcodes that are rarely used and basically wasted. They should have been reallocated to more useful opcodes (in protected mode).

Won't argue there, except that quite a lot of them have already been killed (long mode) and a few of the others have been repurposed in other instructions. For example, see the operand size prefix's use in MMX/SSE.
tom9876543 wrote:- How many CPU cycles are wasted doing segment limit checks etc? Ironically, Intel invented SYSENTER/SYSEXIT because these 286 checks are a total waste of time.

None. Limit checks are cheap and fast, and use little core area. SYSENTER/SYSEXIT and SYSCALL/SYSRET were created because loading GDT entries is a waste of time.
tom9876543 wrote:- How many CPU transistors are wasted to support the GDT, TSS, call gates, task gates etc? Although I admit today 10,000 transistors would be barely noticed.

Next to none. It's mostly a tiny section of microcode ROM. On CPUs where half the die area and more is used for cache, it is truly irrelevant and below the noise floor.
tom9876543 wrote:- The Interrupt Table could have been 4 bytes per interrupt, unlike the convoluted 286-inspired 8-byte design.

Whoo! We save one whole kilobyte of RAM (256 vectors × 4 bytes saved each)! Major benefit!
tom9876543 wrote:- Would have been a chance to eliminate the A20 hack, which was caused by a badly designed 286.

A20 wasn't important to Intel at the time. It is even less so now. It is trivial.
Anyway, it's all theoretical now.
tom9876543 wrote:- How many CPU cycles are wasted doing segment limit checks etc? Ironically, Intel invented SYSENTER/SYSEXIT because these 286 checks are a total waste of time.

Segmentation is still used in relatively modern systems, for small address spaces and for 386-compatible no-execute. The fast traps are just for the common case (read: Microsoft) where the system developer can guarantee that the checks and memory accesses truly are a waste of time.
tom9876543 wrote:- The Interrupt Table could have been 4 bytes per interrupt, unlike the convoluted 286-inspired 8-byte design.

286s used 8-byte IDT entries as well, and that was not because of the address size - if your system call interface is based on interrupts, you need a way to tell which interrupts may be called and which shouldn't.
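As an aside on the SYSENTER point above, here is a rough sketch of what the fast path looks like from user code (assuming IA-32 with GCC inline assembly, and a toy kernel that expects the user's return EIP in EDX and ESP in ECX, mirroring what SYSEXIT reloads; this convention is made up for illustration and is not any real OS's ABI):

    /* Rough sketch only: a far call through a call gate makes the CPU
     * fetch descriptors and run limit/privilege checks; SYSENTER instead
     * takes CS/SS/EIP/ESP from MSRs the kernel programmed once at boot.
     * Assumption: a toy kernel that wants the return EIP in EDX and the
     * user ESP in ECX (the registers SYSEXIT will reload). */
    static inline void fast_syscall_demo(void)
    {
        __asm__ volatile(
            "movl %%esp, %%ecx\n\t"  /* user stack pointer for the kernel  */
            "movl $1f, %%edx\n\t"    /* where SYSEXIT should resume        */
            "sysenter\n\t"
            "1:"                     /* execution continues here on return */
            : : : "ecx", "edx", "memory");
    }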
bontanu wrote:I think that x86 and 80386 design is very good (can be improved) and that ARM / RISC design is very bad

ARM is not really a RISC design. Maybe their design goal from the beginning was to make a RISC CPU, but with Thumb it has evolved into something that looks more like a modern CPU. RISC isn't really a modern design goal anymore; the current trend is variable-length instructions again, and often merged instructions. If you look at AVR32 and Xtensa, you clearly see modern design decisions based on current knowledge of CPU architecture.
Owen wrote:No they couldn't. 286 apps depend upon the specifics of the 286...

You are full of ****.
Combuster wrote:286s used 8-byte IDT entries as well, and that was not because of the address size - if your system call interface is based on interrupts, you need a way to tell which interrupts may be called and which shouldn't.

Yes, you do need a way to tell which interrupts can be called by user-level code. That's 1 bit. If the CPU requires ISR routines to be aligned to 16-byte boundaries, there are 4 spare bits in a 4-byte IDT entry.
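To make that concrete, a sketch of the hypothetical 4-byte entry (to be clear, this is not the real x86 IDT format; the field names are made up):

    #include <stdint.h>

    /* Hypothetical 4-byte IDT entry as proposed above -- NOT the real x86
     * format. With handlers aligned to 16 bytes, address bits 0-3 are free
     * to hold flags. */
    #define IDT_PRESENT       (1u << 0)  /* entry is valid                 */
    #define IDT_USER_CALLABLE (1u << 1)  /* the single "DPL" bit described */
                                         /* bits 2-3 would remain spare    */

    static inline uint32_t make_idt_entry(uint32_t handler, uint32_t flags)
    {
        /* handler must be 16-byte aligned; flags occupy the low 4 bits */
        return (handler & ~0xFu) | (flags & 0xFu);
    }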
Owen wrote:Whoo! We save one whole kilobyte of RAM! Major benefit!

Yes, you can laugh about RAM, but you're forgetting about the CPU cache.
tom9876543 wrote:Overall, I agree the only significant benefit of dropping 286 compatibility would be to allow 18 single byte opcodes to be reallocated to more useful instructions.

Given that the instruction set is microcoded down to an internal RISC representation, I don't see the benefit of this. It would cause code to be a tiny amount smaller - not exactly a big issue.
tom9876543 wrote:Can you please provide exact details of what 286 features a 16 bit protected mode OS/2 application used? Also same for win16 application?

That is an easy one. The 286 processor reinterpreted the use of segment registers from real mode. In real mode, addresses are calculated by shifting the segment four bits left and adding the offset. In 286 protected mode, the segment register (called a selector) loads a 24-bit base (and a 16-bit limit) from the GDT/LDT, and this base is added to the offset to form the physical address. Therefore, 286 protected mode applications can address 16 MB of physical memory, and thus cannot execute in real mode, which can only address 1 MB. The two addressing schemes are completely incompatible. Some 16-bit protected mode applications also define "huge segments" by allocating several adjacent selectors, where all but the last have a 64k limit. Many 16-bit protected mode applications also depend on being able to allocate and manipulate selectors.
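To see why the two schemes can't coexist in one mode, here is a sketch of both translations side by side (types and names are mine; the descriptor is simplified to just the fields mentioned above):

    #include <stdint.h>

    /* Real mode: physical = (segment << 4) + offset, giving roughly 1 MB
     * of reach (the small wrap above 1 MB is the A20 story). */
    static uint32_t real_mode_addr(uint16_t seg, uint16_t off)
    {
        return ((uint32_t)seg << 4) + off;
    }

    /* 286 protected mode: the "segment" value is a selector indexing a
     * descriptor table; the 24-bit base stored there is added instead. */
    struct descriptor {
        uint32_t base;   /* 24-bit segment base  */
        uint16_t limit;  /* 16-bit segment limit */
    };

    static uint32_t prot_mode_addr(const struct descriptor *table,
                                   uint16_t selector, uint16_t off)
    {
        /* low 3 bits of a selector are the RPL and table indicator */
        const struct descriptor *d = &table[selector >> 3];
        /* a real 286 would fault here if off exceeded d->limit */
        return d->base + off;
    }

The same 16-bit value in a segment register thus means two entirely different things in the two modes, which is the incompatibility being described.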