Theoretical: Why the original 386 design was bad

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
bontanu
Member
Posts: 134
Joined: Thu Aug 18, 2005 11:00 pm
Location: Sol. Earth. Europe. Romania. Bucuresti
Contact:

Re: Theoretical: Why the original 386 design was bad

Post by bontanu »

I think that the x86 / 80386 design is very good (it can be improved) and that the ARM / RISC design is very bad ;)
Ambition is a lame excuse for the ones not brave enough to be lazy; Solar_OS http://www.oby.ro/os/
JamesM
Member
Posts: 2935
Joined: Tue Jul 10, 2007 5:27 am
Location: York, United Kingdom
Contact:

Re: Theoretical: Why the original 386 design was bad

Post by JamesM »

bontanu wrote:I think that the x86 / 80386 design is very good (it can be improved) and that the ARM / RISC design is very bad ;)
I think you're wrong ;)
JamesM
Member
Posts: 2935
Joined: Tue Jul 10, 2007 5:27 am
Location: York, United Kingdom
Contact:

Re: Theoretical: Why the original 386 design was bad

Post by JamesM »

AlexExtreme wrote:
JamesM wrote:
AlexExtreme wrote:LDM/STM with user registers are
Given we're talking about user mode, do you mean LDM/STM with supervisor registers? LDM/STM with user regs in user mode is kind of expected ;)
No, there's supervisor mode LDM/STM instructions that load from/store to specifically the user mode copies of registers that are marked as unpredictable when executed from user/system modes. Sorry, wasn't clear what I was referring to there :)
Ahh, the LDM r0, {r1,r2,r3}^ with the hat?
xyzzy
Member
Posts: 391
Joined: Wed Jul 25, 2007 8:45 am
Libera.chat IRC: aejsmith
Location: London, UK
Contact:

Re: Theoretical: Why the original 386 design was bad

Post by xyzzy »

JamesM wrote:Ahh, the LDM r0, {r1,r2,r3}^ with the hat?
That's the one.
TylerH
Member
Posts: 285
Joined: Tue Apr 13, 2010 8:00 pm
Contact:

Re: Theoretical: Why the original 386 design was bad

Post by TylerH »

tom wrote:Well it is a bit sad Intel couldn't foresee that future processors would use more than 1 megabyte of RAM. Moore's law was first stated in 1965 apparently, so Intel should have realised memory requirements would also increase significantly in the future.
They knew memory requirements would increase. They just didn't know they'd be maintaining x86 compatibility 30 years later.
tom9876543
Member
Posts: 170
Joined: Wed Jul 18, 2007 5:51 am

Re: Theoretical: Why the original 386 design was bad

Post by tom9876543 »

So in summary
===========

Intel could have designed a "pure" 386 by eliminating 286 backwards compatibility (making the 386 only compatible with the 8086).

Intel could have got rid of all 286 protections. Paging supports User / Supervisor levels. The 286 had an IO Permission Bitmap, but I will suggest it's not required if IN/OUT is restricted to Supervisor Level code.

Getting rid of 286 compatibility would not have been the end of the world for operating system developers, because 286 applications could still run in their own address space under v8086 mode.

Benefits of getting rid of 286 compatibility:
- I counted 18 single byte opcodes that are rarely used and basically wasted. They should have been reallocated to more useful opcodes (in protected mode).
- How many CPU cycles are wasted doing segment limit checks etc? Ironically, Intel invented SYSENTER/SYSEXIT because these 286 checks are a total waste of time.
- How many CPU transistors are wasted to support GDT, TSS, Call Gates, Task Gates etc? Although I admit today 10,000 transistors would be barely noticed.
- The Interrupt Table could have been 4 bytes per interrupt, unlike the convoluted 286 inspired 8 byte design.
- Would have been a chance to eliminate the A20 hack, which was caused by a badly designed 286.

Anyway, it's all theoretical now.
Owen
Member
Posts: 1700
Joined: Fri Jun 13, 2008 3:21 pm
Location: Cambridge, United Kingdom
Contact:

Re: Theoretical: Why the original 386 design was bad

Post by Owen »

tom9876543 wrote:Intel could have got rid of all 286 protections. Paging supports User / Supervisor levels. The 286 had an IO Permission Bitmap, but I will suggest it's not required if IN/OUT is restricted to Supervisor Level code.
As has been established, 286 backwards compatibility was a design requirement.
Getting rid of 286 compatibility would not have been the end of the world for operating system developers, because 286 applications could still run in their own address space under v8086 mode.
No they couldn't. 286 apps depend upon the specifics of the 286...
Benefits of getting rid of 286 compatibility:
- I counted 18 single byte opcodes that are rarely used and basically wasted. They should have been reallocated to more useful opcodes (in protected mode).
Won't argue there. Except that quite a lot of them have been killed (in long mode) and a few of the others have been repurposed in other instructions. For example, see the operand size prefix's use in MMX/SSE.

Of course, killing the single-byte inc/decs was a much better coup for AMD.
- How many CPU cycles are wasted doing segment limit checks etc? Ironically, Intel invented SYSENTER/SYSEXIT because these 286 checks are a total waste of time.
None. Limit checks are cheap and fast, and use little core area. SYSENTER/SYSEXIT and SYSCALL/SYSRET were created because loading GDT entries is a waste of time (see the MSR sketch at the end of this post).
- How many CPU transistors are wasted to support GDT, TSS, Call Gates, Task Gates etc? Although I admit today 10,000 transistors would be barely noticed.
Next to none. It's mostly a tiny section of microcode ROM. On CPUs where half of their area and more is used for cache, it is truly irrelevant and below the noise floor.
- The Interrupt Table could have been 4 bytes per interrupt, unlike the convoluted 286 inspired 8 byte design.
Whoo! We save one whole kilobyte of RAM! Major benefit! :roll:
- Would have been a chance to eliminate the A20 hack, which was caused by a badly designed 286.

Anyway, it's all theoretical now.
A20 wasn't important to Intel at the time. It is even less so now. It is trivial.
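Regarding the fast-trap point above: a minimal C sketch of why SYSENTER avoids the descriptor loads - the kernel CS/ESP/EIP it uses come straight from MSRs rather than from a gate descriptor. The MSR numbers are the architectural ones; wrmsr() and the helper are hypothetical names used only for illustration.

Code:
#include <stdint.h>

/* Architectural MSRs read by SYSENTER: the CPU takes the kernel CS/SS/ESP/EIP
   from these registers, so the user-to-kernel transition does no
   descriptor-table memory reads and no limit checks. */
#define IA32_SYSENTER_CS   0x174u   /* kernel code selector (SS is CS + 8) */
#define IA32_SYSENTER_ESP  0x175u   /* kernel stack pointer                */
#define IA32_SYSENTER_EIP  0x176u   /* kernel entry point                  */

/* Hypothetical wrapper around the WRMSR instruction. */
void wrmsr(uint32_t msr, uint64_t value);

/* What a kernel would program once at boot (illustrative helper, names mine). */
static void setup_fast_syscall(uint32_t kernel_cs, uint32_t kernel_stack,
                               uint32_t kernel_entry)
{
    wrmsr(IA32_SYSENTER_CS,  kernel_cs);
    wrmsr(IA32_SYSENTER_ESP, kernel_stack);
    wrmsr(IA32_SYSENTER_EIP, kernel_entry);
}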
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance
Contact:

Re: Theoretical: Why the original 386 design was bad

Post by Combuster »

- How many CPU cycles are wasted doing segment limit checks etc? Ironically, Intel invented SYSENTER/SYSEXIT because these 286 checks are a total waste of time.
Segmentation is still used in relatively modern systems, for small address spaces and 386-compatible no-execute. The fast traps are just for the common case (read: Microsoft) where the system developer can guarantee that the checks and memory accesses truly are a waste of time.
- The Interrupt Table could have been 4 bytes per interrupt, unlike the convoluted 286 inspired 8 byte design.
The 286 used an 8-byte IDT entry as well, and that was not because of the address size - if your system call interface is based on interrupts, you need a way to tell which interrupts may be called and which shouldn't.
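For reference, here is roughly what the 386's 8-byte interrupt gate looks like, sketched as a C struct (field names are mine); the 2-bit DPL inside type_attr is the part that says whether an INT n from user code is allowed.

Code:
#include <stdint.h>

/* 80386 protected-mode interrupt/trap gate descriptor - 8 bytes per vector. */
struct idt_gate {
    uint16_t offset_low;   /* handler offset, bits 0..15                  */
    uint16_t selector;     /* code segment selector for the handler       */
    uint8_t  reserved;     /* unused, should be zero                      */
    uint8_t  type_attr;    /* P(1) | DPL(2) | 0 | type(4); 0xE = 32-bit
                              interrupt gate                              */
    uint16_t offset_high;  /* handler offset, bits 16..31                 */
} __attribute__((packed)); /* packed for clarity; layout is naturally 8 bytes */

/* INT n from ring 3 only succeeds if the gate's DPL is 3; hardware
   interrupts ignore the DPL check. */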
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
OSwhatever
Member
Posts: 595
Joined: Mon Jul 05, 2010 4:15 pm

Re: Theoretical: Why the original 386 design was bad

Post by OSwhatever »

bontanu wrote:I think that the x86 / 80386 design is very good (it can be improved) and that the ARM / RISC design is very bad ;)
ARM is not really a RISC design. Maybe from the beginning their design goal was to make a RISC CPU, but with Thumb it has evolved into something that looks like a more modern CPU. RISC isn't really a modern design goal anymore; the current trend is variable-length instructions again, and often merged instructions. If you look at AVR32 and Xtensa, you clearly see modern design decisions based on current knowledge of CPU architecture.

ARM is almost the new x86, especially now that LPAE is a copy of x86 PAE.
tom9876543
Member
Posts: 170
Joined: Wed Jul 18, 2007 5:51 am

Re: Theoretical: Why the original 386 design was bad

Post by tom9876543 »

Owen wrote:No they couldn't. 286 apps depend upon the specifics of the 286...
You are full of ****.

Go back to post where I asked:

Can you please provide exact details of what 286 features a 16 bit protected mode OS/2 application used? Also same for win16 application?

Still waiting for an answer.....
tom9876543
Member
Posts: 170
Joined: Wed Jul 18, 2007 5:51 am

Re: Theoretical: Why the original 386 design was bad

Post by tom9876543 »

Combuster wrote:The 286 used an 8-byte IDT entry as well, and that was not because of the address size - if your system call interface is based on interrupts, you need a way to tell which interrupts may be called and which shouldn't.
Yes, you do need a way to tell which interrupts can be called by User level code. That's 1 bit. If the CPU requires ISR routines to be aligned to 16-byte boundaries, there are 4 spare bits in a 4-byte IDT entry.
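A rough sketch of the hypothetical 4-byte entry being proposed here (the layout and all names are invented for illustration - no real x86 IDT works this way): with handlers aligned to 16 bytes, the low 4 bits of the address are free for flags such as a user-callable bit.

Code:
#include <stdint.h>

/* Hypothetical 4-byte vector entry: a 32-bit handler address whose low
   4 bits are reused as flags because the handler is 16-byte aligned. */
typedef uint32_t vec_entry;

#define VEC_USER_CALLABLE 0x1u   /* INT n permitted from user-level code */
#define VEC_FLAG_MASK     0xFu

static vec_entry vec_make(uint32_t handler_addr, uint32_t flags)
{
    return (handler_addr & ~VEC_FLAG_MASK) | (flags & VEC_FLAG_MASK);
}

static uint32_t vec_handler(vec_entry e) { return e & ~VEC_FLAG_MASK; }
static int      vec_user_ok(vec_entry e) { return (e & VEC_USER_CALLABLE) != 0; }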
Last edited by tom9876543 on Wed Jan 26, 2011 2:17 pm, edited 1 time in total.
tom9876543
Member
Posts: 170
Joined: Wed Jul 18, 2007 5:51 am

Re: Theoretical: Why the original 386 design was bad

Post by tom9876543 »

Owen wrote: Whoo! We save one whole kilobyte of RAM! Major benefit! :roll:
Yes, you can laugh about RAM, but then you forgot about the CPU cache.

What's the size of the CPU cache? Especially on older CPUs like the 486 (8KB) or Pentium (8KB data + 8KB code)? 1KB makes a noticeable difference there.


Overall, I agree the only significant benefit of dropping 286 compatibility would be to allow 18 single byte opcodes to be reallocated to more useful instructions.
JamesM
Member
Posts: 2935
Joined: Tue Jul 10, 2007 5:27 am
Location: York, United Kingdom
Contact:

Re: Theoretical: Why the original 386 design was bad

Post by JamesM »

tom9876543 wrote:Overall, I agree the only significant benefit of dropping 286 compatibility would be to allow 18 single byte opcodes to be reallocated to more useful instructions.
Given that the instruction set is microcoded down to an internal RISC representation, I don't see the benefit of this. It would cause code to be a tiny amount smaller - not exactly a big issue.
rdos
Member
Posts: 3297
Joined: Wed Oct 01, 2008 1:55 pm

Re: Theoretical: Why the original 386 design was bad

Post by rdos »

IMO, the 386 processor design is one of the best processor designs ever done. The whole protection model is superior to the current "flat" memory models protected only by paging. Keeping compatibility with the 286 protection model and real mode (V86) was done so well that it actually did not break DOS or 16-bit protected mode applications.

In retrospect, they should have done this differently:
* Selectors should have been extended to 32 bits.
* Maybe the TSS feature should have been done differently (or perhaps omitted).

In comparison to the really bad design of the 64-bit extension for x86, the 386 design was a major technological feat. OTOH, Intel was the first to totally break the x86 architecture with Itanium, but AMD too broke the whole design with their extension. It is easy to change AMD's spec so that it does not break backwards compatibility with protected mode, and this would not have affected performance, yet AMD chose to break the x86 architecture.
rdos
Member
Posts: 3297
Joined: Wed Oct 01, 2008 1:55 pm

Re: Theoretical: Why the original 386 design was bad

Post by rdos »

tom9876543 wrote:Can you please provide exact details of what 286 features a 16 bit protected mode OS/2 application used? Also same for win16 application?
That is an easy one. The 286 processor reinterpreted the use of segment registers from real mode. In real mode, addresses are calculated by shifting the segment four bits left and adding the offset. In 286 protected mode, the segment register (called a selector) loads a 24-bit base (and a 16-bit limit) from the GDT/LDT, and this base is added to the offset to form the physical address. Therefore, 286 protected mode applications can address 16MB of physical memory, and thus cannot execute in real mode, which can only address 1MB. The addressing scheme is completely incompatible. Some 16-bit protected mode applications also define "huge segments" by allocating several adjacent selectors, where all but the last have a 64KB limit. Many 16-bit protected mode applications also depend on being able to allocate and manipulate selectors.
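A rough C sketch of the two address calculations described above (the descriptor structure is simplified and all names are mine):

Code:
#include <stdint.h>

/* Real mode: physical = (segment << 4) + offset; this can reach just past
   1MB, which is where the A20 wrap-around issue comes from. */
static uint32_t real_mode_addr(uint16_t seg, uint16_t off)
{
    return ((uint32_t)seg << 4) + off;
}

/* 286 protected mode: the selector picks a descriptor holding a 24-bit base
   and a 16-bit limit; physical = base + offset, giving a 16MB reach. */
struct descriptor286 {
    uint32_t base;    /* only 24 bits used on the 286 */
    uint16_t limit;
};

static uint32_t prot_mode_addr(const struct descriptor286 *table,
                               uint16_t selector, uint16_t off)
{
    const struct descriptor286 *d = &table[selector >> 3];  /* index in bits 3..15 */
    /* A real 286 would raise a fault here if off > d->limit. */
    return d->base + off;
}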