How do you think CPUs will keep improving & getting faster?
Posted: Tue Oct 07, 2014 2:39 pm
by LieutenantHacker
You think we've hit the limits? Intel isn't pushing clock speed anymore, and instead just keeps shrinking its chips. Big jumps in core count aren't expected either, and pretty much everything seems to be running into a wall (no obvious room for succession).
I believe it's very possible for CPUs to be even faster and more efficiently designed than they are today, but I suspect the manufacturers won't be able to stretch things much further due to costs and convenience.
I fear that with CPU speeds stuck where they've been for the last few years, we won't have easily attainable general-purpose computing power for massive-scale programs that no single CPU of today can handle. A lot of people are going down the GPGPU road because it seems to be the only way to make some massive-scale programs work on these machines. Some of those massive programs are works in progress,
like this PlayStation 3 emulator, which is believed to never run acceptably on current processors, even as it improves, without some serious work (GPGPU, for one).
How and when will we see better processors? How do you think they'll be able to get faster, improve their microarchitecture, keep cooling manageable, and still remain conveniently accessible and usable?
Re: How do you think CPUs will keep improving & getting fast
Posted: Tue Oct 07, 2014 2:56 pm
by seuti
The AMD solution is to add more cores.
Re: How do you think CPUs will keep improving & getting fast
Posted: Tue Oct 07, 2014 3:34 pm
by max
I actually have no idea about electronics, but I always wondered why they would want to make their processors smaller and smaller; why do they not use the space they saved/build bigger processors to add more cool circuit stuff and whatever? Can someone explain this?
Re: How do you think CPUs will keep improving & getting fast
Posted: Tue Oct 07, 2014 11:04 pm
by thepowersgang
Temperature is one of the largest current limits (that, and the HUGE size of the caches on the die).
I saw an article a few months back that described applying the techniques of old vacuum tubes to micro-scale electronics, which would allow faster clock speeds with less propagation delay (the mini vacuum tubes don't have as much capacitance as FETs do). They don't even have to be in a vacuum, as the gate distance is small enough that there is next to no chance of a collision occurring.
(A link on the idea, not sure this is what I originally read -
http://www.extremetech.com/extreme/1850 ... licon-fets)
Re: How do you think CPUs will keep improving & getting fast
Posted: Wed Oct 08, 2014 12:51 am
by Combuster
I still think that different architectures and teaching people to use parallelism properly are key to improvements.
Re: How do you think CPUs will keep improving & getting fast
Posted: Wed Oct 08, 2014 1:03 am
by iocoder
max wrote:I actually have no idea about electronics, but I always wondered why they would want to make their processors smaller and smaller; why do they not use the space they saved/build bigger processors to add more cool circuit stuff and whatever? Can someone explain this?
AFAIK when you make the chip smaller, you can make more integrated circuits out of a single silicon wafer, so the cost drops considerably. Another reason is that a smaller size means smaller delays; if the chip is large, signals take longer to travel from one part of the chip to another...
Please anyone correct me if I am wrong.
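As a rough worked example of the cost argument (numbers invented purely for illustration, ignoring yield and edge losses): a 300 mm wafer has an area of about pi x (150 mm)^2 ≈ 70,700 mm^2, so a 100 mm^2 die gives roughly 707 candidate dies per wafer, while shrinking the same design to 50 mm^2 gives roughly 1,414. Processing the wafer costs about the same either way, so the cost per die roughly halves.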
Re: How do you think CPUs will keep improving & getting fast
Posted: Wed Oct 08, 2014 2:04 am
by Owen
max wrote:I actually have no idea about electronics, but I always wondered why they would want to make their processors smaller and smaller; why do they not use the space they saved/build bigger processors to add more cool circuit stuff and whatever? Can someone explain this?
Smaller means: everything is closer together, so smaller speed-of-light delays. FET gates are smaller, so lower capacitance. You can normally reduce the voltage somewhat, so lower power. The first two make your design faster; the last means more of your power budget is available to do stuff with.
It also means more leakage (an effect which really took hold around 130-90nm), which basically means "quantum effects apply now" and electrons occasionally tunnel through transistors. This means that everything is switched on a little all the time, and so your chip is wasting power even when it isn't doing anything.
The smaller you go, the more leakage affects you (which increases your static power draw), and therefore the larger proportion of total power it becomes. This is why things have moved from clock gating (where you just turn off the clock to a portion of the design) to power gating.
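To put a rough first-order formula behind the voltage and leakage points (the standard CMOS approximation, not something spelled out above): switching power is about

P_dynamic ≈ a * C * V^2 * f

so smaller capacitance and lower voltage pay off directly, while leakage adds

P_static ≈ V * I_leak

which is burned whether or not anything switches. Each shrink tends to improve C and V but make I_leak worse, so the static term keeps growing as a share of the total, which is why power gating became necessary on top of clock gating.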
They do quite often use the extra area a die shrink gives you to add more features. If you look at Intel's tick/tock strategy, for example, on a tick they shrink the previous core, and then the tock is an improved core which normally uses (some of) the extra space, as well as making design improvements.
Re: How do you think CPUs will keep improving & getting fast
Posted: Wed Oct 08, 2014 6:29 am
by Brendan
Hi,
LieutenantHacker wrote:You think we've hit the limits?
No (yes).
What's happened is that, for single-thread performance, we've picked all the low hanging fruit. There's still some fruit left to pick, but it's very hard to reach. CPU manufacturers will keep trying, and will keep improving single-thread performance, but the improvements are going to be small (e.g. 5% faster than the previous generation) and nowhere near what we saw last century - e.g. performance differences from 80386 (4 MIPS) to 80486 (11 MIPS) to Pentium (188 MIPS) to Pentium III (3000 MIPS).
The "simple" solution is adding more cores. This is probably the hardest "simple" solution that's ever existed; because the traditional procedural programming model doesn't lend itself to scalable solutions (locks are hard to get right). Getting people to shift to a different programming model (e.g. the
actor model) that has
proven scalability advantages is the most promising way forward.
However, given that (e.g.) one of the fastest growing programming languages (python)
still doesn't even have any concurrency at all (even though we've had multi-core for over 10 years now), getting people to shift to a whole new programming model is going to be a massive challenge. What I think will happen is that most programmers will continue to suck, and as the number of cores increases most programmers will just suck more. It's going to take another 20 years before "average" programmers are able to write software capable of handling the hardware we have now.
Cheers,
Brendan
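As a concrete illustration of the message-passing style Brendan is talking about, here is a minimal sketch in plain C with POSIX threads (not code from the post above; the mailbox design and names are invented for illustration). The actor owns its state and only ever touches it from its own thread; everyone else just posts messages:
Code: Select all
/* Minimal actor sketch: the actor's state (total) is only touched by the
 * actor's own thread; other threads communicate by posting messages. */
#include <pthread.h>
#include <stdio.h>

#define MAILBOX_SIZE 128

typedef struct {
    int             messages[MAILBOX_SIZE];  /* ring buffer of pending messages */
    int             head, tail, count;
    pthread_mutex_t lock;                    /* protects the mailbox, nothing else */
    pthread_cond_t  not_empty;
} mailbox_t;

typedef struct {
    mailbox_t mbox;
    long      total;      /* private state: no lock needed, only the actor touches it */
    pthread_t thread;
} actor_t;

static void mailbox_post(mailbox_t *m, int msg)
{
    pthread_mutex_lock(&m->lock);
    if (m->count < MAILBOX_SIZE) {           /* keep the sketch short: drop on overflow */
        m->messages[m->tail] = msg;
        m->tail = (m->tail + 1) % MAILBOX_SIZE;
        m->count++;
        pthread_cond_signal(&m->not_empty);
    }
    pthread_mutex_unlock(&m->lock);
}

static int mailbox_take(mailbox_t *m)
{
    pthread_mutex_lock(&m->lock);
    while (m->count == 0)
        pthread_cond_wait(&m->not_empty, &m->lock);
    int msg = m->messages[m->head];
    m->head = (m->head + 1) % MAILBOX_SIZE;
    m->count--;
    pthread_mutex_unlock(&m->lock);
    return msg;
}

static void *actor_main(void *arg)
{
    actor_t *a = arg;
    for (;;) {
        int msg = mailbox_take(&a->mbox);
        if (msg < 0)            /* negative message = shut down */
            break;
        a->total += msg;        /* messages are handled one at a time, so no races */
    }
    printf("actor processed a total of %ld\n", a->total);
    return NULL;
}

int main(void)
{
    actor_t a = { .total = 0 };              /* zero-initialises the mailbox too */
    pthread_mutex_init(&a.mbox.lock, NULL);
    pthread_cond_init(&a.mbox.not_empty, NULL);
    pthread_create(&a.thread, NULL, actor_main, &a);

    for (int i = 1; i <= 100; i++)           /* senders never touch a.total directly */
        mailbox_post(&a.mbox, i);
    mailbox_post(&a.mbox, -1);               /* tell the actor to stop */

    pthread_join(a.thread, NULL);
    return 0;
}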
Re: How do you think CPUs will keep improving & getting fast
Posted: Wed Oct 08, 2014 7:14 am
by embryo
Brendan wrote:It's going to take another 20 years before "average" programmers are able to write software capable of handling the hardware we have now.
Or there will be compilers that can do most of a programmer's job.
Re: How do you think CPUs will keep improving & getting fast
Posted: Wed Oct 08, 2014 8:00 am
by Owen
Brendan wrote:However, given that (e.g.) one of the fastest growing programming languages (Python) still can't even execute threads in parallel (the GIL serialises them, even though we've had multi-core for over 10 years now), getting people to shift to a whole new programming model is going to be a massive challenge. What I think will happen is that most programmers will continue to suck, and as the number of cores increases most programmers will just suck more. It's going to take another 20 years before "average" programmers are able to write software capable of handling the hardware we have now.
Cheers,
Brendan
For a lot of the uses of Python, the concurrency approach is to run multiple instances of the program. This isn't always possible, but it's applicable to a wide variety of apps (especially, as Brendan will be delighted to think, the ones which scale best).
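A hedged sketch of that approach, in C rather than Python (the worker body and the instance count are just placeholders): instead of sharing memory between threads, the parent forks N independent copies and lets the OS spread them across cores:
Code: Select all
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Placeholder for whatever one instance of the program actually does. */
static void do_work(int id)
{
    printf("instance %d running as pid %d\n", id, (int)getpid());
}

int main(void)
{
    const int instances = 4;                 /* e.g. one per core */

    for (int i = 0; i < instances; i++) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {                      /* child: one independent copy, no shared state */
            do_work(i);
            _exit(0);
        }
    }

    for (int i = 0; i < instances; i++)
        wait(NULL);                          /* parent just reaps the children */
    return 0;
}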
Re: How do you think CPUs will keep improving & getting fast
Posted: Wed Oct 08, 2014 2:57 pm
by AndrewAPrice
You can find 6- and 8-core processors at reasonable cost now. I think core counts are going to continue to grow, and it won't be long until we have 16-core desktop processors.
What would the programming model be like?
I'd like to see some kind of asynchronous parallel event model.
For example, node.js is event-based, but it's single threaded and non-blocking, and that removes a lot of OS overhead.
Here's an example of this programming model:
Code: Select all
var write_log = function(msg) {
    file.open("log.txt", function(success, handle) {
        if (!success) return;
        file.write(handle, msg + "\n", function(success) {
            file.close(handle, null);
        });
    });
};
Node.js is inherently single threaded. When the main body finishes, it returns to an event loop that waits for the next event. What about an OS that supports lightweight, short-lived threads, where, when an event is triggered, a thread is spawned immediately, running the handler in parallel?
Spawning a new thread could be as simple as:
Code: Select all
spawn(function() {
// I'm in another thread!
});
Re: How do you think CPUs will keep improving & getting fast
Posted: Wed Oct 08, 2014 3:27 pm
by SpyderTL
MessiahAndrw wrote:Spawning a new thread could be as simple as:
Code: Select all
spawn(function() {
// I'm in another thread!
});
How would you pass in parameters?
Do you have access to local variables in the parent scope?
Code: Select all
int i = 0;
spawn(function() {
i++;
});
Who is responsible for synchronizing access to these variables?
Re: How do you think CPUs will keep improving & getting fast
Posted: Wed Oct 08, 2014 6:02 pm
by AndrewAPrice
SpyderTL wrote:MessiahAndrw wrote:Spawning a new thread could be as simple as:
Code: Select all
spawn(function() {
// I'm in another thread!
});
How would you pass in parameters?
Do you have access to local variables in the parent scope?
Yes. Internally you could pass a function pointer along with a pointer to the closure block, and a high-level language compiler can work out the details. In C, that would translate to a void* pointer (there's a small sketch of that at the end of this post).
Code: Select all
int i = 0;
spawn(function() {
i++;
});
Who is responsible for synchronizing access to these variables?
In the model in my head it'd be the programmer and/or compiler.
In an actor-based model, you could lock the actor when an event on it is triggered, essentially making it synchronous.
It's not a perfect model, just something I threw out there.
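To make the function-pointer-plus-void* idea concrete, here is a minimal C sketch of what a compiler might effectively generate for the i++ example (pthread_create stands in for the hypothetical spawn(), and the explicit mutex is just one possible answer to the synchronisation question):
Code: Select all
#include <pthread.h>
#include <stdio.h>

/* The "closure block": every variable the spawned function captures,
 * spelled out by hand, plus whatever is needed to synchronise access. */
struct closure {
    int             *i;      /* pointer back to the parent's local variable */
    pthread_mutex_t *lock;   /* the programmer (or compiler) decides how i is protected */
};

/* What the body of spawn(function() { i++; }) boils down to in C. */
static void *spawned_body(void *arg)
{
    struct closure *c = arg;
    pthread_mutex_lock(c->lock);
    (*c->i)++;
    pthread_mutex_unlock(c->lock);
    return NULL;
}

int main(void)
{
    int i = 0;
    pthread_mutex_t lock;
    pthread_mutex_init(&lock, NULL);

    struct closure c = { &i, &lock };

    pthread_t t;
    pthread_create(&t, NULL, spawned_body, &c);  /* the void* carries the closure */
    pthread_join(t, NULL);                       /* parent must outlive the captured locals */

    printf("i = %d\n", i);
    pthread_mutex_destroy(&lock);
    return 0;
}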
Re: How do you think CPUs will keep improving & getting fast
Posted: Fri Oct 10, 2014 2:14 pm
by onlyonemac
I'm still waiting for 128-bit CPUs to come out (not that I want one personally, because that would cause too many compatibility problems, but I still don't know what's happened to that...), and also for people to actually start getting the true performance out of their CPUs rather than using up all the extra power on the bloat that comes with modern operating systems. My mother's new Windows 7 computer (2 GB RAM, 2.2 GHz dual-core CPU) performs only marginally faster than her old XP one (1 GB RAM, 2.2 GHz single-core CPU), which had exactly half the specifications of the new one. (On the other hand, my new Linux laptop (4 GB RAM, 2 GHz dual-core CPU) far outperforms my old Linux desktop (256 MB RAM, 1 GHz single-core CPU), which was, to be fair, rather outdated and had far lower specs.)
Re: How do you think CPUs will keep improving & getting fast
Posted: Sat Oct 11, 2014 9:11 pm
by AndrewAPrice
What would you use a 128-bit CPU for? It will take some time to exhaust the capacity of 64-bit address spaces, and while I can see the benefits of 128-bit registers (calculating galactic distances down to the millimetre in space simulators, etc.), that could be something better handled by a 128-bit ALU extension.