Scalability
Posted: Tue May 14, 2013 5:38 am
by dozniak
Re: Scalability
Posted: Tue May 14, 2013 6:41 am
by bluemoon
For general usage, where connections are independent (i.e. normal web servers) and the logic lives in database server clusters, it is cheaper and easier to just double the metal; this is why load balancers exist.
With 10M connections stuck inside one single machine, the cost of failure (including hardware failure) is exponential.
As a side note, the article makes a bad example: nobody cares much about delivering static content, and it's dynamic content that counts.
With PHP, JSP, ASP, etc., it just doesn't make sense to do 10M TPS of useful work plus one DB trip each, even with pre-compiled pages, a code cache, and a DB cache.
The bottleneck exists elsewhere too.
However, for extreme situations, I agree there is a need to push the limit.
Re: Scalability
Posted: Tue May 14, 2013 7:41 am
by Mikemk
Interesting concepts. However, I disagree with some of them.
Firstly, he suggested making a userspace application on top of unix which handles the connections itself rather than relying on unix. IMHO, it would be far more efficient to implement it as the kernel itself, and not implement general-purpose features such as multitasking. Granted, some basic input and output would be necessary.
Secondly, why do the connections all have to be to the same machine? What about something like this:
Code: Select all
Client <---------------------------\
  \/                               |
DNS Server                         |
  \/                               |
Frontend machine*                  |
  \/         \/         \/         |
Control 1, Control 2, etc. -------/
  \/         \/         \/
High speed data storage (SSD anyone?)
*Basically another DNS server, which changes its address based on control machine availability
Better software would of course improve this. But hardware is also important.
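The "pick an available control machine" step on the frontend could be sketched like this in C. The hostnames and port here are made up for illustration, and a real frontend would answer DNS queries rather than connect on demand; this just shows the availability probe:

```c
#include <stdio.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

/* Probe a host with a TCP connect; return 0 if reachable, -1 otherwise. */
static int reachable(const char *host, const char *port) {
    struct addrinfo hints = {0}, *res;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    int ok = (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0) ? 0 : -1;
    if (fd >= 0)
        close(fd);
    freeaddrinfo(res);
    return ok;
}

int main(void) {
    /* Hypothetical control-machine pool. */
    const char *controls[] = { "control1.example.com", "control2.example.com" };
    for (size_t i = 0; i < sizeof controls / sizeof controls[0]; i++) {
        if (reachable(controls[i], "80") == 0) {
            printf("route clients to %s\n", controls[i]);
            return 0;
        }
    }
    printf("no control machine available\n");
    return 1;
}
```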
Re: Scalability
Posted: Tue May 14, 2013 8:00 am
by dozniak
m12 wrote:Firstly, he suggested making a userspace application on top of unix which handles the connections itself rather than relying on unix.
My thoughts exactly - getting rid of the unix kernel would solve the problem (since the unix kernel IS the problem here). It would also remove the layers of kernel/usermode switching and virtual address space mapping needed just to deliver some data.
Re: Scalability
Posted: Tue May 14, 2013 8:56 am
by bluemoon
This is going seriously wrong.
Any sane OS will not do a context switch upon every IRQ or packet; it would surprise me if unix did.
It's more practical to queue thousands of events and have the application fetch them all at once in its own time slice, so no, context switching is not the bottleneck here.
Furthermore, nobody is using the original unix kernel now; it has been well redesigned a few times. That includes linux too, except for the "well" part.
Re: Scalability
Posted: Tue May 14, 2013 9:41 am
by Mikemk
@bluemoon, I interpreted the article as referring to unix clones, which includes linux.