Scalability

dozniak
Member
Posts: 723
Joined: Thu Jul 12, 2012 7:29 am
Location: Tallinn, Estonia

Scalability

Post by dozniak »

Learn to read.
bluemoon
Member
Posts: 1761
Joined: Wed Dec 01, 2010 3:41 am
Location: Hong Kong

Re: Scalability

Post by bluemoon »

For general usage, where connections are independent (i.e. normal web servers) or the logic runs in database server clusters, it is cheaper and easier to just double the metal; this is why load balancers exist.
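A minimal sketch of the "double the metal" idea: a round-robin balancer hands each new connection to the next backend in the pool, so adding capacity is just appending another machine to the list. The hostnames here are hypothetical.

```python
from itertools import cycle

# Hypothetical pool of identical machines; doubling capacity just means
# appending another address to this list.
BACKENDS = ["10.0.0.1:80", "10.0.0.2:80"]

class RoundRobinBalancer:
    """Hand each incoming connection to the next backend in turn."""
    def __init__(self, backends):
        self._ring = cycle(backends)

    def pick(self):
        return next(self._ring)

lb = RoundRobinBalancer(BACKENDS)
assignments = [lb.pick() for _ in range(4)]
print(assignments)  # alternates between the two backends
```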

With 10M connections stuck inside one single machine, the cost of failure (including hardware failure) is enormous.
As a side note, the article makes a bad example: no one cares much about delivering static content; it's dynamic content that counts.
With PHP, JSP, ASP, etc., 10M TPS doing useful work plus one DB round trip just doesn't make sense, even with pre-compiled pages, code caches and DB caches.
The bottleneck exists elsewhere too.

However, for extreme situations, I agree there is a need to push the limit.
Mikemk
Member
Member
Joined: Sat Oct 22, 2011 12:27 pm

Re: Scalability

Post by Mikemk »

Interesting concepts. However, I disagree with some of them.
Firstly, he suggested making a userspace application on top of unix which handles the connections itself rather than relying on unix. IMHO, it would be far more efficient to implement it as the kernel itself, and not implement general-purpose features such as multitasking. Granted, some basic input and output would be necessary.
Secondly, why do the connections have to be to the same machine? What about something like this:

Code: Select all

Client <----------------------\
  \/                          |
DNS Server                    |
  \/                          |
Frontend machine*             |
  \/      \/      \/          |
Control 1, Control 2, etc. ---/
   \/\     /\/     /\/
High speed data storage (SSD anyone?)
*Basically another DNS server, which changes its address based on control machine availability
Better software would of course improve this. But hardware is also important.
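The frontend footnote above can be sketched as a DNS-like resolver that only ever answers with a control machine it currently believes is up, failing over when a health check reports one down. The hostnames and the availability map are hypothetical; a real frontend would run actual health checks rather than read a static dict.

```python
# Hypothetical availability map, as a health checker might maintain it.
CONTROL_MACHINES = {
    "control1.example": True,   # up
    "control2.example": False,  # down (failed a health check)
}

def resolve(availability):
    """Return the first available control machine, or None if all are down."""
    for host, up in availability.items():
        if up:
            return host
    return None

print(resolve(CONTROL_MACHINES))  # control1.example
```

If control1 fails, the same lookup transparently starts answering with the next machine that is still up, which is the failover behaviour the diagram relies on.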
Last edited by Mikemk on Tue May 14, 2013 9:45 am, edited 1 time in total.
Programming is 80% Math, 20% Grammar, and 10% Creativity <--- Do not make fun of my joke!
If you're new, check this out.
dozniak
Member
Posts: 723
Joined: Thu Jul 12, 2012 7:29 am
Location: Tallinn, Estonia

Re: Scalability

Post by dozniak »

m12 wrote:Firstly, he suggested making a userspace application on top of unix which handles the connections itself rather than relying on unix.
My thoughts exactly: getting rid of the unix kernel would solve the problem (since the unix kernel IS the problem here). It would also remove the layers of kernel/user-mode switching and virtual address space mapping needed just to deliver some data.
Learn to read.
bluemoon
Member
Posts: 1761
Joined: Wed Dec 01, 2010 3:41 am
Location: Hong Kong

Re: Scalability

Post by bluemoon »

This is going seriously wrong.

No sane OS will do a context switch upon every IRQ or packet; it would surprise me if unix did.
It's more practical to queue thousands of events and let the application fetch them all at once in its own time slice, so no, context switching is not the bottleneck here.
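The batching point can be sketched with Python's `selectors` module, which wraps epoll/kqueue/poll: the kernel queues readiness events while the process is off-CPU, and one wait call then drains them all in a single batch rather than waking the application once per packet. The three socket pairs here are just a stand-in for many client connections.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

# Three connected socket pairs stand in for three client connections.
pairs = [socket.socketpair() for _ in range(3)]
for recv_side, send_side in pairs:
    recv_side.setblocking(False)
    sel.register(recv_side, selectors.EVENT_READ)
    send_side.sendall(b"ping")  # make every socket readable before we wait

# One wait call returns every ready socket at once: a batch of events,
# not one wakeup (and context switch) per packet.
events = sel.select(timeout=1)
print(len(events))  # 3

for key, _ in events:
    key.fileobj.recv(4)
```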

Furthermore, nobody is using the original unix kernel now; it has been well redesigned a few times, and this includes Linux too (except, perhaps, for the "well" part).
Mikemk
Member
Posts: 409
Joined: Sat Oct 22, 2011 12:27 pm

Re: Scalability

Post by Mikemk »

@bluemoon, I interpreted the article's "unix" to mean any unix clone, which includes Linux.
Programming is 80% Math, 20% Grammar, and 10% Creativity <--- Do not make fun of my joke!
If you're new, check this out.