Will A.I. Take Over The World!

Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Will A.I. Take Over The World!

Post by Brendan »

Hi,

I've heard that scientists have managed to simulate half of a mouse's brain using a massive super computer. From this we can do some rough estimations...

A mouse's brain is about 0.4 grams. Half a mouse's brain is about 0.2 grams. An average human's brain is around 1500 grams. This implies that (if weight alone is a reasonable indicator) to simulate a human's brain you'd need around 7500 times the processing power of simulating half a mouse's brain; or about 7500 massive super computers.

We all (should) know that twice as many processors doesn't mean twice as much processing - there are scalability problems, interconnects, etc. It'd be much more reasonable to assume you'd need 10000 massive super computers to get the amount of processing needed. Then, if you assume that the software being used for the mouse brain was only "good" and could be improved by a factor of 10 (which I think is unlikely, but why not), that gets us down to 1000 massive super computers to simulate a human brain.

But; simulating an entire human brain would be overkill. It's fair to assume that about half of a real human's brain is wasted on things like where they left their car keys, what they ate for breakfast, remembering their spouse's birthday, breathing, etc. You'd probably be able to replace a human with half a simulated human brain. This means we're only looking at 500 massive super computers to replace one human.

Of course the "massive supercomputer" was actually an IBM Blue Gene with 4096 processors (running at 700 MHz) and 1 TiB of RAM. If we need 500 of these then we'd be looking for a computer with 2048000 CPUs and 500 TiB of RAM (definitely not the average notebook you'd get from Walmart).

If Moore's law continues to be a reasonable guide (twice as many transistors per year), and we start from this year with 16 CPUs and 8 GiB of RAM; then we might be able to expect "2048000 CPU" systems in about 17 years. Unfortunately this isn't a correct interpretation of Moore's law - twice as many transistors doesn't equate to twice as much processing power. A more realistic guide would be "twice as much processing power every 3 years". In that case we're looking at 51 years before we get enough processing power to simulate a human brain in "commodity" computers.
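If anyone wants to play with the assumptions, here's the whole estimate as a quick Python sketch - every number in it is just a guess from this post, nothing more authoritative:

Code: Select all

# Rough sanity check of the estimates above; all figures are guesses, not measurements.
import math

mouse_brain_g = 0.4
human_brain_g = 1500.0

# One "massive super computer" handled half a mouse brain (0.2 g).
supercomputers_by_weight = human_brain_g / (mouse_brain_g / 2)                 # ~7500
supercomputers_with_scaling_losses = 10000                                     # poor scalability
supercomputers_with_better_software = supercomputers_with_scaling_losses / 10  # 1000
supercomputers_for_half_a_brain = supercomputers_with_better_software / 2      # 500

cpus_needed = supercomputers_for_half_a_brain * 4096     # ~2048000 CPUs
ram_needed_tib = supercomputers_for_half_a_brain * 1     # ~500 TiB

# Doublings needed to get from a 16-CPU system to that many CPUs.
doublings = math.log2(cpus_needed / 16)                  # ~17
years_if_doubling_yearly = doublings * 1                 # ~17 years
years_if_doubling_every_3_years = doublings * 3          # ~51 years

print(cpus_needed, ram_needed_tib,
      round(years_if_doubling_yearly), round(years_if_doubling_every_3_years))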

However; no sane person thinks transistor counts or processing power can continue to increase indefinitely. Even Gordon Moore has stated (in 2005) that it can't continue forever. There are bound to be physical limits (e.g. the speed of electrons/light), and the closer we get to those limits the harder further improvement is going to be. The thing is we've already reached the "maximum clock speed" limit - only a few CPU manufacturers have managed to get up to 6 GHz, and for a practical CPU (with good yield, etc) the limit is closer to 4 GHz. You'll notice that Intel managed to get close to 4 GHz with the Pentium 4 (about 10 years ago) and hasn't been able to improve speed since. The next big problem is heat.

To reduce heat the normal approach is to reduce clock speed. For example, rather than having 100 CPUs at 1 GHz it's better to have 400 CPUs at 500 MHz and do (in theory) twice as much processing for the same amount of heat. That's why the Blue Gene CPUs were only running at 700 MHz. To pack enough processing power into a small enough size, you'd probably need to reduce CPU speed by a factor of 4 (e.g. down to about 200 MHz) and use 4 times as many CPUs (e.g. about 8000000 CPUs); and then use water cooling and/or refrigeration to remove the heat quickly enough. Maybe we will have enough processing power to replace a human with A.I. in about 50 years (and they'd be about the size of a refrigerator, be twice as noisy and generate around 100 times more heat).
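As a rough check of that trade-off, here's the assumption spelled out in Python - treating per-CPU heat as scaling with the square of clock speed, which is a simplification (real power depends on voltage scaling, leakage, etc):

Code: Select all

# Check the "400 CPUs at half the clock = twice the work for the same heat" claim,
# under the assumed (simplified) rule that per-CPU heat scales with frequency squared.

def throughput(cpus, ghz):
    return cpus * ghz            # "work per second", very loosely

def heat(cpus, ghz):
    return cpus * ghz ** 2       # assumption: per-CPU power ~ f^2

print(throughput(100, 1.0), heat(100, 1.0))   # 100.0 units of work, 100.0 units of heat
print(throughput(400, 0.5), heat(400, 0.5))   # 200.0 units of work, 100.0 units of heat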

So... it does seem plausible! BUT...

... it ignores the laws of supply & demand.

Some jobs are easy - for example, A.I. and robotics have already replaced a lot of the mundane/repetitive work in factories. As A.I. gets more advanced it's reasonable to assume it will replace more human workers, starting from other mundane/repetitive work and gradually moving up to more complicated work. While this is happening the demand for human workers will decrease, and therefore the cost of human workers will decrease. Eventually we'd reach a point where (for complex work) the cost of human workers is less than the cost of adequate A.I. If you take this to the extreme, you're going to have an oversupply of bored humans that are willing to work for free just for the challenge, where the cost of A.I. (including initial purchase, training, maintenance and power consumed) simply can't be justified.

The end result isn't A.I. taking over the world. The end result is that maybe in 50 years time (and probably more like 100 years) A.I. will take away all the boring work that humans don't want to do anyway.

I'd also expect a decrease in the cost of goods and services, and a decrease in the time humans spend working (e.g. maybe for humans "full time work" will be 20 hours per week instead of 38). I'd also expect a lot more online jobs (where people don't go to work, but work at home instead). Combined with an increase in online shopping we'll see a huge reduction in things like road traffic (and a downturn in things like car manufacturing, and an increase in personal transport - moped, push bike, Segway). I'd also expect a huge increase in entertainment (more movies, more computer games and more porn), simply because people have more time to spend.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Will A.I. Take Over The World!

Post by Rusky »

A human brain is about 100 billion neurons at maybe 50 Hz (making up about half a million cortical columns in the neocortex). It uses about 30 W. Instead of trying to simulate a brain with a bunch of networked von Neumann processors, it's possible to greatly improve on a human brain's capacity with specialized hardware.

Neurons are very large, slow, and low-power compared to state-of-the-art electronic hardware. I would guess that the supercomputers of the future will be large neural networks, far more capable than humans, that do things like predicting the weather, researching and analyzing enormous amounts of data, predicting businesses/markets/governments/societies, etc.
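Putting those figures side by side (a crude Python calculation that treats every neuron as firing at that average rate):

Code: Select all

# Rough figures implied by the numbers above - all approximate.
neurons = 100e9
firing_rate_hz = 50.0
power_w = 30.0

spikes_per_second = neurons * firing_rate_hz     # ~5e12 spike events per second
joules_per_spike = power_w / spikes_per_second   # ~6e-12 J, i.e. a few picojoules

print(spikes_per_second, joules_per_spike)

A few picojoules per spike is far less than what simulating a spike costs on a general-purpose CPU once you count the instructions and memory traffic involved, which is why specialized hardware looks so attractive.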
gravaera
Member
Posts: 737
Joined: Tue Jun 02, 2009 4:35 pm
Location: Supporting the cause: Use \tabs to indent code. NOT \x20 spaces.

Re: Will A.I. Take Over The World!

Post by gravaera »

So basically we'll be able to play video games without controllers soon? Like attach a kind of headgear that detects brainwaves and plugs into your spine, and your character is controlled by brainwaves?

I like where this is going 8)
17:56 < sortie> Paging is called paging because you need to draw it on pages in your notebook to succeed at it.
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany

Re: Will A.I. Take Over The World!

Post by Solar »

gravaera wrote:So basically we'll be able to play video games without controllers soon? Like attach a kind of headgear that detects brainwaves and plugs into your spine, and your character is controlled by brainwaves?
This is being done today already (in medical research, for people who have lost sight, limbs, or control of limbs). Moving a cursor by thinking is possible today, even "seeing" Geordi La Forge-style can be done.
Every good solution is obvious once you've found it.
turdus
Member
Posts: 496
Joined: Tue Feb 08, 2011 1:58 pm

Re: Will A.I. Take Over The World!

Post by turdus »

I couldn't follow everything you wrote, Brendan; you make assumptions too easily. If you're counting supercomputers, why do you apply Moore's law to a commercial computer, for example?

Besides, the number of CPUs does not solve the problem at all. We'll need a new concept to create real AI. All we can do (now and in the near future) is simulate what a real brain would do. And like every simulation, it's limited to test cases and scenarios, while a real-world brain is not.

What I'm trying to say is that an AI could act like a real brain 99.9999999999% of the time, but there will always be a case where it fails that the real brain (using a not-yet-understood fractal algorithm) can solve with ease. A good example of that is a sense of humour, which seems easy but is in fact one of the most complicated things to code.
bluemoon
Member
Posts: 1761
Joined: Wed Dec 01, 2010 3:41 am
Location: Hong Kong

Re: Will A.I. Take Over The World!

Post by bluemoon »

To simulate a human brain you need to add lots of bugs to the A.I. code, and Nature takes care of the rest. :mrgreen:
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: Will A.I. Take Over The World!

Post by DavidCooper »

A few things to consider:-

How much of the brain is used for information storage rather than processing? Think how small a 32GB microSD card is and how much data it can hold. 2TB SD cards are in development, so they're really able to pack the stuff into a small space now. What is the memory capacity of the human mind?

How much extra material is required to create a piece of brain that carries out a simple function by way of a trained neural net rather than using a tightly-designed unit which does the same job using the most simple structure? How much space can you save by using a processor and software stored in very compact memory before you begin to lose out in terms of relative calculation speed?

How much of the size of human brains is simply related to the size of humans rather than being directly related to intelligence? Whales have huge brains but are believed to be about as bright as cattle, while parrots, despite having small brains, have in some individual cases been judged to be about as bright as four-year-old human children.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Will A.I. Take Over The World!

Post by Rusky »

The brain doesn't store information the same way an SD card does. It doesn't even really separate memory and processing.

The entire point of simulating a brain would be opposed to any kind of "tight design" with simpler structures, because then you don't get any of the advantages of a brain.

Intelligence is related less to overall brain size and more to the proportion of the brain dedicated to the neocortex, which is much, much higher in humans than in whales.
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: Will A.I. Take Over The World!

Post by DavidCooper »

Rusky wrote:The brain doesn't store information the same way an SD card does. It doesn't even really separate memory and processing.

The entire point of simulating a brain would be opposed to any kind of "tight design" with simpler structures, because then you don't get any of the advantages of a brain.
This isn't about simulating the human brain - it's about matching its intelligence. A great mass of knowledge can probably be stored more efficiently using hard disks or flash memory. You said earlier that a human brain has 100 billion neurons, so how much capacity does that provide for memory compared with 100GB of storage? What's your best guess as to a ratio?
Intelligence is related less to overall brain size and more to the proportion of the brain dedicated to the neocortex, which is much, much higher in humans than in whales.
It would be interesting to see the stats on parrots. We also need to know how much of the brain is necessary for things like vision and motor control. Birds have fantastic vision, although they do fly into windows, so there may be a lack-of-processing issue there. Then again, I've seen sparrows fly at full speed through a mesh fence where the gaps were only a little bigger than the bodies of the birds - they had to fold their wings right in. Hoverflies have fantastic flight control too - I've watched them zip in and out between the spokes of a slowly-rotating bicycle wheel (with the bike upside down during a repair).

You really need to ask what all these intelligence functions are that require so much brain furniture to perform them. Split the whole thing up into distinct units so that you don't just mix everything up into a complex morass. You need a unit to take visual data and convert it into information in a database: a 4D simulation where objects are stored as codes representing what the items are, co-ordinates stating where they are located, plus other data to handle orientation, etc. Intelligence only comes into play when you start simulating events with that data to predict the future or calculate a future action, and even then most of it will be simple physics simulation. Similarly with hearing, you have automated systems converting the stream of sound into data, breaking the wave apart into its component sounds, and if there is speech to be interpreted, that will be handled there too before any intelligence is applied to the content of the speech. By the time intelligence is applied, you're working with a stripped-down version of the external world in which you're already at the level of words, grammar and the kinds of data that fill computer game databases. This data can be stored extremely efficiently in flash memory or on a hard disk.
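To make that concrete, here's roughly the sort of record I have in mind for each object in that 4D simulation - a Python sketch, with field names that are purely my illustration:

Code: Select all

# One entry in the "stripped down world" database described above.
# The fields are illustrative guesses, not any real system's schema.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class WorldObject:
    concept_code: int                        # what the item is (e.g. the code for "cup")
    position: Tuple[float, float, float]     # where it is (x, y, z)
    time: float                              # when it was observed (the 4th dimension)
    orientation: Tuple[float, float, float]  # which way it's facing
    extra: Optional[dict] = None             # colour, size, whatever else matters

# e.g. a cup on the table, seen just now:
cup = WorldObject(concept_code=1042, position=(0.3, 1.1, 0.8),
                  time=0.0, orientation=(0.0, 0.0, 0.0), extra={"colour": "red"})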

The real issues with intelligence relate to how you process that data, and all the voodoo has already been done by simple (though possibly bulky) systems which convert the outside world into simple data that can be crunched. Now start to think about how you apply your intelligence to a problem. Do you solve it in parallel with multiple parts of the problem being solved simultaneously before all those parts of the solution are brought together to handle the problem? Or do you go through it sequentially? I reckon you do it sequentially, though you will later automate the process and may end up doing parts of it in parallel. Think about how you write a computer program: you do one little bit of it at a time, building up something that carries out a much more complex set of actions that create the whole package - if you try to write two parts of it simultaneously, you mess them both up.

I reckon human level A.I. could run on a bog-ordinary netbook and absolutely thrash humans at thinking. The most processor-intensive bits are likely to be in processing the input data and converting it into the right form for the intelligent part of the system to work with, though I suspect that the algorithms being used at the moment in self-driving cars are a very long way from being optimised, so it's not clear how much processing time would be required. I'd like to use a camera which has a variety of ways of sending out data to cut down the amount of processing that needs to be done, so it would send out different data streams representing different resolutions. The first level of the analysis should be done with perhaps 64 large pixels covering the entire field of view, missing out the step of having to add lots of smaller pixels together. The next level might use 1024 pixels, and anything that can be recognised at that level of detail needn't be looked at again at the next level of detail at all (or at least not as a priority, unless it's something that's likely to need further analysis). A preprocessor could also merge the information coming in from two cameras to create distance information tags to go with different parts of the data, splitting it into different sets to make identifying image content faster. None of that is intelligence, although it's clearly all going to be a vital part of an A.I. system for controlling a robot with the capabilities of a human.
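Here's a rough sketch of that coarse-to-fine idea (Python with numpy, with plain averaging standing in for whatever the camera or preprocessor would really do):

Code: Select all

# Coarse-to-fine analysis: look at ~64 big pixels first, then ~1024, and only go
# finer for regions that still need it. Plain average-pooling stands in for the camera.
import numpy as np

def downsample(image, out_h, out_w):
    """Average-pool a greyscale image down to out_h x out_w large 'pixels'."""
    h, w = image.shape
    bh, bw = h // out_h, w // out_w
    trimmed = image[:out_h * bh, :out_w * bw]
    return trimmed.reshape(out_h, bh, out_w, bw).mean(axis=(1, 3))

frame = np.random.rand(480, 640)      # stand-in for one greyscale camera frame

coarse = downsample(frame, 8, 8)      # ~64 large pixels: the first, cheapest look
medium = downsample(frame, 32, 32)    # ~1024 pixels: only analysed where needed
# Regions already recognised at the coarse level can be skipped (or deprioritised)
# at the finer levels, so most of the full-resolution frame never gets touched.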

I don't have time to write a book covering all this, but given that everyone here reckons they're geniuses, they should be able to get their heads round a very simple overview of how the system would work. My point is, don't mix different functionality together into a great slodge of stuff that you can't understand - break it down into distinct components and make your arguments about those.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Will A.I. Take Over The World!

Post by Rusky »

DavidCooper wrote:A great mass of knowledge can probably be stored more efficiently using hard disks or flash memory. You said earlier that a human brain has 100 billion neurons, so how much capacity does that provide for memory compared with 100GB of storage? What's your best guess as to a ratio?
Information can be stored more efficiently (not to mention deterministically) in traditional digital devices. That doesn't mean it's usable by any kind of intelligent system while in that state.
DavidCooper wrote:Intelligence only comes into play when you start simulating events with that data to predict the future or calculate a future action, and even then most of it will be simple physics simulation.
Completely backwards. Brains are so efficient because they don't simulate anything. People play catch by pattern-matching, not by studying kinematics. This is why humans can walk and speak and listen, with "only" a "morass" of neurons, and do it far better than robots, which have only barely attained the power necessary to do things your way.
DavidCooper wrote:I reckon human level A.I. could run on a bog-ordinary netbook and absolutely thrash humans at thinking.
It's clear that super-human AI could run on something the size and power of a netbook, but it's also very clear that it could not run on anything remotely similar to a netbook without emulating the brain and its massively parallel and non-deterministic pattern-matching.
DavidCooper wrote:My point is, don't mix different functionality together into a great slodge of stuff that you can't understand
Intelligence is hierarchical- it starts at the lowest level of recognizing patterns from, for example, the retina or the eardrum. Then it works its way upward, recognizing more and more complex patterns like faces, text, speech, etc. That doesn't mean you can engineer it by gluing together algorithms.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Will A.I. Take Over The World!

Post by Brendan »

DavidCooper wrote:
Rusky wrote:The brain doesn't store information the same way an SD card does. It doesn't even really separate memory and processing.

The entire point of simulating a brain would be opposed to any kind of "tight design" with simpler structures, because then you don't get any of the advantages of a brain.
This isn't about simulating the human brain - it's about matching its intelligence. A great mass of knowledge can probably be stored more efficiently using hard disks or flash memory. You said earlier that a human brain has 100 billion neurons, so how much capacity does that provide for memory compared with 100GB of storage? What's your best guess as to a ratio?
If a neuron's output ranges from "fully off" to "fully on" in extremely tiny increments, how many bits do you need to (adequately) represent the output of a neuron? If a 32-bit value is precise enough, then 100 billion neurons would need about 3200 billion bits, or about 400 GB (or about 373 GiB). Of course this would need to be 400 GB of RAM - trying to update the state of 100 billion neurons stored on disk would be insanely slow.
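For what it's worth, here's that arithmetic in a few lines of Python (same assumption - 32 bits per neuron, nothing more):

Code: Select all

# Memory needed just to hold the output state of every neuron.
neurons = 100e9
bits_per_neuron = 32                      # assumed precision

total_bits = neurons * bits_per_neuron    # 3.2e12 bits
total_gb = total_bits / 8 / 1e9           # 400 GB (decimal gigabytes)
total_gib = total_bits / 8 / 2**30        # ~372.5 GiB

print(total_gb, total_gib)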

Maybe with just 40 GB of RAM you could have a machine that is smart enough to store excess information externally and remember how to get that information again (as there's no point storing something if you can't remember where you stored it). Of course the penalty is latency - for example; if you ask a stupid machine to add a pair of numbers together, do you want to wait for half an hour while the stupid machine tries to comprehend the "addition" wikipedia page?

Of course this got me thinking about my original post - for the "half a mouse brain" experiment, I'm not even sure if they managed to do it in real-time. For all I know it could've taken 100 hours of processing just to simulate 1 minute of "mouse brain" activity. In that case, all my rough estimates would be severely wrong.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
MasterLee
Member
Posts: 90
Joined: Fri Mar 13, 2009 8:51 am

Re: Will A.I. Take Over The World!

Post by MasterLee »

Brendan wrote:A mouse's brain is about 0.4 grams. Half a mouse's brain is about 0.2 grams. An average human's brain is around 1500 grams. This implies that (if weight alone is a reasonable indicator) to simulate a human's brain you'd need around 7500 times the processing power of simulating half a mouse's brain; or about 7500 massive super computers.
...
Maybe we will have enough processing power to replace a human with A.I. in about 50 years (and they'd be about the size of a refrigerator, be twice as noisy and generate around 100 times more heat).
Wikipedia wrote:By 2005 the first single cellular model was completed. The first artificial cellular neocortical column of 10,000 cells was built by 2008. By July 2011 a cellular mesocircuit of 100 neocortical columns with a million cells in total was built. A cellular rat brain is planned for 2014 with 100 mesocircuits totalling a hundred million cells. Finally a cellular human brain is predicted possible by 2023 equivalent to 1000 rat brains with a total of a hundred billion cells.
So if they are right, we'll have simulated brains on supercomputers in 2023 and on your mobile phone around 2040.
50₰
turdus
Member
Posts: 496
Joined: Tue Feb 08, 2011 1:58 pm

Re: Will A.I. Take Over The World!

Post by turdus »

Brendan wrote:If a neuron's output ranges from "fully off" to "fully on" in extremely tiny increments, how many bits do you need to (adequately) represent the output of a neuron?
No one can say. You're mistaken about how neurons store information: it's not digital, it's stored in some unknown, fractal way. For example, if you destroy 10% of 400 GB of RAM, then 10% of the information is lost. On the other hand, if you damage 10% of the brain, all the information will still be available.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Will A.I. Take Over The World!

Post by Rusky »

turdus wrote:
Brendan wrote:If a neuron's output ranges from "fully off" to "fully on" in extremely tiny increments, how many bits do you need to (adequately) represent the output of a neuron?
No one can say. You're mistaken about how neurons store information: it's not digital, it's stored in some unknown, fractal way. For example, if you destroy 10% of 400 GB of RAM, then 10% of the information is lost. On the other hand, if you damage 10% of the brain, all the information will still be available.
That's got nothing to do with neurons being on and off, which very well could be represented as some number of bits. The information would be a level above that, yes, but that's irrelevant.
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: Will A.I. Take Over The World!

Post by DavidCooper »

Rusky wrote:Information can be stored more efficiently (not to mention deterministically) in traditional digital devices. That doesn't mean it's usable by any kind of intelligent system while in that state.
I don't know which is more efficient in terms of packing data in, but it strikes me that you could hold an awful lot of video in a tiny amount of space (picture a stack of 2TB SD cards), and a hell of a lot more event memories stripped down to mere concept codes, from which the scene can be regenerated very inaccurately but sufficiently well to do the job. That seems to be how we store event memories: only getting the essentials right, and even then not being entirely sure that they are correct. We don't store visual data in such a way as to recreate all the pixels accurately, but instead just keep a framework version of the action which we rebuild in our heads, much like reading a book and visualising it as if we're watching a film. Likewise, you could store an astronomical amount of knowledge in a form more like text (though structured in a form that makes the processing easy), and if the data is stored in forms that fit in with the way the processing is to be done (which they obviously will be), then accessing and processing that data will be highly efficient. With flash memory you also have rapid access to any part of it without seek-time delays.

We also know that most people forget almost everything that happens to them on a daily basis, just holding on to the unusual things and merged versions of multiple events. Some individuals however appear to be able to remember everything that's happened to them in their lives to the point that if you ask them what they were doing ten years ago on a particular date and time, they'll tell you, and tell you what they were wearing and what the weather was like. This suggests that most of us aren't using more than a tiny fraction of the available capacity. On the other hand, my grandfather (a Biblical scholar and multilinguist) knew a fellow academic who had a photographic memory and could remember every word of everything he had ever read (while reading at speed). At 40 years old he had a mental breakdown and the ability to add to the collection was gone: he appeared to have run out of storage space. Sadly he can't be studied now, but there may be similar individuals out there who could.
DavidCooper wrote:Intelligence only comes into play when you start simulating events with that data to predict the future or calculate a future action, and even then most of it will be simple physics simulation.
Completely backwards. Brains are so efficient because they don't simulate anything. People play catch by pattern-matching, not by studying kinematics. This is why humans can walk and speak and listen, with "only" a "morass" of neurons, and do it far better than robots, which have only barely attained the power necessary to do things your way.
What I said was not incorrect in that way (and further was fully correct in its context). If you are going to attempt a stunt on your mountain bike, you simulate it in your mind first to check whether you think you can make it or not. When we solve problems we simulate events in our minds too - a crow can simulate in its head the idea of dropping stones into a tall tube of water to raise the level sufficiently to be able to reach the food item floating on the top, and it then carries out the action. I can simulate in my mind a cat colliding in the air with a water balloon or a car with a mound of jelly. I would imagine that you can do these things too - this ability is extremely important.
DavidCooper wrote:I reckon human level A.I. could run on a bog-ordinary netbook and absolutely thrash humans at thinking.
It's clear that super-human AI could run on something the size and power of a netbook, but it's also very clear that it could not run on anything remotely similar to a netbook without emulating the brain and its massively parallel and non-deterministic pattern-matching.
It doesn't need to do most of the pattern matching - it can get input in simple forms such as ordinary text and then analyse it at far higher speeds than humans. It seems likely to me that it might take something on the order of a hundred times as long for it to understand a piece of text as a search engine would take just to read and index it.
DavidCooper wrote:My point is, don't mix different functionality together into a great slodge of stuff that you can't understand
Intelligence is hierarchical- it starts at the lowest level of recognizing patterns from, for example, the retina or the eardrum. Then it works its way upward, recognizing more and more complex patterns like faces, text, speech, etc. That doesn't mean you can engineer it by gluing together algorithms.
It absolutely does mean you can glue together algorithms to create the whole system, but most of the steps along the way are not intelligence. Is a blind person less intelligent than a sighted one? By some definitions, perhaps, but most of us would say no. Intelligence of the kind we normally count as intelligence (bright vs. stupid) happens at the concept level where ideas are represented by codes or symbols, and it manifests itself as mathematical and logical calculations on that data. If I want to access the food in the tube, I need to get the food nearer to the top. If there was more water in the tube, the food would be higher up and could be reached. If stones are added to the water, that will have the same effect as having more water in the tube. If I drop stones in the water, the food will rise nearer to the top and I'll be able to reach it. That is how intelligence happens, though the depth of thinking is also important as a more intelligent system can handle more complex chains of if-then consequences where there are multiple causes and obstructions.
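To show how mechanical that kind of chain is, here's a toy Python sketch of it - the rules and symbols are purely my illustration, and a real system would obviously need far richer representations:

Code: Select all

# A toy forward-chaining version of the crow's reasoning: keep applying if-then
# rules to the known facts until no new conclusions appear.

rules = [
    ({"drop stones in water"}, "water level rises"),
    ({"water level rises"}, "food nearer the top"),
    ({"food nearer the top"}, "food can be reached"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"drop stones in water"}, rules))
# -> includes "food can be reached": the chain of consequences the crow works out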
Last edited by DavidCooper on Tue Jan 17, 2012 7:24 pm, edited 2 times in total.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming