Will A.I. Take Over The World!

All off-topic discussions go here. Everything from the funny thing your cat did to your favorite TV shows. Non-programming computer questions are OK too.
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: Will A.I. Take Over The World!

Post by DavidCooper »

berkus wrote:
DavidCooper wrote:The most processor-intensive bits are likely to be in processing the input data and converting it into the right form for the intelligent part of the system to work with, though I suspect that the algorithms being used at the moment in self-driving cars are a very long way from being optimised, so it's not clear how much processing time would be required. I'd like to use a camera which has a variety of ways of sending out data to hack down the amount of processing that needs to be done, so it would send out different data streams representing different resolutions.
You're quite right, although I wouldn't guess and just read about it.
I've stored a copy of that, but I'm not going to read it at the moment as I don't want to be influenced by the way other people have done it. That needn't stop anyone else though.
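
To make the resolution-streams idea concrete, here's a minimal sketch of the kind of thing I mean, assuming the camera hands us a plain grayscale frame; the sizes and names are only illustrative, not any real camera's API:

```c
#include <stdio.h>

/* Halve a w x h grayscale frame by 2x2 averaging, writing a
 * (w/2) x (h/2) frame into dst. Each call produces the next,
 * coarser "data stream". */
static void downsample(const unsigned char *src, int w, int h,
                       unsigned char *dst)
{
    for (int y = 0; y < h / 2; y++) {
        for (int x = 0; x < w / 2; x++) {
            int sum = src[(2*y)   * w + 2*x] + src[(2*y)   * w + 2*x + 1]
                    + src[(2*y+1) * w + 2*x] + src[(2*y+1) * w + 2*x + 1];
            dst[y * (w / 2) + x] = (unsigned char)(sum / 4);
        }
    }
}

int main(void)
{
    /* An 8x8 "frame" standing in for camera input. */
    unsigned char frame[64], half[16], quarter[4];
    for (int i = 0; i < 64; i++)
        frame[i] = (unsigned char)(i * 4);

    downsample(frame, 8, 8, half);    /* 4x4 stream */
    downsample(half, 4, 4, quarter);  /* 2x2 stream */

    /* The coarsest stream is 16x less data than the full frame. */
    printf("coarse pixel [0][0] = %d\n", quarter[0]);
    return 0;
}
```

The intelligent part of the system could start on the coarsest stream and only ask for the finer ones where something looks interesting, so most of the time it never touches the full-resolution data at all.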
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Will A.I. Take Over The World!

Post by Rusky »

DavidCooper wrote:I can simulate in my mind a cat colliding in the air with a water balloon or a car with a mound of jelly. I would imagine that you can do these things too - this ability is extremely important.
You're not really simulating those things; whatever your definition of the word, you don't come anywhere near simulating things like kinematics, cloth or hair physics, other people's brains, etc.
DavidCooper wrote:Intelligence of the kind we normally count as intelligence (bright vs. stupid) happens at the concept level where ideas are represented by codes or symbols, and it manifests itself as mathematical and logical calculations on that data.
You were intelligent before you knew any math or logic. Your logic comes from patterns in your brain that have evolved both over human history and the history of your brain, not from any kind of pre-built algorithms.

This is the biggest reason you're wrong. This is why Noam Chomsky is wrong. This is why the traditional method of building AI doesn't work. If you build in an algorithm, it can't adapt nearly as well or as efficiently as a brain.

Yes, brains have evolved things like the visual cortex, but there cannot be anything anywhere near a deterministic "logic" system, like the one you propose, built into the brain. Instead, anything beyond the "hardware" level of e.g. your retina's cells recognizing visual borders must come out of something more flexible, not from gluing algorithms together.

This doesn't mean a synthetic intelligence couldn't receive information in "higher-level" forms like encoded text or spatial volumes. It also doesn't mean that that kind of information would be any better or worse than "lower-level" forms. It depends on what it's for.
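
To be clear about what I mean by the "hardware" level, here's a minimal sketch of border detection as pure local contrast, loosely analogous to a centre/surround retinal cell; the tiny image and the contrast rule are invented for illustration:

```c
#include <stdio.h>
#include <stdlib.h>

#define W 6
#define H 6

/* A crude "retina": each output cell responds to local contrast,
 * computed as the centre pixel against the average of its four
 * neighbours. Uniform regions give zero; borders give a response. */
int main(void)
{
    /* Left half dark (0), right half bright (9): one vertical border. */
    int img[H][W], edge[H][W] = {{0}};
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            img[y][x] = (x < W / 2) ? 0 : 9;

    for (int y = 1; y < H - 1; y++) {
        for (int x = 1; x < W - 1; x++) {
            int surround = img[y-1][x] + img[y+1][x]
                         + img[y][x-1] + img[y][x+1];
            edge[y][x] = abs(4 * img[y][x] - surround);
        }
    }

    /* Non-zero responses line up along the dark/bright boundary. */
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++)
            printf("%2d", edge[y][x]);
        printf("\n");
    }
    return 0;
}
```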
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Will A.I. Take Over The World!

Post by Brendan »

Hi,
Rusky wrote:This is the biggest reason you're wrong. This is why Noam Chomsky is wrong. This is why the traditional method of building AI doesn't work. If you build in an algorithm, it can't adapt nearly as well or as efficiently as a brain.
I'm not sure I agree! :)

The biggest reason why David Cooper is wrong is that he believes A.I. is actually desirable. You can't have intelligence without fallibility; and the last thing humans need is machines that screw things up, especially if the machines are "smarter" than humans (and therefore make it impossible for humans to determine when the machine has screwed things up).

There are basically 3 cases:
  • Cases where it's possible to have a system that follows rules that guarantee the result is correct. Here "no intelligence" is far superior to both human and artificial intelligence.
  • Cases where it's not possible to guarantee the result is correct, and the correct result matters. Here A.I. would be a serious problem, and humans will quickly learn they can't trust A.I. (in the same way that humans don't really trust each other, but without the way humans will accept/tolerate the mistakes other humans make).
  • Cases where it's not possible to guarantee the result is correct, and the correct result doesn't matter. Here A.I. is useful, but it's limited to things like predicting the weather (nobody expects the weather to be right), doing quality control on a production line (a few false positives or false negatives are "acceptable"), etc.

Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
User avatar
Rusky
Member
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Will A.I. Take Over The World!

Post by Rusky »

Brendan wrote:You can't have intelligence without fallibility
This is pretty connected to what I was saying. He seems to believe in the possibility of well-defined, infallible intelligence, which is a technical impossibility (what I'm saying), and using fallible AI for what he wants wouldn't help (what you're saying).
Brendan wrote:There are basically 3 cases: [snip]
I strongly agree on the first and third case here: regular software is good at doing deterministic things very quickly, and AI can greatly improve over humans for things like weather prediction and quality control without any downsides.

I disagree on your second point though. If I'm reading it right, one example would be driving. It's impossible to guarantee perfect driving performance, and mistakes cost lives. However, AI could potentially bring massive improvements to self-driving car technology (with regard to both speed and accuracy). This one will be hard for people to accept because it might be hard to point a finger at exactly what went wrong, but as long as the AI causes fewer accidents than a human, it's a net gain.
Jvac
Member
Posts: 58
Joined: Fri Mar 11, 2011 9:51 pm
Location: Bronx, NY

Re: Will A.I. Take Over The World!

Post by Jvac »

turdus wrote:What I'm trying to say with this is that an AI could act like a real brain 99.9999999999% of the time, but there'll always be a case where it fails that the real brain (using a not-yet-understood fractal algorithm) could solve with ease. A good example of that is sense of humour, which seems easy but is in fact one of the most complicated things to code.
Not a chance in our lifetime! The reason for my opinion is that machines are unable to learn, to reason, and ultimately to have consciousness. Robots are more helpless than threatening. It took nature millions of years to develop our tools and our culture, so I think we are probably millions of years away from robots taking over the world. AI is possible in theory but impossible in practice. So I think that until we come up with some algorithm like the one turdus mentions, A.I. is limited in its usage.
"The best way to prepare for programming is to write programs, and
to study great programs that other people have written." - Bill Gates


Think beyond Windows ReactOS®
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Will A.I. Take Over The World!

Post by Brendan »

Hi,
Rusky wrote:I disagree on your second point though. If I'm reading it right, one example would be driving. [snip]
Humans are humans though. If there's an accident involving 2 cars driven by A.I., then you blame the A.I. If there's an accident involving a car driven by a human and a car driven by an A.I., then you blame the A.I. If there's an accident involving 2 cars driven by humans, then that's "normal" (accidents happen). A.I. will get 90% of the blame for 10% of the accidents.

A.I. would have to be at least 10 times better than humans before humans start to actually believe A.I. is equal; and then people are going to look at the price difference and won't buy A.I. cars anyway.

Now imagine you're a car manufacturer - would you want to pioneer this technology, given how hard it's going to be to market the cars after you've spent a huge amount of $$ on research and development? Are you going to want to be the first car manufacturer in history to be sued for damages caused by "driver error"? Why on earth would any car manufacturer want all the financial risks when they can make just as much profit doing what they've always done?

Now imagine a car that follows a system of rules. You can get maps and do pathfinding; and make it so that it takes into account things like traffic light information, weather conditions, traffic congestion, etc. You can have a sonar/radar system and track the position and trajectory of nearby objects and use that to limit speed. You can follow established rules for intersections, over-taking, etc. Best of all, you could guarantee that if there's an accident it wasn't the computer's fault. For a similar amount of research and development and a cheaper price tag for the final car, you could do a lot better than A.I. with no intelligence at all.
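
To make the sonar/radar part of that concrete, here's a minimal sketch of deterministic speed capping from tracked objects; the stopping-distance relation v^2 = 2ad does all the work, and the tracked objects, braking figure, and speed limit are invented for illustration:

```c
#include <math.h>
#include <stdio.h>

struct track {
    double distance_m;   /* range to the object, metres        */
    double closing_mps;  /* positive if it is approaching, m/s */
};

/* Deterministic rule, no intelligence involved: for each tracked
 * object, v = sqrt(2*a*d) is the fastest speed from which braking at
 * 'decel' still stops within the distance d; subtracting the closing
 * speed is a crude budget for the object's own approach. Take the
 * most pessimistic answer. */
static double speed_limit(const struct track *t, int n,
                          double decel, double limit)
{
    for (int i = 0; i < n; i++) {
        if (t[i].closing_mps <= 0.0)
            continue;  /* moving away: no constraint */
        double v = sqrt(2.0 * decel * t[i].distance_m) - t[i].closing_mps;
        if (v < 0.0)
            v = 0.0;
        if (v < limit)
            limit = v;
    }
    return limit;
}

int main(void)
{
    struct track nearby[] = {
        { 50.0,  5.0 },   /* car ahead, slowly getting closer */
        { 12.0, -3.0 },   /* cyclist pulling away             */
    };
    double v = speed_limit(nearby, 2, 4.0 /* m/s^2 */, 27.0 /* ~100 km/h */);
    printf("speed capped at %.1f m/s\n", v);
    return 0;
}
```

Every decision the rule makes can be replayed and audited after an accident, which is exactly what you can't do with a learned system.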


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance

Re: Will A.I. Take Over The World!

Post by Combuster »

turdus wrote:
Brendan wrote:If a neuron's output ranges from "fully off" to "fully on" in extremely tiny increments, how many bits do you need to (adequately) represent the output of a neuron?
No one can say. You're mistaken about how neurons store information; it's not digital, it's stored in an unknown, fractal way. For example, if you destroy 10% of 400 GB of RAM, then 10% of the information will be lost. On the other hand, if you damage 10% of the brain, all the information will still be available.
It's not an unknown - neurons connect to a large number of other neurons and when activated drop a certain amount of chemicals on each of the neurons at the receiving end. The receiving neuron is triggered whenever there's sufficient triggering chemicals and not enough inhibiting chemicals to counter the effect.
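
That mechanism maps almost directly onto the classic artificial-neuron abstraction: sum the weighted inputs, where inhibiting synapses carry negative weights, and fire when the total clears a threshold. A minimal sketch (the weights and threshold are made up for the example):

```c
#include <stdio.h>

/* One abstract neuron: inputs arrive with positive (triggering) or
 * negative (inhibiting) weights; the neuron fires when the weighted
 * "chemical" total exceeds its threshold. */
static int fires(const double *inputs, const double *weights, int n,
                 double threshold)
{
    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += inputs[i] * weights[i];
    return total > threshold;
}

int main(void)
{
    double in[3] = { 1.0, 1.0, 1.0 };
    double w[3]  = { 0.6, 0.7, -0.9 };        /* third synapse inhibits  */
    printf("fires: %d\n", fires(in, w, 3, 0.5));  /* 0.4 <= 0.5 -> 0 */

    in[2] = 0.0;                              /* inhibitor goes quiet    */
    printf("fires: %d\n", fires(in, w, 3, 0.5));  /* 1.3 >  0.5 -> 1 */
    return 0;
}
```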

As far as information goes, it is stored in an associative way. Each time your brain triggers over a concept, a number of related concepts activate as suggestions because of that. If the concept is relevant it will gain additional input and activate more related neurons.
The trick is that such relations are stored in numerous places. If a small part dies there is redundancy to rebuild the association based on the associations around it, just like a RAID 5, but you have to activate the region for that process to work.

The problem is that if you actually remove 10% of the brain you remove an entire cortical region, and you can take out your entire vision or personality as a result, or outright kill you if you try that with the autonomic system. I'm well aware that the brain can recover from such damage, but it'll take time and it pretty much sets you back that many years in development. Similarly, I don't trust that there will be no damage if you kill every tenth neuron, because there will be effects; the redundancy just might be enough to get out the necessary information to pass the test, but it will be more difficult.

As far as hard disks go, we do the same thing; if you lose 10% of the sectors, you might actually lose less than 1% of the information stored there, and the other 9% you can get back by reinstalling Windows and starting your BitTorrent client again. The effect is even smaller if you do RAID and backups, like your brain does. Even if your RAM loses power, only a fraction of its contents will actually be considered lost; the rest can be automagically recreated by restarting the computer when the mains comes back.
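
And the RAID 5 redundancy mechanism itself is tiny: the parity block is the XOR of the data blocks, so any single lost block is the XOR of everything that survived. A minimal sketch:

```c
#include <stdio.h>

int main(void)
{
    /* Three "data blocks" plus one parity block, as in a 4-disk RAID 5. */
    unsigned char d[3] = { 0x12, 0x34, 0x56 };
    unsigned char parity = d[0] ^ d[1] ^ d[2];

    /* Disk 1 dies: its contents come back as the XOR of the rest. */
    unsigned char rebuilt = d[0] ^ d[2] ^ parity;
    printf("lost 0x%02x, rebuilt 0x%02x\n", d[1], rebuilt);
    return 0;
}
```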

In other words, that's a severely biased statement unless proven with scientific papers.
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany

Re: Will A.I. Take Over The World!

Post by Solar »

Brendan wrote:Now imagine a car that follows a system of rules. You can get maps and do pathfinding; and make it so that it takes into account things like traffic light information, weather conditions, traffic congestion, etc. You can have a sonar/radar system and track the position and trajectory of nearby objects and use that to limit speed. You can follow established rules for intersections, over-taking, etc. Best of all, you could guarantee that if there's an accident it wasn't the computer's fault. For a similar amount of research and development and a cheaper price tag for the final car, you could do a lot better than A.I. with no intelligence at all.
Even better, it already has been done.
Every good solution is obvious once you've found it.
turdus
Member
Posts: 496
Joined: Tue Feb 08, 2011 1:58 pm

Re: Will A.I. Take Over The World!

Post by turdus »

Combuster wrote:It's not an unknown - neurons connect to a large number of other neurons and when activated drop a certain amount of chemicals on each of the neurons at the receiving end. The receiving neuron is triggered whenever there's sufficient triggering chemicals and not enough inhibiting chemicals to counter the effect.
Wrong about that; you made the same mistake as Rusky. Just because axons transfer electricity, it's tempting to say the brain is digital and stores info in a RAID-fashioned way. But that's bullshit. It's irrelevant how information is transmitted. The point is how it's interpreted, and we simply do not know that.

Read scientific papers, like this article from a few years ago:
http://www.psychologytoday.com/blog/the ... l-thoughts
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany

Re: Will A.I. Take Over The World!

Post by Solar »

turdus wrote:Wrong about that; you made the same mistake as Rusky. Just because axons transfer electricity, it's tempting to say the brain is digital and stores info in a RAID-fashioned way. But that's bullshit. It's irrelevant how information is transmitted.
Actually, Combuster didn't say what you read into it. His description of how neurons "fire" depending on activator / inhibitor chemicals is correct. And as for information storage, he used RAID only as a (very rough) analogy.
Every good solution is obvious once you've found it.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Will A.I. Take Over The World!

Post by Rusky »

Jvac wrote:Not a chance in our lifetime! [snip]
What is your brain, then? Your brain is constrained by the same laws of physics as machines, but machines have the significant advantage of being built by humans. Humans have already built simulated brains; they don't have to wait millions of years for random mutations and an environment that just happens to favor intelligence for AI to evolve.

On the car thing: I don't think a non-intelligent program has all that much more chance of being absolved of guilt; any software system that complicated will have bugs, and they will cause problems. In any case, what's stopping an AI system from being your required 10x better? Nobody's actually experimented with it so we don't know for sure, but it's enough of a possibility that it could be safer, cheaper, etc. that I don't think we can rule it out at this point.
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany

Re: Will A.I. Take Over The World!

Post by Solar »

Rusky wrote:Humans have already built simulated brains...
Nope. They have simulated minuscule parts of a brain.

Actually, they didn't even do that. They have simulated an abstraction of minuscule parts of a brain, because they didn't simulate the chemical environment (hormones, inhibitors, etc.); they abstracted it to a neuron firing / not firing, and they can't really say whether that isn't leaving out important parts.
...they don't have to wait millions of years for random mutations and an environment that just happens to favor intelligence for AI to evolve.
It took millions of years of massively parallel selection to come up with the concept of "human brain". Each of the billions of human brains on this planet has been configured by countless numbers of environmental inputs for the better part of two decades before an "adult" emerges - with a good portion of those being outright failures at being a judge, or a software engineer, or an aircraft pilot (or even a decent driver, or a decent human being to begin with).

As for whether the environment "favors" intelligence, that remains to be seen. For a time, it seemed as if the environment "favored" size. In the end, though, the Dinosaurs were wrong, you know?

We don't really understand which part of "the real thing" we could abstract away and still get something resembling "real" intelligence. On the one side, all we would get would be an expert system with a different name (incapable of decisions it has not been explicitly programmed for); on the other side, we might get the digital equivalent of a drooling idiot. You'd be successfully emulating a human brain, but it just might be that of a drooling idiot.

In the end, it is not about emulating "the real thing", it's about finding the correct abstractions. The last five decades in that field have been a failure, short and simple.
In any case, what's stopping an AI system from being your required 10x better? Nobody's actually experimented with it so we don't know for sure...
I'll tell you what's stopping them: They simply and utterly don't exist. Perhaps "yet", perhaps "ever", but certainly "not in the next 20 years". This whole discussion is highly hypothetical. We aren't "one step away" from achieving anything with A.I., we are still in elementary research.

I don't say that people shouldn't research them, and I don't say they might not actually make a breakthrough at some point. But to hop around today chanting "the great A.I.'s will come" strikes me as a bit hasty. Hum-hom.
Every good solution is obvious once you've found it.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Will A.I. Take Over The World!

Post by Rusky »

I definitely agree with pretty much everything you said. But the last five decades have been going at AI from the "gluing algorithms together" perspective. On the other hand, we know that "the real thing" is capable of much more than the current way of doing things.

It would be a good, definite step forward to start experimenting, and to discover more about what parts of the brain are necessary for intelligence; that's how we'll find the correct abstraction. Our process would have (has) a lot of advantages over evolution, and if nothing else we'll end up with a good handwriting recognition system. :)

I don't think it's hasty to predict that we'll have good AI, although it is silly to predict that it will do what DavidCooper thinks it will, or to try to pick a job that won't be displaced by it, or whatever.
turdus
Member
Posts: 496
Joined: Tue Feb 08, 2011 1:58 pm

Re: Will A.I. Take Over The World!

Post by turdus »

Solar wrote:
turdus wrote:Wrong about that; you made the same mistake as Rusky. [snip]
Actually, Combuster didn't say what you read into it. His description of how neurons "fire" depending on activator / inhibitor chemicals is correct. And as for information storage, he used RAID only as a (very rough) analogy.
He was talking about how neurons fire; sure, that's correct, but it tells you nothing about how information is stored, and that's the point. Therefore his (very rough) analogy is wrong. For example:
"If the concept is relevant it will gain additional input and activate more related neurons."
There are no "related neurons" at all; the whole brain stores the concept in a fractal.
Read the article I linked.
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance

Re: Will A.I. Take Over The World!

Post by Combuster »

There are no "related neurons" at all; the whole brain stores the concept in a fractal.
Did you know you just claimed there are no relations in a fractal? Or that you just claimed that neurons can't connect because they are forbidden to have a relationship? Your statement contradicts established facts. :^o

I already went through that article. It does not discuss anything that's closely related to what I actually stated in my post, and based on that comment I think you understood neither it nor my post.
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]