Will A.I. Take Over The World!

Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany
Contact:

Re: Will A.I. Take Over The World!

Post by Solar »

turdus wrote:"If the concept is relevant it will gain additional input and activate more related neurons."
There are no "related neurons" at all; the whole brain stores the concept in a fractal.
Bull, sorry.
turdus wrote:Read the article I linked.
You really think an internet article trumps an advanced course and several years of undergraduate study in Biology (with psychology as a secondary subject)? :wink:
Every good solution is obvious once you've found it.
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: Will A.I. Take Over The World!

Post by DavidCooper »

Rusky wrote:
DavidCooper wrote:I can simulate in my mind a cat colliding in the air with a water balloon or a car with a mound of jelly. I would imagine that you can do these things too - this ability is extremely important.
You're not really simulating those things, but whatever your definition of the word, you don't come anywhere near simulating things like kinematics, cloth or hair physics, other people's brains, etc.
I didn't say we do perfect simulations, but we can do them very well and train ourselves to do them better in those areas which particularly interest us. Snooker players can see several shots ahead of the current position because they've trained themselves to be able to simulate the shots in their minds. A skier simulates the run in their mind before racing down it - this is particularly important in those events where they aren't allowed to ski the course in advance. It may be, of course, that you are particularly poor at simulating events in your head, so you don't believe that other people can do it.
DavidCooper wrote:Intelligence of the kind we normally count as intelligence (bright vs. stupid) happens at the concept level where ideas are represented by codes or symbols, and it manifests itself as mathematical and logical calculations on that data.
You were intelligent before you knew any math or logic. Your logic comes from patterns in your brain that have evolved both over human history and the history of your brain, not from any kind of pre-built algorithms.
Maths is mainly needed in an A.I. system for probability calculations - we aren't generally all that hot on judging probabilities, so we probably only do it extremely roughly and shallowly until trained to think things through more carefully. The fact that many tribes have languages with no number words beyond "one" and "many" illustrates that maths has not been a high priority in our evolution. Logical calculations too may be almost entirely absent in most stupid people, and yet they can still dress themselves and shovel food into their mouths. For the most part it's something you learn to do over time, though certain genetic differences may make it easier for some people to do than others. If you just allow an A.I. system to develop its own way of doing logical reasoning, you will end up with lots of different systems doing it differently and incorrectly, just like with people. That is why the system must be designed to do it correctly from the start so that its pronouncements don't add to the tons of mere babble that comes out of most natural intelligence systems.
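To make "probability calculations" concrete, here's a minimal sketch of the sort of arithmetic I mean - combining independent pieces of evidence with Bayes' rule in odds form (the prior and the likelihood ratios below are invented purely for illustration):
Code:
def combine_evidence(prior, likelihood_ratios):
    """Update a prior probability using independent likelihood ratios."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr                      # each piece of evidence scales the odds
    return odds / (1.0 + odds)

# a 10% prior plus three pieces of evidence, each 3x more likely
# if the hypothesis is true than if it is false:
print(combine_evidence(0.10, [3.0, 3.0, 3.0]))   # -> 0.75

Untrained humans guess at that kind of result; a machine simply computes it.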
This is the biggest reason you're wrong. This is why Noam Chomsky is wrong. This is why the traditional method of building AI doesn't work. If you build in an algorithm, it can't adapt nearly as well or as efficiently as a brain.
Like a calculator, you mean? People do arithmetic so much better than calculators because human brains are more adaptable and efficient? Is that why none of us find calculators useful? Have you ever stopped to look at your success rate at all the points where you've told me I'm wrong? Go back and have a look now to see if you've ever got any of them right. And here you're even saying that this is the biggest one!
Yes, brains have evolved things like the visual cortex, but there cannot be anything anywhere near a deterministic "logic" system, like the one you propose, built into the brain. Instead, anything beyond the "hardware" level of e.g. your retina's cells recognizing visual borders must come out of something more flexible - not gluing algorithms together.
You aren't reading what I'm saying properly - you're just making wild assumptions as you go along. The brain has evolved to provide the correct structures for logical reasoning to be built in, but the logical reasoning functions probably do have to be built up in early childhood through interaction with the outside world rather than being directly programmed by DNA. Some people by luck end up with high-functioning systems which enable them to solve Rubik's Cube, but most fall short.

Some may have a genetic advantage, as I said before, which is possibly why I have three close relatives who solved Rubik's Cube for themselves (this was back at the start of the '80s - one was an uncle [a mathematician/logician], another was a cousin [a son of another mathematician], and the third was my mother, although she had to be given quite a few hints on strategy), whereas only two people at my school (a secondary with a thousand people in it) managed to solve it (one in first year [me] and the other in sixth year) - everyone else gave up a few months afterwards and just learned some grindingly slow book method. Another possible explanation, though, is that it's largely down to self-belief: if you know that your older cousin has solved it, you are determined to do likewise. The cube is actually sufficiently easy that I think 20% of people could solve it for themselves if they weren't so quick to give up.

If you are designing an A.I. system, you would not leave any part of the intelligence side of it to chance - if you know how you solve problems, you will obviously make sure the same mechanisms are programmed into your A.I. system so that it will be able to solve those problems too. Most of the time we are dazzled by problems and simply can't find a way in, and that's why I initially spent two months making no inroads into solving Rubik's Cube. Then one morning I was watching a beginner trying to complete a single side and noticed that she put a corner into place using three moves where I would have done it using four. I instantly realised that if I took a corner out using my four moves and put it back using her three, I could preserve the top four corners while disrupting the ones underneath - a few hours later I had worked out how to apply that seven-move combination in different ways to get all the corners into the right places and correctly orientated. The rest was relatively easy, and I had the whole thing solved early that evening.

The big lesson I learned from Rubik's Cube, however, was that the bulk of the problem is simply getting your mind round it and working out how to split it into manageable components. That is the hardest thing for A.I. to do as well, but by looking at the way you solve problems you can find clues as to how A.I. can go about it. I'm not going to spell out any further detail on that - anyone who wants to create an A.I. system will want to do their own work on that so that they don't miss out on all the fun.
Last edited by DavidCooper on Wed Jan 18, 2012 4:34 pm, edited 1 time in total.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: Will A.I. Take Over The World!

Post by DavidCooper »

Brendan wrote:The biggest reason why David Cooper is wrong is that he believes A.I. is actually desirable. You can't have intelligence without fallibility; and the last thing humans need is machines that screw things up, especially if the machines are "smarter" than humans (and therefore make it impossible for humans to determine when the machine has screwed things up).
If the machines screw things up less than people do, we're better off. If multiple systems agree with each other despite being independently designed, and if they've shown themselves to be as intelligent as or more intelligent than people in all the areas relevant to a particular issue, they will do as good a job as or a better job than the best human minds would. In most situations the best human minds are unable to prove to the armies of sheep who get to vote on everything that they are right, so we end up going with what the biggest flock of sheep wants every time. With A.I., however, these intelligent systems will not just make pronouncements: they will show their working as well. The entire map of how they came to their conclusions will be made available for all to check, and a big flock of sheep which believes something is true which is demonstrably false will be shown in no uncertain terms why and where they are wrong.
There are basically 3 cases:
  • Cases where it's possible to have a system that follows rules that guarantee the result is correct. Here "no intelligence" is far superior to both human and artificial intelligence.
  • Cases where it's not possible to guarantee the result is correct, and the correct result matters. Here A.I. would be a serious problem, and humans will quickly learn they can't trust A.I. (in the same way that humans don't really trust each other, but without the way humans will accept/tolerate the mistakes other humans make).
  • Cases where it's not possible to guarantee the result is correct, and the correct result doesn't matter. Here A.I. is useful, but it's limited to things like predicting the weather (nobody expects the weather to be right), doing quality control on a production line (a few false positives or false negatives are "acceptable"), etc.
Case 1: Is there a mistake in your wording there? If you mean it the way it's stated, it's barking.

Case 2: The A.I. would recognise that it can't guarantee a correct result and put things in terms of probabilities. Its probabilities would be as good as or better than the ones that we could supply, so following the advice of the A.I. would be safer than just going with the best hunch of a troupe of monkeys.

Case 3: Again A.I. would give more accurate probabilities.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: Will A.I. Take Over The World!

Post by DavidCooper »

Rusky wrote:
Brendan wrote:You can't have intelligence without fallibility
This is pretty connected to what I was saying. He seems to believe in the possibility of well-defined, infallible intelligence, which is a technical impossibility (what I'm saying), ...
Technical impossibility maybe, but if multiple systems agree it isn't likely that they've all made exactly the same errors. It could happen, of course, just as a cow could suddenly drift upwards into the sky without being pulled back down by gravity (theoretically possible, but so unlikely that we can consider it to be impossible) - you can rerun the calculations many times and on many different, independently designed systems and get to the point where the odds of a shared error are similar to those of the flying cow.
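The back-of-envelope arithmetic is simple enough to show (the figures are invented, and the crucial assumption doing all the work is that independently designed systems really do err independently):
Code:
p = 0.01    # assumed chance that one system makes this particular error
N = 5       # number of independently designed systems run on the problem

print(p ** N)    # -> 1e-10: the chance all five agree on the same error

That's flying-cow territory.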
...and using fallible AI for what he wants wouldn't help (what you're saying).
Whereas it goes without saying that monkey voting will always produce better results.

_______________________________________________________________________________
Jvac wrote:The reason for my opinion is that machines are unable to learn, ...
Do you think machine learning has never been demonstrated? It has been done - we just don't yet have an advanced enough system to do the whole job as well as we do, but there's no barrier in the way of that beyond putting in the work required to get there.
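For anyone who doubts even the basic point, a toy learner fits in a dozen lines - here's a perceptron that is never told the rule for logical OR but picks it up from examples (a demonstration of the principle, nothing more):
Code:
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights from examples instead of being programmed with the rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out                 # adjust only when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]   # logical OR
w, b = train_perceptron(data)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
       for (x1, x2), _ in data])               # -> [0, 1, 1, 1]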
...to reason...
What's to stop them? You can take a syllogism, write a program based on it, feed in some data and watch it reason correctly. X is a kind of Y. Y is a kind of Z. Therefore X is a kind of Z. Now create some data of the form A is a kind of B and watch it generate some new data by applying simple reasoning. Feed in "a brambling is a kind of finch" and "a finch is a kind of bird", and out will pop a brand new piece of data which you didn't feed into the machine: a brambling is a kind of bird.
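That program really is just a few lines - a sketch in Python, deriving facts that were never fed in by chaining "is a kind of" pairs:
Code:
def derive(facts):
    """facts: set of (x, y) pairs meaning 'x is a kind of y'."""
    known = set(facts)
    changed = True
    while changed:                      # keep chaining until nothing new appears
        changed = False
        for (x, y) in list(known):
            for (y2, z) in list(known):
                if y == y2 and (x, z) not in known:
                    known.add((x, z))   # X is a kind of Y, Y is a kind of Z => X is a kind of Z
                    changed = True
    return known - set(facts)

print(derive({("brambling", "finch"), ("finch", "bird")}))
# -> {('brambling', 'bird')} - a new piece of data the machine was never given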
...and eventually to have consciousness.
Which it would need why?
It took nature millions of years to develop our tools and our culture, therefore I think we are probably millions of years away from robots taking over the world.
Just as it took millions of years for programmers to create a calculator that could match the arithmetical capabilities of humans.
AI is possible in theory but impossible in practice.
Some people sure do give up easily.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: Will A.I. Take Over The World!

Post by DavidCooper »

Brendan wrote:If there's an accident involving 2 cars driven by A.I. then you blame A.I. If there's an accident involving a car driven by a human and a car driven by an A.I., then you blame the A.I. If there's an accident involving 2 cars driven by humans then that's "normal" (accidents happen). A.I. will get 90% of the blame for 10% of the accidents.
The A.I. will give a full, precise account of what happened and all the decisions it made. If it made an error, a fault in the software will have been shown up and the system will need to be fixed. If there was no error, it will be demonstrated either that the human driver of the other car was to blame or that the accident was simply down to bad luck because improbable events coincided in the way that they inevitably do on occasion. A certain amount of risk is inevitable if cars are to move at sufficient speed to be useful.
A.I. would have to be at least 10 times better than humans before humans start to actually believe A.I. is equal; and then people are going to look at the price difference and won't buy A.I. cars anyway.
The cost of driving insurance will make it massively cheaper to let A.I. drive your car, and when the right algorithms have been worked out they'll be able to chuck the lidar and the bank of computers in the back - a pair of cheap webcams will provide better vision than many human drivers have, and a single chip may be all that's required to do all the processing.
Now imagine you're a car manufacturer - would you want to pioneer this technology, given how hard it's going to be to market the cars after you've spent a huge amount of $$ on research and development? Are you going to want to be the first car manufacturer in history to be sued for damages caused by "driver error"? Why on earth would any car manufacturer want all the financial risks when they can make just as much profit doing what they've always done?
Each machine would be insured just like a car driven by a human, but the insurance will be lower. If a fault in the program causes a crash and a death, the insurance will cover the cost just as happens with human drivers, but the cost of insurance will not go up for the machine-driven cars as it would be a freak error which would occur no more frequently in the future than it did in the past, and less often once the fault in the program has been corrected.
Now imagine a car that follows a system of rules. You can get maps and do pathfinding; and make it so that it takes into account things like traffic light information, weather conditions, traffic congestion, etc. You can have a sonar/radar system and track the position and trajectory of nearby objects and use that to limit speed. You can follow established rules for intersections, over-taking, etc. Best of all, you could guarantee that if there's an accident it wasn't the computer's fault. For a similar amount of research and development and a cheaper price tag for the final car, you could do a lot better than A.I. with no intelligence at all.
It would be inferior - it would be more likely to mow down any child that runs out onto the road in front of it because the child would be breaking its rules of combat. A human or A.I. driver would recognise the unpredictability of a specific child who doesn't look as if he/she is concentrating on the danger and would slow down. If it has to choose between running down a child who's run out onto the road or swerving onto the pavement [turn that word into "sidewalk" if you think in American English] to avoid that child and mowing down a known paedophile instead, it will swerve onto the pavement. Your solution is expensive and unsafe, but it may still be safer than allowing humans to drive, and might save money as a result. It may also be a useful step to go through before A.I. is up to the task, but it will be a very temporary step.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Will A.I. Take Over The World!

Post by Rusky »

I think we're done here.
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: Will A.I. Take Over The World!

Post by DavidCooper »

Rusky wrote:I think we're done here.
I'm not trying to upset you, or anyone else. The people who gain the most out of discussions like this are the ones who are wrong about things, provided of course that they take on board the fact that they are wrong: being shown to be wrong is absolutely the best thing that can happen to you in a discussion. I love it when it happens to me, and always thank the person who put me right, but that can only happen when I'm wrong, and it isn't likely to happen on this subject for the simple reason that I actually do know what I'm talking about. If I wasn't genuinely doing this A.I. stuff, I'd have tripped over on something a long time ago.

[There are in fact a few things I've said which are technically wrong - I've simplified a few things for reasons of economy, but if anyone ever picks up on any of those things I'm ready to jump into another level of depth to show how things would actually be done. One example of this is where I said you would have a preprocessor that turns sound into a string of words before passing that on to the system that analyses it for meaning, but there would actually be several levels of the process which interact in both directions with each other in order to eliminate many alternative interpretations of the data as early as possible so as to minimise the amount of data that needs to be sent onwards.]
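As a crude illustration of what "interacting in both directions" buys you (the candidate phrases and scores here are invented): the higher level feeds back and kills acoustic hypotheses that fit no plausible phrase before they're passed onwards, so downstream analysis sees far fewer alternatives:
Code:
# stand-ins for real acoustic hypotheses and a real language model
acoustic_candidates = [("recognise speech", 0.55), ("wreck a nice beach", 0.45)]
plausible_phrases = {"recognise speech"}

def prune_with_feedback(candidates, language_level):
    # the word level vetoes hypotheses the language level rules out,
    # minimising the amount of data that needs to be sent onwards
    return [(text, score) for (text, score) in candidates
            if text in language_level]

print(prune_with_feedback(acoustic_candidates, plausible_phrases))
# -> [('recognise speech', 0.55)]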
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
gravaera
Member
Posts: 737
Joined: Tue Jun 02, 2009 4:35 pm
Location: Supporting the cause: Use \tabs to indent code. NOT \x20 spaces.

Re: Will A.I. Take Over The World!

Post by gravaera »

I need to learn how to make these (very rough) analogies, seems like they're srsly pro :(
17:56 < sortie> Paging is called paging because you need to draw it on pages in your notebook to succeed at it.
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: Will A.I. Take Over The World!

Post by DavidCooper »

gravaera wrote:I need to learn how to make these (very rough) analogies, seems like they're srsly pro :(
I suppose you're right not to be impressed - all this work just to drag some geniuses kicking and screaming through a basic overview. I suppose I could be a fake, perhaps acting as a decoy to draw attention away from someone who's doing the project for real. Yes, I like that idea, but you can think what you like.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Will A.I. Take Over The World!

Post by Brendan »

Hi,
DavidCooper wrote:
Brendan wrote:If there's an accident involving 2 cars driven by A.I. then you blame A.I. If there's an accident involving a car driven by a human and a car driven by an A.I., then you blame the A.I. If there's an accident involving 2 cars driven by humans then that's "normal" (accidents happen). A.I. will get 90% of the blame for 10% of the accidents.
The A.I. will give a full, precise account of what happened and all the decisions it made. If it made an error, a fault in the software will have been shown up and the system will need to be fixed.
You can't fix something that learns by itself. You won't even be able to figure out which RAM locations are being used to store what. It'd be like trying to "fix" a person that stutters by isolating the neurons that are responsible for the stuttering and modifying just those neurons.
DavidCooper wrote:If there was no error, it will be demonstrated either that the human driver of the other car was to blame or that the accident was simply down to bad luck because improbable events coincided in the way that they inevitably do on occasion. A certain amount of risk is inevitable if cars are to move at sufficient speed to be useful.
When A.I. cars are first introduced (the early models with the teething problems) there will naturally be a lot of media attention involved. Any accident involving an A.I. car will be all over the news. A.I. cars will be banned by politicians 6 months after the same politicians allow it. I'm not saying A.I. cars can't be better than humans, I'm saying that it won't matter if A.I. cars are better than humans or not due to the way bad publicity works.
DavidCooper wrote:
A.I. would have to be at least 10 times better than humans before humans start to actually believe A.I. is equal; and then people are going to look at the price difference and won't buy A.I. cars anyway.
The cost of driving insurance will make it massively cheaper to let A.I. drive your car, and when the right algorithms have been worked out they'll be able to chuck the lidar and the bank of computers in the back - a pair of cheap webcams will provide better vision than many human drivers have, and a single chip may be all that's required to do all the processing.
Sure, and "a few hundred supercomputers of processing" will be so cheap and so small that people will receive them free in their boxes of breakfast cereal. More likely is that it will add $500,000 to the price of a $30,000 sedan and will halve the passenger space. Of course theft will be a massive problem too.

Note: A pair of cheap webcams gets you a field of view of about 70 degrees with depth perception. You'd need about 8 pairs to avoid blind spots.
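The coverage arithmetic, idealised (ignoring mounting positions and the stereo overlap needed within each pair):
Code:
import math

fov_per_pair = 70                        # degrees, per pair of webcams
print(math.ceil(360 / fov_per_pair))     # -> 6 pairs minimum for 360 degrees;
                                         #    8 leaves margin for overlap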
DavidCooper wrote:
Now imagine you're a car manufacturer - would you want to pioneer this technology, given how hard it's going to be to market the cars after you've spent a huge amount of $$ on research and development? Are you going to want to be the first car manufacturer in history to be sued for damages caused by "driver error"? Why on earth would any car manufacturer want all the financial risks when they can make just as much profit doing what they've always done?
Each machine would be insured just like a car driven by a human, but the insurance will be lower.
You're right - rather than being sued by individual car owners and victims, the car manufacturer will be sued by teams of highly paid lawyers working for insurance companies. I'm sure that will make car manufacturers feel better about being liable for "driver error".

More likely is that the car manufacturer will have to add another $250,000 to cover risk/payouts to the price of their $530,000 sedans (to avoid bankruptcy). The cost of insurance depends on the chance of needing the insurance *and* the cost of damages: fewer accidents would reduce the cost of insurance, and more expensive damages (due to the much higher cost of the vehicles involved in the crashes) would increase it. After both adjustments, consumers will end up paying more for insuring their car, not less (and that's only insuring against collisions, ignoring the cost of insuring a $780,000 car against theft).
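The structure of that argument as arithmetic (every figure below is invented purely to show the shape of it - a premium tracks the probability of a claim times the cost of a claim):
Code:
def premium(p_claim_per_year, avg_payout, loading=1.3):
    """Expected yearly payout plus the insurer's margin/overheads."""
    return p_claim_per_year * avg_payout * loading

print(premium(0.05, 30_000))     # human-driven $30k sedan      -> 1950.0
print(premium(0.005, 530_000))   # A.I. sedan: 10x fewer claims,
                                 # far dearer car               -> 3445.0

Fewer accidents, higher premium.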
DavidCooper wrote:
Now imagine a car that follows a system of rules. You can get maps and do pathfinding; and make it so that it takes into account things like traffic light information, weather conditions, traffic congestion, etc. You can have a sonar/radar system and track the position and trajectory of nearby objects and use that to limit speed. You can follow established rules for intersections, over-taking, etc. Best of all, you could guarantee that if there's an accident it wasn't the computer's fault. For a similar amount of research and development and a cheaper price tag for the final car, you could do a lot better than A.I. with no intelligence at all.
It would be inferior - it would be more likely to mow down any child that runs out onto the road in front of it because the child would be breaking its rules of combat.
No. It'd track the trajectory of nearby objects (including small children) and determine how quickly each of these objects could change direction and get in the path of the car. It'd be guaranteed accident-proof. The downside is that you'd probably need fences along footpaths/pavements/sidewalks so that the cars would be able to move at a decent speed.
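As a sketch of that rule (the braking figures are invented - a real system would use measured values): cap the speed so the car can always come to a complete stop within the distance to the nearest tracked object that could enter its path:
Code:
import math

def max_safe_speed(dist_to_hazard_m, brake_decel_ms2=6.0, reaction_s=0.1):
    """Largest v satisfying v*t_react + v^2/(2a) <= d (full stop before the hazard)."""
    a, t, d = brake_decel_ms2, reaction_s, dist_to_hazard_m
    return a * (-t + math.sqrt(t * t + 2.0 * d / a))

# a tracked child 30 m ahead who could step into the lane:
v = max_safe_speed(30.0)
print(round(v, 1), "m/s =", round(v * 3.6, 1), "km/h")   # -> 18.4 m/s = 66.2 km/h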

It wouldn't start out as a "beginner" that crashes into things until it learns how to avoid collisions, like something intelligent would.
DavidCooper wrote:A human or A.I. driver would recognise the unpredictability of a specific child who doesn't look as if he/she is concentrating on the danger and would slow down. If it has to choose between running down a child who's run out onto the road or swerving onto the pavement [turn that word into "sidewalk" if you think in American English] to avoid that child and mowing down a known paedophile instead, it will swerve onto the pavement. Your solution is expensive and unsafe, but it may still be safer than allowing humans to drive, and might save money as a result. It may also be a useful step to go through before A.I. is up to the task, but it will be a very temporary step.
For my solution, the car would drop its speed so much that it never has to choose between victims.

How did your cheap webcams suddenly get so good that they can do facial recognition from a few hundred meters away and "know" who its potential victims are? How did your car manage to avoid being banned when everyone found out it makes grossly unethical judgements about the worth of individual people?

To be honest, I think you're suffering a "grass is always greener" delusion. Reality can't limit your unfounded optimism until the technology actually exists; and because the technology will never exist your fantasy will continue to expand without constraints. Eventually you'll get annoyed that the technology doesn't exist and start blaming people for failing to invent it. In the long term you'll probably become a very unstable individual - maybe someone who sends death threats to researchers and government officials before finally attempting a murder-suicide. ;)


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Will A.I. Take Over The World!

Post by Rusky »

Not to side with the hopelessly optimistic egoist, but...
Brendan wrote:it won't matter if A.I. cars are better than humans or not due to the way bad publicity works.
AI cars already work and drive on real roads.
Brendan wrote:It wouldn't start out as a "beginner" that crashes into things until it learns how to avoid collisions, like something intelligent would.
Just because intelligence is "configured" by learning doesn't mean it has to be used in production during that process.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Will A.I. Take Over The World!

Post by Brendan »

Hi,
Rusky wrote:Not to side with the hopelessly optimistic egoist, but...
Brendan wrote:it won't matter if A.I. cars are better than humans or not due to the way bad publicity works.
AI cars already work and drive on real roads.
Do you have any references to back this up (other than some rare research projects that haven't/won't make it to mass production, where a human has to sit and monitor the A.I. and correct it "just in case")?
Rusky wrote:
Brendan wrote:It wouldn't start out as a "beginner" that crashes into things until it learns how to avoid collisions, like something intelligent would.
Just because intelligence is "configured" by learning doesn't mean it has to be used in production during that process.
So it's a "system that follows a fixed set of rules" (where the fixed set of rules may have been generated by A.I. at the factory) and not A.I. at all?


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: Will A.I. Take Over The World!

Post by DavidCooper »

Brendan wrote:You can't fix something that learns by itself. You won't even be able to figure out which RAM locations are being used to store what. It'd be like trying to "fix" a person that stutters by isolating the neurons that are responsible for the stuttering and modifying just those neurons.
The system will point you to all the data it has stored - it isn't like human brains, where if you ask us we don't have a clue where our own memories have been put or what form they're in. There could be difficulty tracking down a rare bug, of course, but the system will be intelligent enough to review its own code hunting for it.
When A.I. cars are first introduced (the early models with the teething problems) there will naturally be a lot of media attention involved. Any accident involving an A.I. car will be all over the news. A.I. cars will be banned by politicians 6 months after the same politicians allow it. I'm not saying A.I. cars can't be better than humans, I'm saying that it won't matter if A.I. cars are better than humans or not due to the way bad publicity works.
So long as you have a clear improvement in safety, people will see the sense in no longer allowing human drivers. There are so many bad drivers out there causing death by dangerous driving that there will be pressure to ban human drivers outright as soon as the economics of safe self-driving cars is right.
Brendan wrote:Sure, and "a few hundred supercomputers of processing" will be so cheap and so small that people will receive them free in their boxes of breakfast cereal. More likely is that it will add $500,000 to the price of a $30,000 sedan and will halve the passenger space. Of course theft will be a massive problem too.
Like a hoverfly needs a few hundred supercomputers to pull off all the stunts that it is capable of. The algorithms they're using at the moment in self-driving cars are clearly a long way from being optimal.
a pair of cheap webcams will provide better vision than many human drivers have
Note: A pair of cheap webcams gets you a field of view of about 70 degrees with depth perception. You'd need about 8 pairs to avoid blind spots.
I agree that you'd want more than just two webcams, but note that I said "than many human drivers". A one-eyed driver doesn't have the depth perception, and the only eyesight test that's done in Britain for drivers is to ask them to read a number plate which is near enough for a webcam to resolve - they don't check for tunnel vision.
You're right - rather than being sued by individual car owners and victims, the car manufacturer will be sued by teams of highly paid lawyers working for insurance companies. I'm sure that will make car manufacturers feel better about being liable for "driver error".
The owner of the car should take out the insurance - the actual risk is proportional to how much they use the car, and it is their use of the car that creates the risk. That's a point I'm not sure about, though - actually, if it's the car manufacturer that pays, insurance isn't going to be necessary at all, because it'll be cheaper for them to pick up the bill directly and cut out the middleman so that he doesn't take a cut.
More likely is that the car manufacturer will have to add another $250,000 to cover risk/payouts to the price of their $530,000 sedans (to avoid bankruptcy). The cost of insurance depends on the chance of needing the insurance *and* the cost of damages: fewer accidents would reduce the cost of insurance, and more expensive damages (due to the much higher cost of the vehicles involved in the crashes) would increase it. After both adjustments, consumers will end up paying more for insuring their car, not less (and that's only insuring against collisions, ignoring the cost of insuring a $780,000 car against theft).
They wouldn't release the cars unless they're safe enough for the economics to work, but that shouldn't require them to be much safer than human drivers. If the car costs more because the insurance costs are built in, they'll get that back through not having to be insured, but the real insurance costs today should be a tiny fraction of what they actually are because people are fleecing the system by staging accidents and claiming a fortune for faked injuries which are hard for doctors to disprove. Self-driving cars will eliminate that kind of fraud.
No. It'd track the trajectory of nearby objects (including small children) and determine how quickly each of these objects could change direction and get in the path of the car. It'd be guaranteed accident-proof. The downside is that you'd probably need fences along footpaths/pavements/sidewalks so that the cars would be able to move at a decent speed.
And then you have a huge problem letting people cross the road. You'd have to have gaps for them where the cars would slow to a crawl and the distance between these gaps would have to be reasonable, so the cars would forever be speeding up and slowing down. The key thing when driving past a child is to judge whether they look as if they're likely to run out in front of you or not, and human drivers are often able to tell. Still, you could add a little bit of A.I. to your system to do that kind of thing as soon as it's possible without needing to wait for everything else to be ready.
It wouldn't start out as a "beginner" that crashes into things until it learns how to avoid collisions, like something intelligent would.
An A.I. driven system wouldn't start out as a beginner either - it would be tested in all manner of simulated situations first.
For my solution, the car would drop its speed so much that it never has to choose between victims.
Sounds slow - people travelling in cars aren't going to accept having to slow to a crawl every time there's someone by the road that could step out in front of them. The child might suddenly run away from the paedophile onto the road, and I know where I'd like the car to swerve.
How did your cheap webcams suddenly get so good that they can do facial recognition from a few hundred meters away and "know" who it's potential victims are? How did your car manage to avoid being banned when everyone found out it makes grossly unethical judgements about the worth of individual people?
It wouldn't have to do facial recognition. A.I. will be tracking everyone in built up areas in order to eliminate crime and will inform the car of who it's passing so that it knows exactly who to run down if it has to make a horrible choice of that kind.
To be honest, I think you're suffering a "grass is always greener" delusion. Reality can't limit your unfounded optimism until the technology actually exists; and because the technology will never exist your fantasy will continue to expand without constraints. Eventually you'll get annoyed that the technology doesn't exist and start blaming people for failing to invent it. In the long term you'll probably become a very unstable individual - maybe someone who sends death threats to researchers and government officials before finally attempting a murder-suicide. ;)
I won't need to blame anyone but myself - I can make this happen without needing anyone else, because once the A.I. is up and running, it will improve itself and it will take over by dint of making the world safer in a multiplicity of ways - people will demand it.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany
Contact:

Re: Will A.I. Take Over The World!

Post by Solar »

Brendan wrote:Do you have any references to back this up (other than some rare research projects that haven't/won't make it to mass production, where a human has to sit and monitor the A.I. and correct it "just in case")?
Just in case you're referring to the EUREKA / Prometheus project, that wasn't A.I. at all.
Every good solution is obvious once you've found it.
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: Will A.I. Take Over The World!

Post by DavidCooper »

Brendan wrote:
Rusky wrote: AI cars already work and drive on real roads.
Do you have any references to back this up (other than some rare research projects that haven't/won't make it to mass production, where a human has to sit and monitor the A.I. and correct it "just in case")?
I think he's talking about the Google Car project, where a human does indeed sit monitoring it just in case, but reports I've seen say that they've got to the point where they don't need to step in to correct anything, and that the cars have been shown to be safer than human drivers already. I don't know whether that applies only to a restricted set of road types. (There's also a project with a rally car that is claimed to drive at close to race speed without a human driver, though I haven't seen a report on that that I'd trust either - just science items in the news. That's not directly relevant to the point, but it indicates where things are heading in terms of the ability of these systems.)
Rusky wrote:
Brendan wrote:It wouldn't start out as a "beginner" that crashes into things until it learns how to avoid collisions, like something intelligent would.
Just because intelligence is "configured" by learning doesn't mean it has to be used in production during that process.
So it's a "system that follows a fixed set of rules" (where the fixed set of rules may have been generated by A.I. at the factory) and not A.I. at all?
Rusky is right in what he's saying - it needn't learn anything once it's reached the point where it's safe enough to go out on the roads, but that doesn't stop it from learning more. The A.I. in these cars could learn from new, rare situations which haven't been thought through before, and what one car learns can be sent to every other car, so that they all learn from it right around the world within seconds.
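A sketch of that fleet-wide learning loop (the classes and the broadcast mechanism are invented placeholders for whatever infrastructure would really be used):
Code:
class FleetChannel:
    """Stand-in for a real broadcast network between the cars."""
    def __init__(self):
        self.cars = []
    def publish(self, lesson):
        for car in self.cars:           # every subscribed car gets the update
            car.merge(lesson)

class CarBrain:
    def __init__(self, channel):
        self.lessons = {}
        channel.cars.append(self)
    def merge(self, lesson):
        self.lessons[lesson["situation"]] = lesson["response"]

channel = FleetChannel()
fleet = [CarBrain(channel) for _ in range(3)]

# one car works out how to handle a rare situation and shares it:
channel.publish({"situation": "mattress on motorway",
                 "response": "brake smoothly, do not swerve"})
print(fleet[2].lessons)   # every car in the fleet now knows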
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming