Brendan wrote: You can't fix something that learns by itself. You won't even be able to figure out which RAM locations are being used to store what. It'd be like trying to "fix" a person that stutters by isolating the neurons that are responsible for the stuttering and modifying just those neurons.
The system will point you to all the data it's stored - it isn't like human brains where if you ask us we don't have a clue where our own memories have been put or what form they're in. There could be difficulty tracking down a rare bug of course, but the system will be intelligent enough to review its own code hunting for it.
When A.I. cars are first introduced (the early models with the teething problems) there will naturally be a lot of media attention. Any accident involving an A.I. car will be all over the news, and A.I. cars will be banned by politicians six months after the same politicians allowed them. I'm not saying A.I. cars can't be better than humans; I'm saying that it won't matter whether A.I. cars are better than humans or not, due to the way bad publicity works.
So long as you have a clear improvement in safety, people will see the sense in no longer allowing human drivers. There are so many bad drivers out there causing death by dangerous driving that there will be a pressure to ban human drivers outright as soon as the economics of safe self-driving cars is right.
DavidCooper wrote: Sure, and "a few hundred supercomputers of processing" will be so cheap and so small that people will receive them free in their boxes of breakfast cereal. More likely is that it will add $500,000 to the price of a $30,000 sedan and will halve the passenger space. Of course theft will be a massive problem too.
Like a hoverfly needs a few hundred supercomputers to pull off all the stunts that it is capable of. The algorithms they're using at the moment in self-driving cars are clearly a long way from being optimal.
a pair of cheap webcams will provide better vision than many human drivers have
Note: A pair of cheap webcams gets you a field of view of about 70 degrees with depth perception. You'd need about 8 pairs to avoid blind spots.
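The "8 pairs" figure can be sanity-checked with back-of-the-envelope arithmetic. This is a sketch under stated assumptions: a usable stereo field of view of 70 degrees per pair, and (for the 8-pair case) pairs spaced 45 degrees apart so adjacent fields overlap rather than merely touching:

```python
import math

STEREO_FOV_DEG = 70   # assumed usable stereo field of view per webcam pair
FULL_CIRCLE = 360

# Bare minimum pairs for 360-degree coverage, fields just touching
min_pairs = math.ceil(FULL_CIRCLE / STEREO_FOV_DEG)

# Spacing the pairs 45 degrees apart gives 70 - 45 = 25 degrees of
# overlap between neighbours, so there are no blind seams
spacing_deg = 45
pairs_with_overlap = FULL_CIRCLE // spacing_deg

print(min_pairs, pairs_with_overlap)  # 6 8
```

So six pairs is the theoretical floor and eight gives a comfortable overlap margin, which matches the estimate above.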
I agree that you'd want more than just two webcams, but note that I said "than many human drivers". A one-eyed driver doesn't have the depth perception, and the only eyesight test that's done in Britain for drivers is to ask them to read a number plate which is near enough for a webcam to resolve - they don't check for tunnel vision.
You're right - rather than being sued by individual car owners and victims, the car manufacturer will be sued by teams of highly paid lawyers working for insurance companies. I'm sure that will make car manufacturers feel better about being liable for "driver error".
The owner of the car should take out the insurance - the actual risk is proportional to how much they use the car, and it's their use of the car that creates the risk. I'm not certain on that point, though. If it's the car manufacturer that has to pay, insurance won't be necessary at all: it'll be cheaper for them just to pick up the bill directly and cut out the middle man so that he doesn't take a cut.
More likely is that the car manufacturer will have to add another $250,000 to the price of their $530,000 sedans to cover risk/payouts (to avoid bankruptcy). The cost of insurance depends on the chance of needing the insurance *and* the cost of damages, so fewer accidents would reduce the cost of insurance while higher damages (due to the much higher cost of the vehicles involved in the crashes) would increase it; after both adjustments consumers will end up paying more to insure their car, not less (and that's only insuring against collisions, ignoring the cost of insuring a $780,000 car against theft).
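The two opposing adjustments can be made concrete with a toy expected-cost calculation. All of the accident rates and payout figures below are made-up illustrative assumptions, not actuarial data:

```python
# Toy model: a fair premium is roughly accident probability * average payout.
# Every number here is invented purely for illustration.

def fair_premium(p_accident_per_year, avg_payout):
    return p_accident_per_year * avg_payout

# Human-driven $30,000 sedan: higher accident rate, cheap vehicle.
human = fair_premium(0.05, 40_000)    # 2000.0 per year

# Self-driving $780,000 sedan: ten times fewer accidents, but each
# crash involves a far more expensive vehicle.
ai = fair_premium(0.005, 800_000)     # 4000.0 per year

print(human, ai)  # 2000.0 4000.0 - the safer car still costs more to insure
```

The point is only that halving (or even decimating) the accident rate doesn't guarantee a cheaper premium if the per-crash damages grow faster.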
They wouldn't release the cars unless they're safe enough for the economics to work, but that shouldn't require them to be much safer than human drivers. If the car costs more because the insurance costs are built in, owners will get that back by not having to take out insurance themselves. In any case, real insurance costs today should be a tiny fraction of what they actually are: people fleece the system by staging accidents and claiming fortunes for faked injuries which are hard for doctors to disprove. Self-driving cars will eliminate that kind of fraud.
No. It'd track the trajectory of nearby objects (including small children) and determine how quickly each of these objects could change direction and get in the path of the car. It'd be guaranteed accident-proof. The downside is that you'd probably need fences along footpaths/pavements/sidewalks so that the cars would be able to move at a decent speed.
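The "guaranteed accident-proof" policy described above can be sketched as a worst-case reachability check: cap the car's speed so that it could always stop before any nearby object could possibly reach its path. This is a simplified one-dimensional sketch with assumed numbers (braking deceleration, sprint speed, time horizon), ignoring reaction latency, road surface, and geometry:

```python
import math

BRAKE_DECEL = 8.0   # m/s^2, assumed hard-braking deceleration on dry tarmac

def worst_case_clear_distance(obj_distance_m, obj_max_speed, horizon_s):
    """Distance guaranteed free of the object over the horizon, assuming
    it could sprint straight toward the car's path at obj_max_speed."""
    return max(0.0, obj_distance_m - obj_max_speed * horizon_s)

def max_safe_speed(clear_distance_m):
    """Highest speed from which the car can fully stop within the
    guaranteed clear distance, from v^2 = 2*a*d."""
    return math.sqrt(2 * BRAKE_DECEL * clear_distance_m)

# A child 10 m from the car's path who could sprint at 4 m/s,
# checked over a 2-second horizon:
clear = worst_case_clear_distance(10.0, 4.0, 2.0)  # 2.0 m guaranteed clear
v = max_safe_speed(clear)                          # ~5.7 m/s, about 20 km/h
print(round(v, 1))  # 5.7
```

Note how quickly the guaranteed-safe speed collapses toward a crawl as pedestrians get close, which is exactly why the fences along footpaths come into the argument.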
And then you have a huge problem letting people cross the road. You'd have to have gaps for them where the cars would slow to a crawl and the distance between these gaps would have to be reasonable, so the cars would forever be speeding up and slowing down. The key thing when driving past a child is to judge whether they look as if they're likely to run out in front of you or not, and human drivers are often able to tell. Still, you could add a little bit of A.I. to your system to do that kind of thing as soon as it's possible without needing to wait for everything else to be ready.
It wouldn't start out as a "beginner" that crashes into things until it learns how to avoid collisions, like something intelligent would.
An A.I. driven system wouldn't start out as a beginner either - it would be tested in all manner of simulated situations first.
For my solution, the car would drop its speed so much that it never has to choose between victims.
Sounds slow - people travelling in cars aren't going to accept having to slow to a crawl every time there's someone by the road that could step out in front of them. The child might suddenly run away from the paedophile onto the road, and I know where I'd like the car to swerve.
How did your cheap webcams suddenly get so good that they can do facial recognition from a few hundred meters away and "know" who its potential victims are? How did your car manage to avoid being banned when everyone found out it makes grossly unethical judgements about the worth of individual people?
It wouldn't have to do facial recognition. A.I. will be tracking everyone in built up areas in order to eliminate crime and will inform the car of who it's passing so that it knows exactly who to run down if it has to make a horrible choice of that kind.
To be honest, I think you're suffering a "grass is always greener" delusion. Reality can't limit your unfounded optimism until the technology actually exists; and because the technology will never exist your fantasy will continue to expand without constraints. Eventually you'll get annoyed that the technology doesn't exist and start blaming people for failing to invent it. In the long term you'll probably become a very unstable individual - maybe someone who sends death threats to researchers and government officials before finally attempting a murder-suicide.
I won't need to blame anyone but myself - I can make this happen without needing anyone else, because once the A.I. is up and running, it will improve itself and it will take over by dint of making the world safer in a multiplicity of ways - people will demand it.