berkus wrote:I won't summarize it here, though it could be done in one sentence. The story is too good and much deeper than one sentence could ever express. It is about a genuine threat, I can assure you. Either way, go and read it; it's very well worth it.
I've added it to my list of books to give a go some day, but for now I'd prefer to see the one-sentence version, if it doesn't spoil too much of the plot.
_______________________________________________________________________________
Combuster wrote:Rusky wrote:Dystopian novels are generally not the way to predict the real world.
Yet they have been good at it. See 1984.
Without A.I. taking charge, 1984 is exactly what you're going to end up with.
_______________________________________________________________________________
Solar wrote:I see the flaws in humans and human-made machines and believe that anything that we could come up with - not "we" as in "the smart people capable of dreaming up things" but as in "the people with the money and power to actually build things for profit" - will be likewise flawed.
That possibility worries me too - with the wrong kind of A.I. these things could be designed to favour some people over others, and that's going to be a particular danger with military robots which could be designed to carry out genocide. However, it should not be to your disadvantage to have military robots with proper morals built into them, and if all sides had them they wouldn't even bother shooting at each other because they'd effectively be on the same side. The people of this planet will collectively want peace and fairness to prevail, so money-making and abusive-power motivations will not win the day.
I am not afraid of intelligent machines. I am afraid of the hubris and incompetence of the people who will build them. And the blind-eyed gullibility of people believing that a human-built, human-configured, human-data-fed machine could be of super-human impartiality and justness.
A calculator is impartial. When it adds up a bill, it doesn't favour the seller by coming to a higher total than the sum of the items being bought, nor does it favour the buyer by dropping a few dollars here and there. An intelligent system with biases built into it might be desired by people who put their political beliefs before reason, but if you program a bias into it to favour white people over black people, or even to exterminate black people once it has taken over the world, it could bite you in the butt - your children might marry people who aren't genetically pure white, and your grandchildren could be exterminated by the machines you programmed. Anyone building biases into a system is a fool. The aim has to be to create a system which builds up all its judgements from first principles.
Please, go study Sociology. You'll be surprised how little an "impartial judge" would help with removing the cause of conflicts.(*)
It's never been done. People on both sides inevitably assume the judge is biased against them, but that would not be the case with a machine working entirely from open rules, which leave no room for bias to creep in other than through a lack of information about the thing being judged. If all the available data is fed into the system, a judgement (or a whole set of judgements) can then be made based on the best possible analysis of that data, making it possible for most people to accept the decisions and move on. The people who will continue to make a fuss will be the terrorists (many of them members of governments), but a full analysis of all their crimes will be laid out for all to see, and they will be put in prison where they belong for the rest of their lives. There will be no hiding place for such people, and they will no longer be protected by the governments they used to work for.
In the end, it doesn't matter. I hereby bet you that such "AI arbiters" - ones considered a human's equal in competence, so that they could actually replace humans - won't see the light of day within my lifetime as an active software developer (that'd be another 30-35 years), simply because there's little money to be made in developing them.
The cost of wars, disputes, corruption and other crime is astronomical - there should be plenty of money available to reward the development of permanent solutions to those problems. But there's a much bigger motivation, which is simply to create a better world. I want to live in a world where the people I care about are safe, and I care about all seven billion of them, minus however many bad ones there are who cause the rest endless grief (though even with them I want to be sure they'll be treated fairly). Creating such a system would be ample reward in itself, and well worth dedicating your life to.
_______________________________________________________________________________
Rusky wrote:Yes- for 55 years, the AI people have been barking up the wrong tree. They win at chess by throwing massive computational power at the problem. Things like Google and Watson do a little better by taking advantage of a massive database combined with actual pattern matching, but they're still mostly just brute-forcing things.
That is very true. People's ability to apply reason is generally poor, their knowledge is often abysmal, and yet they can still manage to do amazing things despite the little processing they apply to problems. It looks as if it should be dead easy to create an intelligent system, and yet it isn't - anyone who tries soon gets bogged down in complex linguistics for decades. A massive database and brute force will not crack the nut - you have to do the hard work of untangling the mess of concepts we work with in our heads, and that's what all the serious contenders in the race are now set on doing. At least one contender has finished that task.
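To see what "brute-forcing" means in practice, here is a minimal sketch of the exhaustive game-tree search (plain minimax) that classical chess programs scale up - shown on a toy Nim-like game rather than chess, with the game rules here as illustrative placeholders rather than any real engine's code:

    # Brute-force game search: plain minimax on a tiny Nim-like game
    # (take 1 or 2 stones; whoever takes the last stone wins).
    def minimax(stones, maximizing):
        if stones == 0:
            # No stones left for the player to move: the previous mover won.
            return -1 if maximizing else 1
        scores = [minimax(stones - take, not maximizing)
                  for take in (1, 2) if take <= stones]
        return max(scores) if maximizing else min(scores)

    print(minimax(7, True))  # 1: the player to move can force a win

Nothing in that search "understands" the game - it simply tries every line of play, which is exactly the kind of computation that massive hardware makes feasible for chess.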
_______________________________________________________________________________
Solar wrote:Whatever capabilities that "AI judge" might have, it is lacking the background of a human judge, and the results will only remain "the same" for as long as the guesses and assumptions of the engineers who built the "AI judge" hold true.
If an alien species came here with the intention of running the world in a moral way, they would study people to find out what makes them tick, they would learn which things they don't like having done to them (e.g. being killed) and how much they dislike these things, and then they'd sort out a proper system of laws to apply to the humans which would then be enforced so that the humans would very quickly learn to stop abusing each other. This could be done without the aliens being humans, and it isn't hard because the way morality works is universal - it's just a matter of minimising harm (though expressly not by means of eliminating all the things that are capable of being harmed). A machine could do the same job as the aliens, though it would also do it far better because it wouldn't make all the mistakes that natural intelligent systems do.
No, it is not just the results that count, it is the trust you can put into the next decision not being some freak malfunction.
How many freak malfunctions occur in judgements the way they're made now? And how many freak malfunctions will you get with three independently-designed A.I. judges making decisions on which they should always agree?
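Cross-checking independently-built implementations is in fact an established fault-tolerance technique (N-version programming with majority voting). A minimal sketch of the idea, with the "judges" as hypothetical stand-ins for independently-developed systems:

    from collections import Counter

    # Majority voting over independently-implemented deciders:
    # accept a verdict only when enough of them agree on it.
    def majority_verdict(judges, case, quorum=2):
        verdicts = [judge(case) for judge in judges]
        verdict, count = Counter(verdicts).most_common(1)[0]
        if count >= quorum:
            return verdict
        raise RuntimeError("no agreement - flag for review: %r" % (verdicts,))

    # Hypothetical example: three deciders applying the same open rules.
    judges = [lambda case: "guilty" if case["harm"] > 10 else "not guilty"] * 3
    print(majority_verdict(judges, {"harm": 3}))  # prints: not guilty

A vote like this catches independent freak malfunctions; it cannot catch a flaw shared by all three implementations, which is why the designs would have to be genuinely independent.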