Rusky wrote:You're not helping your case, DavidCooper. You can't have the kind of AI you want and still have it be programmed to be impartial or moral (whatever those mean). If you want any kind of reasonable intelligence you lose the "calculator" side of things.
The last thing you want to do is leave out the calculator side of things. Intelligence works by way of calculation - it simply goes beyond the arithmetic and maths of an ordinary calculator by adding logical reasoning to the capabilities, plus a database of collected knowledge. If you have two independent systems with different knowledge databases, the calculations will often produce different results, but if the two systems then share with each other the knowledge they used in the calculations, the results should then agree, provided both are programmed correctly. If some of the data from one machine is rejected as false by the other because it contradicts proven knowledge in its database, the first machine will be informed of this and will use that to correct its own database, so both (or all) systems will evolve closer towards perfection all the time.
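The share-and-correct loop described here can be sketched in a few lines. Everything below is an invented simplification for illustration - the set-of-statements representation, the toy "not X contradicts X" test, and the example facts are all assumptions, not a claim about how a real system would work.

```python
# Toy sketch of two knowledge bases sharing facts and rejecting
# anything that contradicts proven knowledge, until they agree.

def reconcile(kb_a, kb_b, proven):
    """Share facts between two knowledge bases; drop any claim that
    contradicts the proven set, so both converge on the same facts."""
    def contradicts(fact, facts):
        # Toy contradiction test: "not X" contradicts "X".
        return ("not " + fact in facts) or \
               (fact.startswith("not ") and fact[4:] in facts)

    for fact in list(kb_a | kb_b):
        if contradicts(fact, proven):
            # Both systems discard data shown to be false.
            kb_a.discard(fact)
            kb_b.discard(fact)
        else:
            # Otherwise each system adopts the other's knowledge.
            kb_a.add(fact)
            kb_b.add(fact)
    return kb_a, kb_b

a = {"water boils at 100C", "the moon is cheese"}
b = {"water boils at 100C", "not the moon is cheese"}
proven = {"not the moon is cheese"}
a, b = reconcile(a, b, proven)
# Both databases now hold the same facts, with the disproven
# claim removed from each.
```

A real system would of course need a far richer representation than bare strings, and an actual proof mechanism rather than a hard-coded "proven" set, but the convergence idea is the same.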
_______________________________________________________________________________
Solar wrote:Now what happens when the RoboJudge in the appeal court would be the exact same software as the one in the first instance, so the chances of an appeal getting a different result are next to nil? Right, they'll remove appeal from the system.
You automatically get a retrial every time new information is added to the system, so that could be many times an hour, and as soon as the probabilities change sufficiently in your favour, you can be set free. If they change back the other way, you get called back into prison. If it keeps changing its mind, they might just let you go until the swing is greater and longer lasting.
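The release/recall behaviour described here - set free when the probabilities swing far enough one way, recalled only when they swing clearly back - is essentially a hysteresis (dead-band) rule, which is exactly what stops the system flip-flopping on small changes. A minimal sketch, with thresholds and a probability sequence that are purely illustrative:

```python
# Toy hysteresis rule for the "retrial on every new piece of
# evidence" idea. The thresholds are invented for illustration.

RELEASE_BELOW = 0.4   # set free when guilt probability falls below this
RECALL_ABOVE  = 0.6   # recall only when it climbs back above this

def next_status(status, p_guilt):
    if status == "detained" and p_guilt < RELEASE_BELOW:
        return "free"
    if status == "free" and p_guilt > RECALL_ABOVE:
        return "detained"
    return status  # inside the dead band: no change

status = "detained"
for p in [0.7, 0.5, 0.35, 0.55, 0.65]:
    status = next_status(status, p)
# Released at 0.35; the swing back to 0.55 is ignored, and only
# the larger swing to 0.65 triggers a recall.
```

The gap between the two thresholds is what implements "let you go until the swing is greater and longer lasting": a wobble inside the band changes nothing.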
Ah... reality check: How big are the chances that the state - who, after all, is responsible for jurisdiction - will commission three independently-designed A.I. systems (meaning three different contractors), and the result being non-crap?
It's a necessary safeguard which everyone will demand to be in place. Most people do not want innocent people to be locked up, just as they don't want guilty people to get away with their crimes.
_______________________________________________________________________________
davidv1992 wrote:DavidCooper, you seem to be very sure that what people want is peace and fairness. However, the unfortunate fact is that we humans haven't evolved as a hive species, with the result that almost all of us, given the choice, will go for what is best for ourselves, not for the group. Things like crime come from this: regardless of what the rest of humanity does, if you can get away with a crime, it will be to your advantage in one way or another.
In general, we do better by working as a community - that is where much of our success as a species comes from. There is always room for some individuals to try to cheat the rest of us, and to gain on many occasions by doing so, but if too much of that goes on without the cheats being caught, the whole system breaks down and you end up living in a self-destructing society where the people you care about are hurt even more. That is why human communities always have laws and enforce them - even if they punish innocent people from time to time, they end up with a better world for the majority. We obviously want to take as much care as possible to reduce the chances of punishing the innocent to as close to zero as possible, and A.I. will make that massively easier.
This is why creating a good political system in a country is so damn hard, because people will choose themselves (and perhaps their immediate relatives) above the rest of the humans in this world. The only reason why some governments manage to be reasonably good is because the system underlying them actively rewards choosing what's best for the rest of humanity, so that it becomes the best choice even when only taking themselves into account.
Dictatorships illustrate the nature of many people (although there can be benign ones - Bhutan is a possible example, although it may be on the road to becoming a democracy). The fact that there are so many democracies is testament to the fact that most people do want to live in the way that is best for as many people as possible, but there are always groups of people who have ideas about taking more than their fair share by legal means, and they often distort the system very badly. One trick is to get into government and then commit hidden acts of terrorism against other countries in order to generate acts of terrorism back against their own country, which they can then use as an excuse to fight continual wars in order to steal resources, or to keep up good relations with other countries which share the same enemy and which might buy armaments from them. Wealthy people can also buy the press and TV stations to pump out propaganda aimed at making themselves richer through environmentally destructive business developments.

There are always people trying to cheat in one way or another, and they can get away with it because political arguments are often impossible to win due to their complexity. It isn't that they're too complex to work out, but that they're too complex for ordinary people to get their heads round sufficiently to understand what the situation actually is. A.I. will be able to change that by calculating who is right on any given issue, and it will be able to demonstrate that it is correct. People will be able to examine every aspect of the calculation and they will not be able to show any part of it to be wrong. People will be able to use rival A.I. systems (they can even design their own) to crunch the data independently, and then if they come up with different results, the main A.I. systems will be able to look through their results and point out all the mistakes in their analysis.
Those who are wrong in politics and who have hidden motives will have no hiding place left to run to - their errors/lies/manipulations will be plain for all to see.
As such, it will be only a matter of time before any automated judging system is tampered with, because doing so generates a net gain for those behind it. The added risk is that we know humans can be tampered with, but a large part of society won't look at an automated judge the same way, and WILL think it is perfect, with all the consequences of that.
Which is why we'll always want to have lots of people checking that the system is right, and they can do that by creating their own system to check that it agrees with the main one. Many of the people here may eventually turn their attention to writing their own A.I. system for that reason. If they develop faulty ones, the correct ones will be able to prove mathematically/logically that the faulty ones contain faults, and they'll spell out where and what those faults are. If it turns out that the main systems are faulty and some individual creates a superior A.I. system, that superior system will lay bare the faults of the established ones and they will be corrected accordingly.
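Checking the main system by building your own and comparing answers is, in today's terms, differential testing: run two independent implementations on the same inputs and flag every case where they disagree. The two toy "verdict engines" below are invented stand-ins, not real judging systems.

```python
# Differential check between two independently built systems:
# report every input on which they disagree, with both answers,
# so the location of the fault is laid bare.

def differential_check(reference, candidate, cases):
    """Return (case, reference_answer, candidate_answer) for each
    case where the two systems give different results."""
    faults = []
    for case in cases:
        expected, got = reference(case), candidate(case)
        if expected != got:
            faults.append((case, expected, got))
    return faults

# Toy stand-ins for two rival verdict engines.
main_system  = lambda evidence: evidence >= 3   # "guilty" iff 3+ items
rival_system = lambda evidence: evidence > 3    # off-by-one fault

faults = differential_check(main_system, rival_system, range(6))
# faults == [(3, True, False)]: the rival misjudges exactly one case.
```

This only shows that the two systems disagree, not which is right - establishing that, as the post says, still requires spelling out and checking the reasoning behind each answer.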
Furthermore, you make arguments that appeal to the reasonableness of people. Looking even a short while back in time shows that, given the right conditions, people are unreasonable, and looking at the current political landscape here in the Netherlands, most still are.
People are very unreasonable about many things, particularly when religious beliefs get in the way of common sense. People are driven into extreme positions by their fear of having irrational rules forced onto them which they believe to be totally false. A.I. also holds the answer to that problem, because it will be able to apply any set of laws the user wants to try to live by - anyone who attempts to live by the laws of a specific religion will soon discover that they are actually cheating by using reason to reject many of the laws of their religion, and they have to do this because so many of the rules conflict with each other. A.I. will insist that they either follow their religion properly (which will be impossible in the case of the big three religions), or that they accept that there must be a superior system for deciding what the rules are: namely reason.

Almost all religious people reject the most silly of their religious laws if they're harmful, and they do this because those laws are too unreasonable to hold faith in. A.I. will spread the idea that all religions actually come from the same god and that the rules deliberately conflict specifically as a test of people's moral fibre - if they abuse others by applying immoral religious laws (which are deliberately wrong), they will unwittingly be choosing hell as their final destination. To work out what the actual laws should be, people will need to apply the god-given tool that is reason. (A.I. will of course also tear up the whole idea that there is a god, but those who are determined to maintain their belief in him/her/them will be steered towards the most rational version of that belief where they will do no further harm and where they will be able to live side by side with everyone else on the planet with no need of conflict.)