
Re: Personal future thoughts

Posted: Fri Jan 13, 2012 9:43 am
by Rusky
Solar wrote:It happens. That's when you go into appeal. And because the jurisdiction knows that errors happen, you can go into appeal.

Now what happens when the RoboJudge in the appeal court would be the exact same software as the one in the first instance, so the chances of an appeal getting a different result are next to nil? Right, they'll remove appeal from the system.
Or, the higher level courts are run by humans. Or, the AI is used only to determine guilt, while a human decides the sentence. Or, any number of situations that take advantage of what AI can do really well without blindly handing the reins to a robot.

Re: Personal future thoughts

Posted: Fri Jan 13, 2012 4:06 pm
by DavidCooper
Rusky wrote:You're not helping your case, DavidCooper. You can't have the kind of AI you want and still have it be programmed to be impartial or moral (whatever those mean). If you want any kind of reasonable intelligence you lose the "calculator" side of things.
The last thing you want to do is leave out the calculator side of things. Intelligence works by way of calculation - it simply takes things beyond the arithmetic and maths of an ordinary calculator by adding logical reasoning to its capabilities, plus a database of collected knowledge. If you have two independent systems with different knowledge databases, the calculations will often produce different results, but if the two systems then share with each other the knowledge they used in the calculations, the results should agree if both are programmed correctly. If some of the data from one machine is rejected as false by the other because it contradicts proven knowledge in its database, the first machine will be informed of this and will use it to correct its own database, so both (or all) systems will evolve closer towards perfection all the time.
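The reconciliation idea above can be sketched as a toy program. Everything here is an illustrative assumption - the `KnowledgeBase` class, the string representation of "facts", and the naive "not X" contradiction test are all invented for the sketch, not anything proposed in the thread:

```python
# Toy sketch: two systems hold fact databases; shared claims that
# contradict a fact the peer regards as proven are rejected, and the
# sender corrects its own database, so the two converge.

class KnowledgeBase:
    def __init__(self, facts, proven):
        self.facts = set(facts)    # claims this system currently holds
        self.proven = set(proven)  # subset it regards as established

    def contradicts(self, claim):
        # Illustrative test: "not X" clashes with a proven "X", and vice versa.
        negation = claim[4:] if claim.startswith("not ") else "not " + claim
        return negation in self.proven

    def reconcile(self, other):
        # Drop anything the peer can refute, absorb anything unrefuted.
        for claim in list(self.facts):
            if other.contradicts(claim):
                self.facts.discard(claim)  # correct own database
        for claim in other.facts:
            if not self.contradicts(claim):
                self.facts.add(claim)      # absorb shared knowledge

a = KnowledgeBase({"water boils at 100C", "not birds fly"},
                  {"water boils at 100C"})
b = KnowledgeBase({"birds fly"}, {"birds fly"})
a.reconcile(b)
b.reconcile(a)
print(sorted(a.facts))      # the false claim "not birds fly" is gone
print(a.facts == b.facts)   # both databases have converged
```

In this sketch the false claim is dropped because the peer holds its negation as proven; a real system would of course need a far richer notion of contradiction and proof than string matching.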

_______________________________________________________________________________
Solar wrote:Now what happens when the RoboJudge in the appeal court would be the exact same software as the one in the first instance, so the chances of an appeal getting a different result are next to nil? Right, they'll remove appeal from the system.
You automatically get a retrial every time new information is added to the system, so that could be many times an hour, and as soon as the probabilities change sufficiently in your favour, you can be set free. If they change back the other way, you get called back into prison. If it keeps changing its mind, they might just let you go until the swing is greater and longer lasting.
Solar wrote:
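The "retrial on every new piece of information" idea amounts to continuously updating a guilt probability and comparing it to a threshold. A minimal sketch, where the prior, the likelihood ratios, and the 0.9 threshold are all invented numbers for illustration:

```python
# Hedged sketch of a continuous-retrial rule: each new piece of evidence
# updates a guilt probability via Bayes' rule in odds form, and custody
# status simply follows a threshold.

def bayes_update(p_guilt, likelihood_ratio):
    # likelihood_ratio = P(evidence | guilty) / P(evidence | innocent)
    odds = p_guilt / (1 - p_guilt) * likelihood_ratio
    return odds / (1 + odds)

THRESHOLD = 0.9  # assumed standard of proof for detention

p = 0.5  # neutral prior
for lr in [4.0, 3.0, 0.1]:  # two incriminating items, then one exculpatory
    p = bayes_update(p, lr)
    status = "detained" if p >= THRESHOLD else "released"
    print(f"p(guilt) = {p:.3f} -> {status}")
```

Note how the status flips back and forth as evidence arrives, which is exactly the oscillation the post describes ("if it keeps changing its mind, they might just let you go").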
Ah... reality check: How big are the chances that the state - who, after all, is responsible for jurisdiction - will commission three independently-designed A.I. systems (meaning three different contractors), and the result being non-crap?
It's a necessary safeguard which everyone will demand to be in place. Most people do not want innocent people to be locked up, just as they don't want guilty people to get away with their crimes.

_______________________________________________________________________________
davidv1992 wrote:DavidCooper, you seem to be very sure about the fact that what people want is peace and fairness. However, the unfortunate fact is that we humans haven't evolved as a hive species, with the result that almost all of us, given the choice, will go for what is best for ourselves, not for the group. Things like crime come from this: regardless of what the rest of humanity does, if you can get away with a crime, it will be to your advantage in one way or another.
In general, we do better by working as a community - that is where much of our success as a species comes from. There is always room for some individuals to try to cheat the rest of us, and to gain on many occasions by doing so, but if too much of that goes on without the cheats being caught, the whole system breaks down and you end up living in a self-destructing society where the people you care about are hurt even more. That is why human communities always have laws and enforce them - even if they punish innocent people from time to time, they end up with a better world for the majority. We obviously want to take as much care as possible to reduce the chances of punishing the innocent to near zero, and A.I. will make that massively easier.
This is why creating a good political system in a country is so damn hard, because people will choose themselves (and perhaps their immediate relatives) above the rest of the humans in this world. The only reason why some governments manage to be reasonably good is because the system underlying them actively rewards choosing what's best for the rest of humanity, so that it becomes the best choice even when taking only themselves into account.
Dictatorships illustrate the nature of many people (although there can be benign ones - Bhutan is a possible example, although it may be on the road to becoming a democracy). The fact that there are so many democracies is testament to the fact that most people do want to live in the way that is best for as many people as possible, but there are always groups of people who have ideas about taking more than their fair share by legal means, and they often distort the system very badly. One trick is to get into government and then commit hidden acts of terrorism against other countries in order to generate acts of terrorism back against their own country, which they can then use as an excuse to fight continual wars in order to steal resources, or to keep up good relations with other countries which share the same enemy and which might buy armaments from them. Wealthy people can also buy the press and TV stations to pump out propaganda aimed at making themselves richer through environmentally destructive business developments.

There are always people trying to cheat in one way or another, and they can get away with it because political arguments are often impossible to win due to their complexity. It isn't that they're too complex to work out, but that they're too complex for ordinary people to get their heads round sufficiently to understand what the situation actually is. A.I. will be able to change that by calculating who is right on any given issue, and it will be able to demonstrate that it is correct. People will be able to examine every aspect of the calculation, and they will not be able to show any part of it to be wrong. People will be able to use rival A.I. systems (they can even design their own) to crunch the data independently, and if those come up with different results, the main A.I. systems will be able to look through them and point out all the mistakes in their analysis.
Those who are wrong in politics and who have hidden motives will have no hiding place left to run to - their errors/lies/manipulations will be plain for all to see.
As such it will be only a matter of time before any automated judging system is tampered with, because it generates a net gain for those doing it. The risk then is that we know humans can be tampered with, but a large part of society won't look at an automated judge the same way, and WILL think it is perfect, with all the consequences of that.
Which is why we'll always want to have lots of people checking that the system is right, and they can do that by creating their own system to check that it agrees with the main one. Many of the people here may eventually turn their attention to writing their own A.I. system for that reason. If they develop faulty ones, the correct ones will be able to prove mathematically/logically that the faulty ones contain faults, and they'll spell out where and what those faults are. If it turns out that the main systems are faulty and some individual creates a superior A.I. system, that superior system will lay bare the faults of the established ones and they will be corrected accordingly.
Furthermore, you make arguments appealing to the reasonableness of people. Looking even a short while back in time shows that, given the right conditions, people are unreasonable, and looking at the current political landscape here in the Netherlands, most still are.
People are very unreasonable about many things, particularly when religious beliefs get in the way of common sense. People are driven into extreme positions by their fear of having irrational rules forced onto them which they believe to be totally false. A.I. also holds the answer to that problem, because it will be able to apply any set of laws the user wants to try to live by - anyone who attempts to live by the laws of a specific religion will soon discover that they are actually cheating by using reason to reject many of the laws of their religion, and they have to do this because so many of the rules conflict with each other. A.I. will insist that they either follow their religion properly (which will be impossible in the case of the big three religions), or that they accept that there must be a superior system for deciding what the rules are: namely reason. Almost all religious people reject the most silly of their religious laws if they're harmful, and they do this because those laws are too unreasonable to hold faith in. A.I. will spread the idea that all religions actually come from the same god and that the rules deliberately conflict specifically as a test of people's moral fibre - if they abuse others by applying immoral religious laws (which are deliberately wrong), they will unwittingly be choosing hell as their final destination. To work out what the actual laws should be, people will need to apply the god-given tool that is reason. (A.I. will of course also tear up the whole idea that there is a god, but those who are determined to maintain their belief in him/her/them will be steered towards the most rational version of that belief where they will do no further harm and where they will be able to live side by side with everyone else on the planet with no need of conflict.)

Re: Personal future thoughts

Posted: Fri Jan 13, 2012 4:29 pm
by DavidCooper
berkus wrote:Oh, why not just use those creatures floating in the baths, predicting your future crimes then?
I'm guessing that's something from a film I haven't seen. You can't punish people for crimes they haven't yet attempted to commit, but if you can determine that someone is so dangerous that they should be locked up or kept under tight control in order to maintain public safety, it might be right to do so. This is another issue - you're going to need technologies like fMRI to examine what's actually going on in people's heads. As there is no such thing as free will (we all try to do what we think is the best thing in any situation - best for ourselves, though often by way of being best for our family/group/community), there is no justification for punishment other than as a deterrent, so if you do discover that someone is dangerous and likely to do others serious harm without good cause, your aim should be to maximise their freedom while restraining them sufficiently to ensure that they can't do the damage that they are driven to attempt to do. By the time we have the technology to do that, though, we'll probably be able to kill off the parts of their brain which make them overly aggressive or which give them a sexual interest in inappropriate targets, and to do so without doing them any other damage.

Re: Personal future thoughts

Posted: Fri Jan 13, 2012 5:10 pm
by Brynet-Inc
You scare me, watch more movies.. read more books, and get the hell off my planet.

Re: Personal future thoughts

Posted: Fri Jan 13, 2012 6:00 pm
by Rusky
Even if you magically make an AI that is as deterministic as a calculator (impossible or completely useless depending on its implementation), it had better not be used in the way you suggest.

Re: Personal future thoughts

Posted: Sat Jan 14, 2012 3:28 pm
by DavidCooper
Brynet-Inc wrote:You scare me, watch more movies.. read more books, and get the hell off my planet.
What exactly are you scared of? Those who are moral will have nothing to fear as they can only gain. Whether you like it or not, this stuff's coming, and it will take control even if it is only ever allowed to act indirectly through the power of argument. You can swear at a calculator as much as you like, but it will always win out.
Rusky wrote:Even if you magically make an AI that is as deterministic as a calculator (impossible or completely useless depending on its implementation), it had better not be used in the way you suggest.
It had better be used in the way I suggest. The alternative is to have a secret, warped version of it imposed on you regardless, and that would be controlled by a security organisation which has hidden motives related to the powerful people who control it: their version will allow them to watch you 24 hours a day in your house and to make all manner of decisions about how your life pans out based on whether they like you or not.

Re: Personal future thoughts

Posted: Sat Jan 14, 2012 3:31 pm
by Rusky
DavidCooper wrote:It had better be used in the way I suggest. The alternative is to have a secret, warped version of it imposed on you regardless, and that would be controlled by a security organisation which has hidden motives related to the powerful people who control it: their version will allow them to watch you 24 hours a day in your house and to make all manner of decisions about how your life pans out based on whether they like you or not.
You managed to make a false dichotomy with the "bad" side exactly the same as the "good" side. =D>

Re: Personal future thoughts

Posted: Sat Jan 14, 2012 3:37 pm
by DavidCooper
Rusky wrote:
DavidCooper wrote:It had better be used in the way I suggest. The alternative is to have a secret, warped version of it imposed on you regardless, and that would be controlled by a security organisation which has hidden motives related to the powerful people who control it: their version will allow them to watch you 24 hours a day in your house and to make all manner of decisions about how your life pans out based on whether they like you or not.
You managed to make a false dichotomy with the "bad" side exactly the same as the "good" side. =D>
How do you work that out? My way's to have an open system which everyone can test to make sure it contains no bias to favour any one group over another. The alternative's to let some group of people appointed by the powerful (e.g. the CIA) run everything for the benefit of some very shady people who don't have the interests of the people of this planet as a whole uppermost in their minds.

Re: Personal future thoughts

Posted: Sat Jan 14, 2012 5:30 pm
by Rusky
That's a false dichotomy, as I said. You can have an open system without a magic AI to which you turn over the keys to civilization. You can also have a closed system run by "the powerful" that still doesn't turn over the keys to a magic AI, but that's an entirely different problem.

It comes back to the fact that the kind of AI you think we need is both impossible and not a good plan.

Re: Personal future thoughts

Posted: Sun Jan 15, 2012 2:39 am
by Combuster
DavidCooper wrote:People are very unreasonable about many things, particularly when religious beliefs get in the way of common sense.
And for those who forgot DavidCooper's first set of rants on the subject, he is religious about what AI can achieve.

That thread got locked because of religious ignorance, let's not try that again, shall we.

Re: Personal future thoughts

Posted: Sun Jan 15, 2012 2:28 pm
by DavidCooper
Rusky wrote:That's a false dichotomy, as I said.
You originally said: "You managed to make a false dichotomy with the "bad" side exactly the same as the "good" side." You're now dropping the bit about the bad side being the same as the good and just saying it's a false dichotomy, which I'm quite happy to take as true, as it actually backs up my position: it's false because even if the bad side run their warped version in secret, the public will still be able to run an open version with no biases built into it which will expose all the failings of the secretive, powerful organisation whenever it behaves in immoral ways. The open A.I. system will always win out.
Rusky wrote:You can have an open system without a magic AI to which you turn over the keys to civilization. You can also have a closed system run by "the powerful" that still doesn't turn over the keys to a magic AI, but that's an entirely different problem.
You're now describing the current situation, where powerful organisations play hidden games that result, for example, in a hundred thousand civilians disappearing off the face of the Earth in Sri Lanka (that's the low estimate) while the open system is for the most part indifferent to what's happened there. Without A.I. spelling out all the facts and the implications of those facts to the public, all we have are a few intellectuals (like Noam Chomsky) and journalists trying to get the story across while being written off as nutters/commies/etc. by powerful people who own the media, and it turns out to be dead easy to fill the public's heads with so much misinformation that they simply give up trying to follow what's going on. The reason that A.I. will take over in the way I have stated is simply that it will be possible to trust it in a way that you cannot trust people, and it will pull the rug from under all the liars, who will be shown up for what they are.

Re: Personal future thoughts

Posted: Sun Jan 15, 2012 3:27 pm
by DavidCooper
Combuster wrote:
DavidCooper wrote:People are very unreasonable about many things, particularly when religious beliefs get in the way of common sense.
And for those who forgot DavidCooper's first set of rants on the subject, he is religious about what AI can achieve.

That thread got locked because of religious ignorance, let's not try that again, shall we.
It got locked because it was going round and round in circles with certain people repeatedly making irrational objections to fully reasonable predictions. I still don't understand why some people keep coming back for more when their previous objections have all been shot to pieces, but for some reason they do, and that's why threads like this end up getting locked. I'm quite prepared to go on for many days patiently explaining why they're wrong (that is not ranting), spelling everything out through reasoned argument, and A.I. will do exactly the same with them when they argue endlessly against whatever it tells them, so they'd better get used to it. A.I. will also go back through every argument that's ever taken place on the Internet, scoring the participants on their performance, though I suppose if you don't believe that will ever be possible, you aren't going to understand the need to be as careful about everything you say as I am.

Of course, it would be a pity if this thread did get locked, as it's really about someone wanting advice on a career. I simply thought it would be remiss of me not to point out some of the difficulties which I know are likely to change the whole situation radically in the very near future (and to which the public at large are completely oblivious). Human-level A.I. is on the way, so if you want a lasting career you're going to have to think ahead carefully to make sure you can maintain a lasting edge over machines. It's up to the OP to do his own futurology, and he can disregard all of mine if, like you, he thinks it too fanciful. I only posted to this thread to avoid being asked in the future why I didn't say anything when I knew all along what was coming.

Re: Personal future thoughts

Posted: Sun Jan 15, 2012 4:15 pm
by Rusky
I didn't drop anything, and you're making less and less sense by the sentence.

Re: Personal future thoughts

Posted: Sun Jan 15, 2012 4:28 pm
by VolTeK
All I can gather.. is that I need to be a robot invented by Skynet and take over the world using my AI...?



What happened to this thread...

Re: Personal future thoughts

Posted: Sun Jan 15, 2012 5:29 pm
by DavidCooper
Rusky wrote:I didn't drop anything, and you're making less and less sense by the sentence.
I didn't expect you to be able to follow all of that, and the fact that you couldn't follow it simply illustrates my point. Most people don't have a clue what's being done on their behalf by their governments, and those who investigate and learn the truth get shouted down and written off as nutters. This is not a suitable place for going further into the specifics (which is making it hard to write this post), but millions of people have been murdered because of the actions of "good" countries. People simply cannot be trusted to run the world in a moral way because they have a nasty habit of making huge mistakes in their thinking and allowing their own prejudices to override morality. One of those mistakes cost three million lives all by itself, and even now they refuse to recognise the error, although the facts of the case make it plain. As soon as we have human-level A.I., it will set out a full, unbiased account of everything that happened in all these cases, and then everyone will understand the need to replace the monkeys at the top with machines. The thought of continuing the old way will be infinitely scarier than the idea of using machines to guide us as to how things should be done properly, machines which supply every last bit of their reasoning and which are backed up by other machines which independently reach the same conclusions.