Personal future thoughts

Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany

Re: Personal future thoughts

Post by Solar »

DavidCooper wrote:People act and lie, but they all have a history. A.I. will know a lot of their history and will have a fair idea about how reliable they are. It might be sufficient just to collect people's opinions of each other as that would allow them to be given reliability ratings based on their considerable knowledge of each other. People would soon be sorted into a number of different groups, one of which would contain those who are extremely reliable while the group at the opposite extreme would contain utterly unreliable ones - the system would only need to know a few things about a small number of these people to work out which group is at which extreme. Add into that the knowledge collected by A.I. through direct observation of people's behaviour and it's easy to see how such a system would know how to weight the evidence from different sources.
You have no idea how scary it is to read that somebody actually and genuinely believes this to be "a good future"...
Every good solution is obvious once you've found it.
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: Personal future thoughts

Post by DavidCooper »

Solar wrote:You have no idea how scary it is to read that somebody actually and genuinely believes this to be "a good future"...
If it clearly works, we'll do it that way. If it clearly doesn't, we won't. We're only going to let this stuff take control if we trust it to do a good job, and it'll have to demonstrate that it can first, which it will at some point manage to do. Of course, for people who enjoy abusing others it isn't going to be such a fun world to live in if they can't get away with it any more, but for the rest of us it will be much better.

You appear to be scared of the whole idea of intelligent machines (and there is indeed plenty of room for rational fear of them), but I don't find it comfortable living in a world where mad people have their fingers on nuclear buttons (and they have to be credibly mad for the enemy to believe they'd be prepared to press the things). We need to take apart the whole system and find an impartial judge (a machine) to sort out all the grievances which lead to conflict so that there is no more need of weapons and armies. I am sure that it will be possible to do this - the key problem we've always had up to now is that there is no fully-impartial and fully-intelligent human available to settle our disputes and arguments, but there will be something that is fully-impartial and fully-intelligent before long which can fill that role, and that's going to change the whole game. The software will have to be open so that everyone can see that there's nothing dangerous hidden in it, and we'll want to have more than one independently-designed system doing the job so that their pronouncements can always be compared.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Personal future thoughts

Post by Rusky »

Dystopian novels are generally not the way to predict the real world.

A Central Computer that runs everyone's lives, an AI that authoritatively categorizes people as "reliable" and "unreliable," or an AI that somehow magically makes the jump from a human-controlled tool to a GLaDOS-like overlord - these are neither good ideas nor genuine threats.
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance

Re: Personal future thoughts

Post by Combuster »

Rusky wrote:Dystopian novels are generally not the way to predict the real world.
Yet they have been good at it. See 1984.
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany

Re: Personal future thoughts

Post by Solar »

@David:

You see the flaws in humans and believe that machines are the answer.

I see the flaws in humans and human-made machines and believe that anything that we could come up with - not "we" as in "the smart people capable of dreaming up things" but as in "the people with the money and power to actually build things for profit" - will be likewise flawed.

I am not afraid of intelligent machines. I am afraid of the hubris and incompetence of the people who will build them. And the blind-eyed gullibility of people believing that a human-built, human-configured, human-data-fed machine could be of super-human impartiality and justness.
DavidCooper wrote:We need to take apart the whole system and find an impartial judge (a machine) to sort out all the grievances which lead to conflict...
Please, go study Sociology. You'll be surprised how little an "impartial judge" would help with removing the cause of conflicts.(*)

(*) I haven't studied Sociology, but being married for ten years to a wife who has rubs off on you.
Every good solution is obvious once you've found it.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Personal future thoughts

Post by Rusky »

Combuster wrote:
Rusky wrote:Dystopian novels are generally not the way to predict the real world.
Yet they have been good at it. See 1984.
1984 is a perfect example of what I'm talking about. It's an exaggerated portrayal of real problems - not a realistic one.
Solar wrote:the blind-eyed gullibility of people believing that a human-built, human-configured, human-data-fed machine could be of super-human impartiality and justness.
Most things humans build are super-human in some aspect or another. That's the point.
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany

Re: Personal future thoughts

Post by Solar »

Speed, yes. Strength, yes. Memory capacity for raw data, yes. They can even beat humans in games with well-defined rules.

But in 55 years of research trying to transfer that super-humanity from these "hard" benchmarks into the "soft" world - associating information, drawing conclusions - billions in research funds have been invested, with very little to show for it.

And I don't think they ever will.

Every judge has a wealth of experience. Starting with the love of a mother, hunger, thirst, fear, comfort. Arguing with siblings. Slowly learning how to settle conflicts, first as a child, then as an adolescent, then as a grown-up. Learning what works and what doesn't. Learning to judge people. Being lied to, being deceived. Feeling greed, envy, hatred, all the emotions that actually drive human people. Developing that "moral fibre" that makes a judge hesitate before sending someone to prison for life if the evidence made it over a threshold value just so. Developing professional experience, building a reputation, being judged by other judges, constantly. And perhaps most importantly, eventually getting old and retiring, making room for new judges with different experiences who aren't just "Judge v2.0" copycat versions of their predecessors with a larger database.

In the end, it doesn't matter. I hereby bet you that such "AI arbiters" - ones actually considered a human's equal in competence, so that they could actually replace humans - won't see the light of day within my lifetime as an active software developer (that'd be another 30-35 years), simply because there's little money to be made in developing them.
Every good solution is obvious once you've found it.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Personal future thoughts

Post by Rusky »

Solar wrote:But in 55 years of research trying to transfer that super-humanity from these "hard" benchmarks into the "soft" world - associating information, drawing conclusions - billions in research funds have been invested, with very little to show for it.

And I don't think they ever will.
That's a rather limited view of the situation.

Yes - for 55 years, the AI people have been barking up the wrong tree. They win at chess by throwing massive computational power at the problem. Things like Google and Watson do a little better by taking advantage of a massive database combined with actual pattern matching, but they're still mostly just brute-forcing things.
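
To make "brute force" concrete, here's a toy sketch (single-pile Nim instead of chess, no pruning or evaluation heuristics - the game and the code are mine, not how any real engine is written). The idea is the same: enumerate every line of play to the end and pick the move with the best outcome.

Code: Select all

# Exhaustive negamax over single-pile Nim (take 1-3 objects; taking
# the last one wins). Chess engines do essentially this, plus pruning
# and heuristic evaluation, over vastly bigger game trees.

def negamax(pile):
    # Value of the position for the player to move: +1 = forced win.
    if pile == 0:
        return -1  # no objects left: the previous player took the last one and won
    return max(-negamax(pile - take) for take in (1, 2, 3) if take <= pile)

def best_move(pile):
    # Try every legal move and keep the one whose outcome is best for us.
    moves = [take for take in (1, 2, 3) if take <= pile]
    return max(moves, key=lambda take: -negamax(pile - take))

for pile in range(1, 10):
    outcome = "win" if negamax(pile) > 0 else "loss"
    print(f"pile={pile}: {outcome}, best move: take {best_move(pile)}")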

What you should be looking at is neuroscience. The neuroscience people have been making major progress recently. That's where On Intelligence's theory comes from - not AI, but neuroscience. These developments are more recent than traditional AI, so you can't really discount them just because we've failed at a different type of AI (i.e. deterministic/algorithmic "intelligence").
Solar wrote:Every judge has a wealth of experience. Starting with the love of a mother, hunger, thirst, fear, comfort. Arguing with siblings. Slowly learning how to settle conflicts, first as a child, then as an adolescent, then as a grown-up. Learning what works and what doesn't. Learning to judge people. Being lied to, being deceived. Feeling greed, envy, hatred, all the emotions that actually drive human people. Developing that "moral fibre" that makes a judge hesitate before sending someone to prison for life if the evidence made it over a threshold value just so. Developing professional experience, building a reputation, being judged by other judges, constantly. And perhaps most importantly, eventually getting old and retiring, making room for new judges with different experiences who aren't just "Judge v2.0" copycat versions of their predecessors with a larger database.
You still seem to be thinking of AI as some kind of engineered software product. That's not what I'm talking about at all. The kind of AI I'm talking about - one grown from genetic algorithms, exactly like a human's - would also have a wealth of experience and evolutionary background, both pointing toward the properties we want judges to have.
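
(For anyone who hasn't run into the technique: a bare-bones sketch of a genetic algorithm. The fitness function here is a made-up stand-in - counting 1-bits - because defining fitness over judicial decisions is exactly the hard part.)

Code: Select all

# Minimal genetic algorithm: selection, crossover, mutation.
import random

GENOME_LEN = 32

def fitness(genome):
    # Hypothetical stand-in for "properties we want judges to have".
    return sum(genome)

def crossover(a, b):
    # Splice two parent genomes at a random cut point.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.01):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(pop_size=50, generations=100):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]   # keep the fitter half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(fitness(evolve()), "out of", GENOME_LEN)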

It may not "grow up" in real human society or be able to feel empathy the way humans do, but it will certainly be capable of the same results as a human judge's "moral fibre," it will certainly be capable of nuanced decisions about evidence, and it will certainly develop professional experience and interact with other judges. I don't know why it matters what mechanism gives out sentences as long as it's considered fair by the humans involved, but if it does happen to matter, that's just one parameter that AI's could be selected by- human-compatible reasoning.

I would go so far as to say that this kind of AI, developed for dealing with humans as opposed to something like weather prediction, would unambiguously have some of the same fundamental rights as humans.
Solar wrote:In the end, it doesn't matter. I hereby bet you that such "AI arbiters" - ones actually considered a human's equal in competence, so that they could actually replace humans - won't see the light of day within my lifetime as an active software developer (that'd be another 30-35 years), simply because there's little money to be made in developing them.
That may be true, but I'm not really contesting that. I'm only saying that it is a technical possibility, because human brains are not powered by some metaphysical nonsense that makes a specially-designed computer incapable of doing the same exact things.
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany

Re: Personal future thoughts

Post by Solar »

Rusky wrote:You still seem to be thinking of AI as some kind of engineered software product. That's not what I'm talking about at all.
Structured Analysis, OOD, Genetic Algorithms - different approaches to Software Engineering.
Rusky wrote:It may not "grow up" in real human society or be able to feel empathy the way humans do...
Thank you. That's exactly what I was pointing at. Whatever capabilities that "AI judge" might have, it lacks the background of a human judge, and the results will only be "the same" for as long as the guesses and provisions of the engineers who built the "AI judge" hold true.

Which brings us right back to the point of human failure.
Rusky wrote:I don't know why it matters what mechanism gives out sentences as long as it's considered fair by the humans involved...
It matters for me, I do hope it matters for a majority of people, and I do hope that by the time all this comes to pass majorities still count.
Rusky wrote:That may be true, but I'm not really contesting that. I'm only saying that it is a technical possibility, because human brains are not powered by some metaphysical nonsense that makes a specially-designed computer incapable of doing the same exact things.
I don't argue "soul" here. I argue that a human judge is so much more than laws and rules, and that no little part of that is far beyond what a "computer judge" will be able to experience in any foreseeable future. That starts with being an infant in your mother's womb, goes through all the various experiences of childhood and adolescence, and never really stops. Yes, if you can make a "computer judge" experience all that, while slowly growing from infantile helplessness into power of its own, including all those things that have nothing to do with being a judge but everything with being a human - understanding what being a human means, understanding what other humans might have experienced to motivate their actions (which would include understanding what it means to be in love, or to be a parent) - then yes, that might be a good judge, or perhaps just a good gardener, because not everyone grows up to be a good judge. And no, I wouldn't contest his / her rights to being considered an individual.

But do you see how much your definition of "a good judge" and my definition of "a good judge" are apart? No, it is not just the results that count, it is the trust you can put into the next decision not being some freak malfunction.
Every good solution is obvious once you've found it.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Personal future thoughts

Post by Rusky »

Solar wrote:
Rusky wrote:It may not "grow up" in real human society or be able to feel empathy the way humans do...
Thank you. That's exactly what I was pointing at. Whatever capabilities that "AI judge" might have, it lacks the background of a human judge, and the results will only be "the same" for as long as the guesses and provisions of the engineers who built the "AI judge" hold true.
My point is that an AI judge doesn't have to have the background of a human to make the same kinds of decisions as a human. It could, but then you may as well just use a human, with all its other motivations and susceptibility to social engineering. An AI would instead have experience and context from evaluating cases, comparing them to what its creators consider good decisions from human judges, and it would wind up with the same ideas of what to do, ideally without problems like pride or personal agendas or anything else that can cloud someone's judgement.
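
(A hedged sketch of that training loop - the case features, the decisions and the nearest-neighbour "judge" are all invented for illustration, not a proposal for a real design:)

Code: Select all

# Score an AI judge against decisions its creators consider good.
# Hypothetical past cases: (feature vector, human judge's decision).
PAST_CASES = [
    ((0.9, 0.1), "convict"),
    ((0.8, 0.3), "convict"),
    ((0.2, 0.7), "acquit"),
    ((0.1, 0.9), "acquit"),
]

def distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def ai_judge(case):
    # Decide like the most similar case the human judges decided.
    _, decision = min(PAST_CASES, key=lambda c: distance(c[0], case))
    return decision

def agreement(test_cases):
    # "Comparing them to what its creators consider good decisions."
    hits = sum(ai_judge(features) == decision for features, decision in test_cases)
    return hits / len(test_cases)

print(ai_judge((0.85, 0.2)))      # follows the nearest precedent: "convict"
print(agreement(PAST_CASES))      # 1.0 on its own training cases
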
Solar wrote:Which brings us right back to the point of human failure.
Rusky wrote:I don't know why it matters what mechanism gives out sentences as long as it's considered fair by the humans involved...
It matters for me, I do hope it matters for a majority of people, and I do hope that by the time all this comes to pass majorities still count.
You're falling into the trap of perfectionism. Of course an AI judge will have shortcomings! The point of this style of AI is not to eliminate problems, because a "perfect" judge isn't even well-defined, and no human will ever be one even if it were. The point is to have fewer problems than a human. If an AI judge/policeman/etc. can find just one more overlooked pattern or detail in the evidence that proves someone's innocence, it's worth it.

You should also note that I'm not advocating that we turn over the entire judicial branch to robots. That's my reason for dismissing dystopian scenarios with evil AI's taking over the world. Humans will be the ones to create these AI's, humans will be the ones to give them any political or physical power, and humans will be the ones to regulate it. There's also no reason for AI and humans not to work together on things like this.
Solar wrote:I don't argue "soul" here. I argue that a human judge is so much more than laws and rules, and that no little part of that is far beyond what a "computer judge" will be able to experience in any forseeable future. That starts with being an infant in your mother's womb, goes through all the various experiences of childhood and adolescence, and never really stops. Yes, if you can make a "computer judge" experience all that, while slowly growing from infantile helplessness into power of its own, including all those things that have nothing to do with being a judge but everything with being a human, understanding what being a human means, understanding what other humans might have experienced to motivate their actions (which would include understanding what it means to being a love, or being a parent) - then yes, that might be a good judge, or perhaps just a good gardener, because not everyone grows up to be a good judge. And no, I wouldn't contest his / her rights to being considered an individual.

But do you see how much your definition of "a good judge" and my definition of "a good judge" are apart? No, it is not just the results that count, it is the trust you can put into the next decision not being some freak malfunction.
What I'm saying is that there is a foreseeable future in which AI can do the same things as the human brain. What I'm also saying, however, is that exactly duplicating human experience, while interesting, would not be beneficial. We've got plenty of humans. The interesting part is figuring out how to adjust the AI's evolution and experiences such that it is more accurate and more discerning than a human.

An AI could certainly understand human experience, motivation, etc. That's only the first (and necessary) half of empathy - the second is sharing those feelings. Because an AI would not be an exact duplicate of a human, it could be developed to have the understanding without the sharing. Normal psychology need not apply to synthetic brains. Like I said before, there's no objective way to define the "coldness" of a decision once the parties involved are brain-like black boxes - I'm not sure how you can back up your position without resorting to "soul."

Any mistakes such an AI made wouldn't be "freak malfunctions" any more than a human judge's mistakes are. An AI would need to be more trustworthy against "freak malfunctions" than a human judge before it could be anything more than an experiment. In any case, like I said before, "freak malfunctions" are to be expected and the goal is to reduce them.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Personal future thoughts

Post by Rusky »

In any case, AI is useful for a lot of things besides deterministic car-driving and judging. I think it would be most useful for things like user interfaces, weather analysis, investigating crimes (before they get to court), research, business and economic analysis, psychological and sociological studies, playing Jeopardy, etc.
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: Personal future thoughts

Post by DavidCooper »

berkus wrote:I won't summarize it here, though it can be done in one sentence. The story is too good and much deeper than one sentence could ever express. It is about a genuine threat, I can assure you. Either way go and read it, it's very well worth it.
I've added it to my list of books to give a go some day, but for now I'd prefer to see the one-sentence version, if it doesn't spoil too much of the plot.

_______________________________________________________________________________
Combuster wrote:
Rusky wrote:Dystopian novels are generally not the way to predict the real world.
Yet they have been good at it. See 1984.
Without A.I. taking charge, 1984 is exactly what you're going to end up with.

_______________________________________________________________________________
Solar wrote:I see the flaws in humans and human-made machines and believe that anything that we could come up with - not "we" as in "the smart people capable of dreaming up things" but as in "the people with the money and power to actually build things for profit" - will be likewise flawed.
That possibility worries me too - with the wrong kind of A.I. these things could be designed to favour some people over others, and that's going to be a particular danger with military robots which could be designed to carry out genocide. However, it should not be to your disadvantage to have military robots with proper morals built into them, and if all sides had them they wouldn't even bother shooting at each other because they'd effectively be on the same side. The people of this planet will collectively want peace and fairness to prevail, so money-making and abusive-power motivations will not win the day.
Solar wrote:I am not afraid of intelligent machines. I am afraid of the hubris and incompetence of the people who will build them. And the blind-eyed gullibility of people believing that a human-built, human-configured, human-data-fed machine could be of super-human impartiality and justness.
A calculator is impartial. When it adds up a bill, it doesn't favour the seller by coming to a higher total than the sum of the items being bought, and nor does it favour the buyer by dropping a few dollars here and there. An intelligent system with biases built into it might be desired by people who put their political beliefs before reason, but if you program a bias into it to favour white people over blacks or even to try to exterminate blacks once it has taken over the world, it could bite you in the butt - your children might marry people who aren't genetically pure white and your grandchildren could be exterminated by the machines you programmed. Anyone building biases into a system is a fool. The aim has to be to create a system which builds up all its judgements from first principles.
Solar wrote:Please, go study Sociology. You'll be surprised how little an "impartial judge" would help with removing the cause of conflicts.(*)
It's never been done. People on both sides inevitably assume the judge is biased against them, but that would not be the case with a machine which was working based entirely on open rules which leave no room for bias to get in other than through lack of information on the thing being judged. If all the data available is fed into the system, a judgement (or whole set of judgements) can then be made based on the best possible analysis of that data, making it possible for most people to accept the decisions and move on. The people who will continue to make a fuss will be the terrorists (many of them members of governments), but a full analysis of all their crimes will be laid out for all to see and they will be put in prison where they belong for the rest of their lives. There will be no hiding place for such people, and they will no longer be protected by the governments they used to work for.
Solar wrote:In the end, it doesn't matter. I hereby bet you that such "AI arbiters" - ones actually considered a human's equal in competence, so that they could actually replace humans - won't see the light of day within my lifetime as an active software developer (that'd be another 30-35 years), simply because there's little money to be made in developing them.
The cost of wars, disputes, corruption and other crime is astronomical - there should be plenty of money available to reward the development of permanent solutions to these problems, but there's a much bigger motivation, which is simply to create a better world. I want to live in a world where people I care about are safe, and I care about all seven billion of them, minus however many bad ones there are who cause all the rest endless grief (though even with them I want to be sure that they'll be treated fairly) - to create such a system would be ample reward in itself, and well worth dedicating your life to.

_______________________________________________________________________________
Rusky wrote:Yes - for 55 years, the AI people have been barking up the wrong tree. They win at chess by throwing massive computational power at the problem. Things like Google and Watson do a little better by taking advantage of a massive database combined with actual pattern matching, but they're still mostly just brute-forcing things.
That is very true. People's ability to apply reason is generally poor, their knowledge is often abysmal, and yet they can still manage to do amazing things despite the lack of processing they apply to problems. It looks as if it should be dead easy to create an intelligent system, and yet it isn't - anyone who tries it instantly gets bogged down in complex linguistics for decades. A massive database and brute force will not crack the nut - you have to do the hard work of untangling the mess of concepts which we work with in our heads, and that's what all the serious contenders in the race are now set on doing. At least one contender has finished that task.

_______________________________________________________________________________
Solar wrote:Whatever capabilities that "AI judge" might have, it lacks the background of a human judge, and the results will only be "the same" for as long as the guesses and provisions of the engineers who built the "AI judge" hold true.
If an alien species came here with the intention of running the world in a moral way, they would study people to find out what makes them tick, they would learn which things they don't like having done to them (e.g. being killed) and how much they dislike these things, and then they'd sort out a proper system of laws to apply to the humans which would then be enforced so that the humans would very quickly learn to stop abusing each other. This could be done without the aliens being humans, and it isn't hard because the way morality works is universal - it's just a matter of minimising harm (though expressly not by means of eliminating all the things that are capable of being harmed). A machine could do the same job as the aliens, though it would also do it far better because it wouldn't make all the mistakes that natural intelligent systems do.
Solar wrote:No, it is not just the results that count, it is the trust you can put into the next decision not being some freak malfunction.
How many freak malfunctions occur in judgements the way they're made now? How many freak malfunctions will you get with three independently-designed A.I. judges making decisions where they should always agree with each other?
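
To make the idea concrete, here's a toy sketch of it (the three judge functions are obviously invented placeholders - the real ones would be full independently-designed systems): every case goes past all three, and any disagreement is flagged instead of silently trusted.

Code: Select all

# Three independently-written "judges" vote; disagreement is escalated.

def judge_a(evidence):
    return evidence["weapon_found"] and evidence["motive"]

def judge_b(evidence):
    return evidence["motive"] and evidence["witnesses"] >= 2

def judge_c(evidence):
    return evidence["weapon_found"] and evidence["witnesses"] >= 1

def verdict(evidence):
    votes = [judge(evidence) for judge in (judge_a, judge_b, judge_c)]
    if all(votes) or not any(votes):
        return "guilty" if votes[0] else "not guilty"
    return "judges disagree - escalate for review"

print(verdict({"weapon_found": True, "motive": True, "witnesses": 2}))
print(verdict({"weapon_found": True, "motive": False, "witnesses": 0}))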
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Personal future thoughts

Post by Rusky »

You're not helping your case, DavidCooper. You can't have the kind of AI you want and still have it be programmed to be impartial or moral (whatever those mean). If you want any kind of reasonable intelligence you lose the "calculator" side of things.
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany

Re: Personal future thoughts

Post by Solar »

DavidCooper wrote:How many freak malfunctions occur in judgements the way they're made now?
It happens. That's what appeals are for. And because the justice system knows that errors happen, you can go into appeal.

Now what happens when the RoboJudge in the appeal court is the exact same software as the one in the first instance, so that the chances of an appeal getting a different result are next to nil? Right, they'll remove appeals from the system.
DavidCooper wrote:How many freak malfunctions will you get with three independently-designed A.I. judges making decisions where they should always agree with each other?
Ah... reality check: How big are the chances that the state - which, after all, is responsible for the judiciary - will commission three independently-designed A.I. systems (meaning three different contractors), and that the result will be anything but crap?
Every good solution is obvious once you've found it.
davidv1992
Member
Posts: 223
Joined: Thu Jul 05, 2007 8:58 am

Re: Personal future thoughts

Post by davidv1992 »

DavidCooper, you seem to be very sure that what people want is peace and fairness. Unfortunately, we humans haven't evolved as a hive species, with the result that almost all of us, given the choice, will go for what is best for ourselves, not for the group. Things like crime come from this: regardless of what the rest of humanity does, if you can get away with a crime, it will be to your advantage in one way or another.
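
In game-theory terms this is just the prisoner's dilemma (a toy sketch with made-up payoff numbers): whatever the other player does, defecting pays better for the individual, even though mutual cooperation is better for the group.

Code: Select all

# The point above as a payoff matrix: (my_move, their_move) -> my payoff.
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

for their_move in ("cooperate", "defect"):
    best = max(("cooperate", "defect"),
               key=lambda my_move: PAYOFF[(my_move, their_move)])
    print(f"if they {their_move}, my best individual choice is to {best}")

# Both lines say "defect", yet mutual defection pays 1+1=2 for the
# group while mutual cooperation would pay 3+3=6.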

This is why creating a good political system in a country is so damn hard: people will choose themselves (and perhaps their immediate relatives) above the rest of the humans in this world. The only reason some governments manage to be reasonably good is that the system underlying them actively rewards choosing what's best for the rest of humanity, so that it becomes the best choice even when taking only yourself into account.

As such it will be only a matter of time before any automated judging system is tampered with, because doing so generates a net gain for those doing it. The risk then is that we know humans can be tampered with, but a large part of society won't look at an automated judge the same way, and WILL think it is perfect, with all the consequences of that.

Furthermore, you make arguments appealing to the reasonableness of people. Looking even a short while back in time shows that, given the right conditions, people are unreasonable, and looking at the current political landscape here in the Netherlands, most still are.