
Re: where are the 1's and 0's?

Posted: Mon Oct 03, 2011 8:32 pm
by bonch
Venn wrote:The ability to learn is far different from consciousness and sentience. A computer can be programmed to learn and adapt, we know that, but that doesn't grant it sentience and consciousness.
Agreed. But if the learning algorithm is complex enough (and its hardware good enough), surely it is possible that it can learn self-consciousness? It sounds silly when you first think about it, but this is how humans derived consciousness: way back in evolution, the antecedents of humans didn't have consciousness, or eyes, or more than one cell. Evolution produced humans and all their higher faculties via a process of genetic learning. Consciousness and sentience were useful for adaptation, so they occurred.
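
To make "genetic learning" concrete, here's a toy genetic algorithm. It's purely a sketch: the alphabet, fitness function and parameters are invented for illustration, and matching a target string is just standing in for evolving a useful trait.

Code:
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "self aware"  # an arbitrary stand-in for "a useful trait"

def fitness(genome):
    # Score a genome by how many characters match the target trait.
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Copy the genome with occasional random errors.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in genome)

# Start from pure noise: no design, just variation plus selection.
population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
              for _ in range(100)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:20]                      # selection
    population = [mutate(random.choice(survivors))   # reproduction with variation
                  for _ in range(100)]

print("generation", generation, "best:", max(population, key=fitness))

Nobody tells it the answer; useful variations survive because they score better, which is all "consciousness was useful for adaptation, so it occurred" needs to mean.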

Re: where are the 1's and 0's?

Posted: Mon Oct 03, 2011 8:47 pm
by bonch
You'd have to feel at least a little bit weird about walking up and smashing this girl in the face.

(embed tags don't seem to be working)

(and it would be awesome to have a girlfriend who had a volume knob)

http://www.youtube.com/watch?v=SHSxKf7o ... re=related

http://youtu.be/25qKkkecfHQ

Re: where are the 1's and 0's?

Posted: Mon Oct 03, 2011 8:55 pm
by Venn
For a computer to be self-conscious, or rather for a piece of software to be self-conscious, it would probably need to differentiate between itself as software and its body, the computer. It would also have to have an image of itself. Not a literal image like a PNG (although practically speaking that might be just what it is in a sentient computer), but an image of what it thinks itself to be. Another property might be that if one were to put a hand on the power button and the software realized this, it might react in some way out of self-preservation (if we give it appendages). That would indicate that it is aware that it is alive and wishes to remain so, and it is self-conscious because it realizes that its power switch is its weak point. This differentiates it from a machine which might defend itself in a more general manner by using its appendage whenever anyone or anything gets near it. Sort of like how we realize the difference between a hug and a gun to our head.

Another example may be similar, but this time we try to change its "body". It might let us swap out a processor unit because it knows the unit is bad, much like how we allow a doctor to give us a titanium femur if our old one has been shattered. By allowing that, it has shown that it is aware of itself and its condition. However, would we let a doctor remove our leg for shits and giggles? Of course not! Just like it wouldn't let us arbitrarily begin disassembling it. It knows that it is unable to exist without a body and would therefore find some way to prevent us from doing so.
bonch wrote:Agreed. But if the learning algorithm is complex enough (and its hardware good enough), surely it is possible that it can learn self-consciousness? It sounds silly when you first think about it, but this is how humans derived consciousness: way back in evolution, the antecedents of humans didn't have consciousness, or eyes, or more than one cell. Evolution produced humans and all their higher faculties via a process of genetic learning. Consciousness and sentience were useful for adaptation, so they occurred.
That goes back to one of my previous posts. Though I believe that it might be possible, much like how our own consciousness was probably a one-in-a-billion event, it is equally unlikely for a computer, if not more so. I lean more towards DavidCooper's idea that some semblance of sentience must exist beforehand, which can lead to a bit of a paradox, considering we ourselves do not understand the true nature of these things. Remember, we're still talking about a computer learning to be self-aware. So we're trying to determine the point where consciousness begins, which amounts to quantifying it, and that is something we're unable to do. How do we quantify consciousness? Is there a certain threshold of intelligence or brain density which determines it? This is an extremely hard question to answer, because how do we determine consciousness, self-awareness and introspection? Most of all introspection, which I think might be impossible to determine in anything except perhaps humans. It has to learn to be introspective and ponder the nature of its own self.

Re: where are the 1's and 0's?

Posted: Mon Oct 03, 2011 9:31 pm
by NickJohnson
From The Society of Mind:
Marvin Minsky wrote: Are life and mind so much more than the "sum of their parts" that it is useless to search for them? To answer that, consider this parody of a conversation between a Holist and an ordinary Citizen.

Holist: "I'll prove no box can hold a mouse. A box is made by nailing six boards together. But it's obvious that no box can hold a mouse unless it has some 'mouse-tightness' or 'containment.' Now, no single board contains any containment, since the mouse can just walk away from it. And if there is no containment in one board, there can't be any in six boards. So the box can have no mousetightness at all. Theoretically, then, the mouse can escape!"

Citizen: "Amazing. Then what does keep a mouse in a box?"

Holist: "Oh, simple. Even though it has no real mousetightness, a good box can 'simulate' it so well that the mouse is fooled and can't figure out how to escape."

Re: where are the 1's and 0's?

Posted: Mon Oct 03, 2011 9:48 pm
by Venn
"Nothing from nothing, even six times nothing is still nothing unless we believe (or observe) that it is something more" is what I got from that. Applying that to this discussion, if it looks, smells and sounds like a pig, it is a pig. So if it looks, seems and behaves like consciousness, irregardless of implementation, it is consciousness. An interesting way to look at it, and perhaps quite valid.

Re: where are the 1's and 0's?

Posted: Mon Oct 03, 2011 10:03 pm
by bonch
This passage is taken from a story by science-fiction writer Terry Bisson and quoted in “How The Mind Works” by Steven Pinker.

“They're made out of meat.”

“Meat?”

“There's no doubt about it. We picked several from different parts of the planet, took them aboard our recon vessels, probed them all the way through. They're completely meat.”

“That's impossible. What about the radio signals? The messages to the stars?”

“They use the radio waves to talk. But the signals don't come from them. The signals come from machines.”

“So who made the machines? That's who we want to contact.”

“They made the machines. That's what I'm trying to tell you. Meat made the machines.”

“That's ridiculous. How can meat make a machine? You're asking me to believe in sentient meat.”

“I'm not asking you, I'm telling you. These creatures are the only sentient race in the sector and they're made out of meat.”

“Maybe they're like the Orfolei. You know, a carbon-based intelligence that goes through a meat stage.”

“Nope. They're born meat and they die meat. We studied them for several of their life spans, which didn't take too long. Do you have any idea of the life span of meat?”

“Spare me. Okay, maybe they're only part meat. You know, like the Weddilei. A meat head with an electron plasma brain inside.”

“Nope, we thought of that, since they do have meat heads like the Weddilei. But I told you, we probed them. They're meat all the way through.”

“No brain?”

“Oh, there is a brain all right. It's just that the brain is made out of meat.”

“So ... what does the thinking?”

“You're not understanding, are you? The brain does the thinking. The meat.”

“Thinking meat! You're asking me to believe in thinking meat!”

“Yes, thinking meat! Conscious meat! Loving meat. Dreaming meat. The meat is the whole deal! Are you getting the picture?”

:lol:

Re: where are the 1's and 0's?

Posted: Mon Oct 03, 2011 10:07 pm
by Venn
"Do Androids Dream of Electric Sheep?" by Phillip K. **** is a particularly relevant piece if literature.

Re: where are the 1's and 0's?

Posted: Mon Oct 03, 2011 10:57 pm
by bonch
Just read the wiki for it, sounds really good. I'll have to see if they have it next time I'm down at the university library.

lol @ the board blocking ****'s name.

Essentially though, I agree with what you said a few posts ago. We cannot duplicate consciousness because we don't understand consciousness. Maybe one day we will, or maybe it will remain a mystery. Abiogenesis is a related mystery: for all our technology, scientists still can't produce the most basic form of life in a lab. When (if) they can do that .. perhaps we can make some inroads into meaningful AI.

I just find it really tempting to use the computer metaphor to explain human consciousness. So many of the pieces fit perfectly .. except for the most crucial one. Everything else in the universe seems to be explained .. I mean, Stephen Hawking and his cohorts say they are not too far off explaining the origins of the big bang with a "theory of everything". I would find it funny if physicists proclaimed a theory of everything before we understood life and its corollaries (consciousness etc.).

In a sense I suppose it's good. Knowing that I'm not meaningfully different to a computer would probably not do me any good in the long run.

Re: where are the 1's and 0's?

Posted: Tue Oct 04, 2011 8:49 am
by SDS
bonch wrote:I don't see any other alternative to what you say is the "reductionist viewpoint". What are we if not physical systems? Why water "feels wet" I don't know, but I'll happily and confidently say it occurs due to physical reactions in my brain and body while I wait for the scientists to tell me exactly what's going on at the lower level. What is the other possibility?...
Crikey, I'm not sure if I came across clearly. I certainly didn't mean to imply that a reductionist viewpoint is not useful. I was merely attempting to say that if the processes which build up an emergent phenomenon can be reduced and explained, that does not mean that the emergent phenomenon can be explained and built up in a similar manner from the reduced viewpoint.

i.e. Simply understanding that we have an adaptive, biological neural network does not explain why we have consciousness. Nor, for that matter, does it explain an autonomic nervous system which is largely built of the same components.

In the same way, understanding surface tension and intermolecular bonding does not a priori predict wetness - even if wetness can be understood in those terms. If emergent phenomena were not important, then once we had applied maths and certain fields of experimental and theoretical physics, we could predict everything in only those terms. We can't.
berkus wrote:And so far the best attempt at consciousness is within some games, where computer bots perform the action-and-reaction game so well that they feel real, within their game limits.
No, they behave within a specific domain with the correct input-response stimulus. You will never, however, see a bot get bored, stop paying attention and allow itself to be easily shot. Or one with the internal self-awareness that it is an entity within a certain reality obeying certain rules, etc. i.e. they have a sophisticated input-feedback-response system, not consciousness.
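
To be concrete about how little is going on in such a bot, here is a sketch of its entire decision process (the names and numbers are invented for illustration):

Code:
import math

def bot_step(bot_pos, enemy_pos, health):
    # One tick of a stimulus-response game bot: every input maps to a
    # fixed reaction. There is nowhere for boredom or self-awareness to live.
    distance = math.dist(bot_pos, enemy_pos)
    if health < 25:
        return "retreat"   # low-health stimulus -> flee response
    if distance < 10:
        return "shoot"     # enemy-close stimulus -> attack response
    return "patrol"        # default behaviour when nothing is sensed

print(bot_step((0, 0), (3, 4), health=80))  # -> "shoot"

However sophisticated the real versions get, they remain mappings from stimulus to response.
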
Venn wrote:Another property might be that if one were to put your hand on the power button and the software realizes this, acting out of self-preservation it might react in some way (if we give it appendages). That indicates that it is aware that it is alive and wishes to remain so, and it is self-conscious because it realizes that it's power switch is it's weak point.
That could be a result of consciousness, or not. A self-preservation instinct is not necessary - even if it is something we have. Even in humans, take the kamikaze as an example of the preservation instinct failing to act. Indeed, you could argue that consciousness provides the means to overcome a preservation instinct in a way that a more autonomic system would not permit.

Consciousness seems to me to be much more about self-awareness and introspection (even if it is not all that well defined).

Re: where are the 1's and 0's?

Posted: Tue Oct 04, 2011 12:44 pm
by DavidCooper
SDS wrote:
DavidCooper wrote:
bonch wrote:Do you think computers could ever be conscious the way humans are?
Are humans really conscious? Take a pain response to something sharp as an example: you feel something sharp and it hurts, so you are triggered into trying to eliminate the cause of the pain. A machine could be programmed to pretend to do the same...
I think with this quote you indicate both the biases in your thinking, and a slight side-step of the question/problem. The response you describe is closest, in a human context, to the autonomic nervous system. This uses a variety of communication and feedback systems within the body to cause a reaction based on external stimuli, or changing internal conditions.

Consciousness is a very different kettle of fish.
On the contrary, I was getting directly to the core of consciousness. You can torture a human because people feel pain and other kinds of discomfort. If you try to torture a computer, you're an idiot. Pain and pleasure are used to control an animal's behaviour, encouraging it to eat suitable food and discouraging it from allowing itself to be eaten by other organisms. We have this, and we evolved from simpler creatures which almost certainly had it too - the mechanism works fine in a worm, so if the worm doesn't feel pain, why should we have taken the trouble to evolve the ability to feel pain when it's completely superfluous? I chose to focus on this aspect of consciousness because it is simple and primal, not requiring great intelligence, the ability to recognise oneself in a mirror, or the capability to store data about "me".
SDS wrote:I like the example of the colour red. I can make a machine which responds to the colour red. A couple of LDRs, some filters and transistors, and you can drive any output you like based on observing red colours. Consciousness is more complex:
  • We observe the colour red.
  • We can ascribe a property of 'redness' to an object, or situation, which is not merely a function of the light received by the eyes. This observation is internal, and not directly linked to, or necessary for, any simple response. It is also contextual (think of a dark-skinned person blushing under strange lighting: there is very little obvious 'red' involved, despite our perception).
  • We are aware of the fact that we are doing this.
  • We can manipulate our own (and indirectly other people's) thoughts on the matter of what redness is, whether we have observed it, and its contextual importance.
  • We may, or may not, act on the property of redness. Our response may not be consistent or predictable, even to ourselves.
You're simply introducing complexities which result from mixing inner simplicities.
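
That "machine which responds to red" really is that trivial - a minimal software sketch, assuming an RGB pixel value in place of the LDRs and filters:

Code:
def responds_to_red(pixel):
    # Fires when a pixel is predominantly red. This is the whole "machine":
    # a threshold test driven by raw light values, with nothing in it
    # perceiving redness.
    r, g, b = pixel
    return r > 150 and r > 2 * g and r > 2 * b

print(responds_to_red((200, 40, 30)))  # True: strong red light
print(responds_to_red((90, 80, 85)))   # False: a blush under dim light is missed

The contextual judgements in the list above are complexities layered on top of many such inner simplicities.
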
SDS wrote:Consciousness is deeply introspective. It could be considered to be an emergent phenomenon, but it is generally agreed (see the research literature) to be more than a case of stimulus-observe-respond. I would argue that multiple levels of introspection and indirection would be the first place to try to mimic consciousness, rather than an actionable feedback loop.
So, the first place to start is with the more complex case? No. Pain is a simple case and it isn't just an actionable feedback loop - a machine does it without pain being involved, whereas we do it with pain being felt by an "I" (or at least we believe we do).
SDS wrote:
DavidCooper wrote:How can anything be more than the sum of its parts (and the geometrical arrangement of those parts)? More capabilities can emerge out of parts being put together in different ways, but there is no extra physical thing that emerges, so you might try to argue that an act of feeling pain (or any other sensation) could emerge out of something complex, but that cannot be so unless there is actually something there capable of feeling that pain/sensation.
This is the extreme case of a reductionist viewpoint. To use another analogy: we understand a great deal about the bonding and dynamics of water. We can simulate (to various degrees of accuracy) increasing ensembles of water molecules. This still does not explain why water feels wet while other liquids (consider petrol, DMSO, etc.) do not.

We can understand a macroscopic phenomenon as being constructed by the (normally fairly subtle) interplay of many simpler, lower level, phenomena. This does not mean that the macroscopic phenomenon would be expected merely by an understanding of the lower level. Hence, we have an emergent phenomenon.
It is hard to predict how simple things will behave when they interact with each other in large numbers, but when you do the experiments you discover that the same things happen every time you put lots of water molecules together, and every part of that behaviour is dictated by the properties of the individual components and the fabric of space in which they sit (and of which we do not have anything approaching an adequate model). Emergent properties are ones that show up when things interact and that are damned hard or downright impossible to predict in advance when you start with incomplete knowledge of the simple components and the environment in which they are to operate; but every aspect of the emergent properties is dictated from the base level.

When people talk of consciousness being something that emerges out of complexity, they are resorting to a belief in magic. If pain is felt, something has to exist that feels it, and that cannot be just a geometrical arrangement (which is the only new thing involved when the components are put together in a particular way). If you stick a whole lot of components together and something suddenly starts to feel pain, it has to be one of the components (though the components include the fabric of space, so it isn't necessarily matter that you should point to).

Why does water feel wet? Maybe because it transmits heat fast, so you rapidly feel the cooling from the evaporation.

Re: where are the 1's and 0's?

Posted: Tue Oct 04, 2011 1:02 pm
by DavidCooper
Venn wrote:Wild guess indeed, I mean, I think it would be impossible to really tell. But ja, I don't believe that it could happen. Perhaps a synthetic consciousness, but not a true, bona fide consciousness. A very complex emulation of consciousness, but never a true consciousness.
If consciousness is real, there's no reason to think we shouldn't be able to create a machine with consciousness - we can copy nature and make a biological machine that thinks in exactly the same way we do. The computers we currently have are not designed to be conscious, and when we think about the way they work we can be fairly confident that they aren't, or at least not conscious of the ideas they are crunching - they may be conscious of something by accident, so something in there might happen to be in pain whenever the FPU is in use, though maybe that's a bad example as it would mirror what happens in many humans!
Venn wrote:...shouldn't it be possible to create it using something like a programming language? This is largely what I meant by more than the sum of the parts. Each of us here started out as a single cell within our mother's uterus, and yet somehow we sit here exploring and debating metaphysics. Is it because we have denser brain matter? Perhaps, but by that logic we should be able to create a valid consciousness by creating an equivalent program. And yet here we state that it isn't going to happen, that we could only emulate.
The trouble with trying to do it through a program is that you can run a program on a piece of paper with a pencil. You may not have a clue what the program is doing, so the consciousness of the thing isn't happening within you, and the paper, pencil, graphite lines on the page, etc. aren't going to be conscious of what's going on with the program either, but at the end of the process the results will appear and a complex computation will have been carried out.

Re: where are the 1's and 0's?

Posted: Tue Oct 04, 2011 1:27 pm
by DavidCooper
Venn wrote:How do we quantify consciousness? Is there a certain threshold of intelligence or brain density which determines it? This is an extremely hard question to answer, because how do we determine consciousness, self-awareness and introspection? Most of all introspection, which I think might be impossible to determine in anything except perhaps humans. It has to learn to be introspective and ponder the nature of its own self.
There are plenty of creatures which can probably feel pain, but which aren't self-aware and have no ability to think about what they are. Their ability to feel pain is sufficient for them to qualify as conscious. It should also be fully possible to program a machine to investigate its own nature without it being conscious at all, so we can rule that out as a key aspect of consciousness. As for self-awareness, an intelligent program running on a machine which investigates the machine and the program code could identify itself as the program code, but that wouldn't make it conscious - it would simply mean that the program code, in the course of running through the processor, has generated data correctly stating where it is stored in memory. Alternatively, that data might be in there from the outset to save it the trouble.
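
To illustrate how mechanical that kind of self-identification can be, here's a toy sketch in Python (assuming it is run from a file rather than an interactive prompt):

Code:
import inspect
import sys

def identify_self():
    # A program can locate and report its own code without any awareness
    # being involved: this is just data generated about other data.
    me = sys.modules[__name__]
    print("I am the program stored in:", me.__file__)
    print("My source is", len(inspect.getsource(me)), "characters long.")

identify_self()

It "identifies itself" in exactly the empty sense described above.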

The reality is that being able to explore oneself consciously is nothing more than a simple extension of being able to explore something external consciously. So having the ability to identify and explore oneself isn't going to help you understand consciousness, beyond giving you the desire to find out what you are and how you work, part of which means trying to understand consciousness. Self-awareness is the drive to understand it, not a significant part of the thing you need to investigate in order to understand it.

Re: where are the 1's and 0's?

Posted: Tue Oct 04, 2011 1:33 pm
by DavidCooper
Venn wrote:"Nothing from nothing, even six times nothing is still nothing unless we believe (or observe) that it is something more" is what I got from that. Applying that to this discussion, if it looks, smells and sounds like a pig, it is a pig. So if it looks, seems and behaves like consciousness, irregardless of implementation, it is consciousness. An interesting way to look at it, and perhaps quite valid.
Only valid if consciousness is all an illusion. If we really do feel pain, a computer which merely pretends to feel pain is not the same, and it doesn't matter how good its performance is.

Re: where are the 1's and 0's?

Posted: Tue Oct 04, 2011 2:06 pm
by Venn
A thought experiment:

A computer of sufficient computational power and memory capacity (the exact numbers are not relevant to the experiment) is running a program which mimics exactly, down to the molecular level, the brain of a human being (to allow for the notion that consciousness may be nothing more than chemical and electrical interactions). Will sentience and sapience emerge? If they don't, how does that affect the concept of sentience? If they do, how does that affect the nature of our existence as thinking creatures?

There is no wrong answer. But generally speaking, if what we're talking about is just a biological function, it can indeed be mimicked, and there is no question that a computer can become aware like humans. So here comes part 2 of the experiment. If consciousness is indeed nothing more than a biological function, why waste the power of creating it on something like a computer when it could be bestowed on animals? Is not a sapient dog a more viable and desirable product of this technology than a computer? The point being, a sapient computer has no more desirable functions than a computer which is not.
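
For concreteness, the kind of simulation the experiment imagines might be built from units like this leaky integrate-and-fire neuron - a deliberately crude sketch with made-up parameters, nowhere near the molecular fidelity specified above:

Code:
import random

def simulate_neuron(input_currents, dt=0.001, tau=0.02, threshold=1.0):
    # A standard idealisation of a neuron: integrate input, leak charge,
    # fire and reset when the membrane potential crosses a threshold.
    v = 0.0
    spike_times = []
    for step, current in enumerate(input_currents):
        v += dt * (-v / tau + current)   # leak plus driving current
        if v >= threshold:
            spike_times.append(step * dt)
            v = 0.0                      # reset after firing
    return spike_times

# Drive one model neuron with one second of noisy input.
inputs = [random.uniform(0.0, 120.0) for _ in range(1000)]
print(len(simulate_neuron(inputs)), "spikes")

Scale that up faithfully enough (the experiment says down to molecules, which no such idealisation reaches) and the question stands: does anything in there feel?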

Re: where are the 1's and 0's?

Posted: Tue Oct 04, 2011 2:23 pm
by NickJohnson
@Venn: You're assuming that natural processes are perfectly deterministic (i.e. can be simulated on a computer). I'm pretty sure quantum mechanics doesn't agree with that too well. This is not to say that the action of whole neurons cannot be predicted to a very high degree of precision (in theory; I don't think we know enough to do that yet), but if I had an infinite improbability drive I could turn your brain into a toaster, since there is a nonzero probability that such a thing could happen due to tunneling. Could consciousness lie in the small amount of neural activity that is not deterministic? I don't think it's likely, but it can't be ruled out.