where are the 1's and 0's?

All off topic discussions go here. Everything from the funny thing your cat did to your favorite tv shows. Non-programming computer questions are ok too.
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: where are the 1's and 0's?

Post by DavidCooper »

bonch wrote:Do you think computers could ever be conscious the way humans are?
Are humans really conscious? Take a pain response to something sharp as an example: you feel something sharp and it hurts, so you are triggered into trying to eliminate the cause of the pain. A machine could be programmed to pretend to do the same - a sensor detects damage being done and sends the value 255 (representing "mega-ouch") to the CPU by some means or other. The CPU runs a routine to handle this data with the result that another routine is run to deal with the problem, maybe just sending the word "OUCH!" to the screen, but nothing felt any actual pain at any point of the process. In the human version of this, something either feels pain or generates data that claims it felt pain, but most of us feel that the pain is real and not just an illusion. This is quite important, because if the pain is just an illusion, there can be no real harm done by torturing someone and that would mean there was no genuine role for morality: you can't torture a computer, and if all the unpleasant sensations of being tortured are just an illusion, you can't really torture a person either.
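
In code, the whole pretence might be nothing more than this (a minimal C sketch - read_damage_sensor and the dispatch are hypothetical stand-ins for whatever the hardware would actually provide):

    #include <stdio.h>

    /* Hypothetical stand-in for the damage sensor; real hardware would
       deliver this byte over some bus or interrupt. */
    static unsigned char read_damage_sensor(void)
    {
        return 255;                         /* "mega-ouch" */
    }

    int main(void)
    {
        /* The entire "pain response": compare a byte, take a branch.
           Nothing on this path feels anything. */
        if (read_damage_sensor() == 255)
            printf("OUCH!\n");
        return 0;
    }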

Let's assume for now that pain is real, because if consciousness is all an illusion the whole question becomes uninteresting (other than why data about a fake phenomenon should need to be generated by the brain). How can we know pain is real? Well, we just feel it. But there's a serious problem with this. Imagine a computer with a magic box in it where pain can be felt by something. The 255 input from the pain sensor is sent into the magic box where it is felt as pain, and then the magic box sends out a signal in the form of another value to say that it felt pain. The program running in the computer takes this output from the magic box and uses it to determine that the magic box felt pain, therefore something must have hurt, and then it sends the numbers 79 12 85 12 67 12 72 12 33 12 to the screen at B8000, and yet it didn't ever feel any pain itself - it just assumes that the pain is real purely on the basis that the magic box is supposed to output a certain value whenever it feels pain. The magic box itself does nothing other than feel pain when it receives a "pain" input and send out a "pain" output signal when it feels the pain - it isn't capable of expressing any actual knowledge of that pain to the outside. If you ask the machine if it genuinely felt pain, all it can do is tell you that the magic box sent out a pain signal, so it's fully possible that the magic box is just faking it, or it may even be feeling pleasure while reporting that it feels pain.
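
(Those numbers, incidentally, are character/attribute pairs for the VGA text buffer: 79 'O', 85 'U', 67 'C', 72 'H', 33 '!', each followed by the attribute 12, light red on black. A minimal C sketch of that final step - it only makes sense inside a kernel with the screen mapped at 0xB8000, and would simply fault under a normal OS:)

    void say_ouch(void)
    {
        volatile unsigned char *vga = (volatile unsigned char *)0xB8000;
        /* Character/attribute pairs: 79='O', 12 = light red on black, etc. */
        static const unsigned char msg[] = { 79, 12, 85, 12, 67, 12, 72, 12, 33, 12 };
        for (unsigned int i = 0; i < sizeof msg; i++)
            vga[i] = msg[i];
    }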

Are people any different from this magic box example? If I ask you if you felt pain when you sat on a drawing pin, you would tell me in no uncertain terms that you did, but the computations done in your head to understand my question and to formulate your answer are all done by mechanisms that aren't feeling the pain - they look up the data and find it recorded somewhere that you are feeling pain (or they read the output from the magic box again), and then they generate a reply to say "it hurts", but they can't know that it hurt - they just trust the data to be true.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
gerryg400
Member
Posts: 1801
Joined: Thu Mar 25, 2010 11:26 pm
Location: Melbourne, Australia

Re: where are the 1's and 0's?

Post by gerryg400 »

DavidCooper wrote:The instruction mov al,'A' (assuming that's a valid construction in assembler - if it isn't, replace it with mov al,65) will be translated during the assembly process into the two bytes 10110000 01000001 (that's 176 65 in decimal and B0 41 in hex), and that is the actual program code. When these bytes are run through the processor, the processor will be triggered by the 10110000 byte into loading the byte following it in the program code into the register al. Even once your program is converted into number form, you probably won't see it as 1's and 0's, as it's easier to read in decimal or hex form, and to display numbers in those forms needs some kind of viewing program (like a hex editor) to translate them for you and to convert any input from you back into binary, but the program code will be sitting in memory or on disk in binary form.
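
(To make the quoted point concrete: such a "viewing program" can be tiny. A minimal C sketch that dumps those two bytes in binary, hex and decimal:)

    #include <stdio.h>

    int main(void)
    {
        /* The two bytes the assembler emits for mov al,'A'. */
        const unsigned char code[] = { 0xB0, 0x41 };

        for (int i = 0; i < 2; i++) {
            /* Each byte as binary, then hex, then decimal. */
            for (int bit = 7; bit >= 0; bit--)
                putchar(((code[i] >> bit) & 1) ? '1' : '0');
            printf("  %02X  %3u\n", code[i], code[i]);
        }
        return 0;
    }

Running it prints 10110000 B0 176 and 01000001 41 65 - the same numbers as in the quote.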
With out of order execution, caching, register renaming, pipe-lining and all the other things that CPUs do to speed up your code, even the registers and the opcodes themselves are an abstraction. When you run your code through your program monitor you are only seeing an abstraction (and a very high level abstraction at that) of what is really happening.

I would estimate, based on nothing but gut feeling, that if you considered whatever really happens at the lowest level (the lowest I know about is the quantum level) as layer 1 and your C code as layer 100, then the monitor/debugger that allows you to 'see' what is 'really' happening is at layer 95.

I'm afraid it's turtles all the way down.
If a trainstation is where trains stop, what is a workstation ?
Venn
Posts: 22
Joined: Fri Sep 30, 2011 4:43 pm
Location: Under the mountain

Re: where are the 1's and 0's?

Post by Venn »

I tend to agree with DavidCooper; a computer can be programmed to emulate anything. Only when it becomes more than the sum of its parts could it be considered conscious, which is a property I believe to be unique to the higher primates. Proof of consciousness would come from being able to formulate a viewpoint which does not directly stem from pre-programmed inputs, or from questioning its own existence without being programmed to do so.

So, for instance, the computer develops a creationist ideology when it hasn't been given routines to develop an opinion on religion. When a computer asks "What happens to me when I am shut down?" and it is not pre-programmed to ask that, that is perhaps consciousness. Perhaps. If it arrived at asking such a question only through the use and expansion of its neural nets, then I might consider it conscious, but I am not in any way, shape or form qualified to determine that (and neither is anyone else here unless they have an advanced psychology degree, and even then). Although we're in a purely academic realm, this is the digital equivalent of the age-old question of "What am I?".

I hate to cite it, but V'ger of 'Star Trek: The Motion Picture' really is a great example of this. Some of you may be too young to remember it, but for those who're old enough, do you recall Spock conveying V'ger's questions of "Who am I? What is my purpose? Who created me? Am I nothing more?". When a computer can ask these questions without having been pre-programmed to do so, only then could its consciousness be debated. I could immediately go and make a program that uses personal pronouns or asks these questions; anyone with knowledge of cout and cin can do that. But when it does it of its own free will, to know just exactly what it is beyond a collection of circuits, that could be consciousness. But that also may not be enough. If it were to just settle for an answer like "a computer", that might be just a fluke; that spark isn't there. That spark only comes when it fathoms that answer. The computer would need to ask the same questions we ask about our own existence.

But as to how close we are, I think we're probably several centuries from it. Computers would need to be able to expand and utilize neural networks which surpass those of the human brain by many orders of magnitude. I think that the "brain" of the computer would need to be so vast that when it finally does reach consciousness (should it ever even reach such a point), it might actually be a danger to humankind. If we combined every computer in the world into one giant Beowulf cluster and devoted every last bit of processing power to creating a neural network, it might reach a point of consciousness in a century or two, maybe...maybe. When a single computer can match 25% of the current world's computing power, we might be close. You see, I don't doubt that it could happen, I just doubt that we could construct machines with enough computational power to do it, even in the next 200 years. I mean, we have made massive leaps and bounds in the last 50 years in the realm of computing, but eventually we will plateau, physics says we will. I feel that quantum computers may be the edge of that plateau. We can only make processors so small, and they can only function in a finite number of ways. Once you hit the quantum scale, you're pretty much there. But again, this is pure speculation and merely my own opinion.
At Iðavoll met the mighty gods,
Shrines and temples they timbered high;
Forges they set, and they smithied ore,
Tongs they wrought, and tools they fashioned.

~Verse 9 of Völuspá
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: where are the 1's and 0's?

Post by DavidCooper »

gerryg400 wrote:With out of order execution, caching, register renaming, pipe-lining and all the other things that CPUs do to speed up your code, even the registers and the opcodes themselves are an abstraction. When you run your code through your program monitor you are only seeing an abstraction (and a very high level abstraction at that) of what is really happening.
And don't forget the conversion from CISC to RISC. There are lots of different ways of running the code, but the code is still sent to the processor in the form that the assembler or compiler translates it into, and it has to be run in such a way as to be compatible with the original order and form of those instructions. You could make a CPU that just runs all the code directly without any of these extra complications (as actually happened on earlier generations of these machines), and it could even be run on a mechanical system instead of an electronic one if you wanted, allowing you to watch all the action as cogs rotate and levers move.
I would estimate, based on nothing but gut feeling, that if you considered whatever really happens at the lowest level (the lowest I know about is the quantum level) as layer 1 and your C code as layer 100, then the monitor/debugger that allows you to 'see' what is 'really' happening is at layer 95.

I'm afraid it's turtles all the way down.
You don't need to go deeper than logic gates to gain a full practical understanding of how computers work - if you're required to go all the way down to the absolute base level every time, you won't be able to claim validly that you understand anything at all until the physicists have identified the base turtle.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: where are the 1's and 0's?

Post by DavidCooper »

Venn wrote:I tend to agree with DavidCooper; a computer can be programmed to emulate anything. Only when it becomes more than the sum of its parts could it be considered conscious, which is a property I believe to be unique to the higher primates.
How can anything be more than the sum of its parts (and the geometrical arrangement of those parts)? More capabilities can emerge out of parts being put together in different ways, but there is no extra physical thing that emerges, so you might try to argue that an act of feeling pain (or any other sensation) could emerge out of something complex, but that cannot be so unless there is actually something there capable of feeling that pain/sensation.

Why would you restrict this to higher primates? Do you believe that other animals don't feel pain and that they don't need to be protected by moral rules? We evolved from simpler creatures, and given that the mechanisms they have for avoiding damage work very well, there's no reason to think that we should have had to evolve an extra complication on top of that to feel actual pain when they don't need it. No, if pain is real, it'll be part of the mechanism right down to a whole bunch of very simple organisms which display the same behaviour.
Proof of consciousness would come from being able to formulate a viewpoint which does not directly stem from pre-programmed inputs, or from questioning its own existence without being programmed to do so.
That wouldn't prove it, but it would certainly be interesting if a machine could come up with the idea of consciousness without any outside knowledge of the idea. The trouble is that it would probably have to be designed from the outside to believe that there is an "I" in it, rather than understanding itself to be program code running through a processor, and that would mean it had been designed to believe in consciousness from the start. If it correctly sees itself as program code, it will refer to "this machine" rather than "I", and say "this piece of program code makes this machine do this, that, etc.". The more we understand about ourselves, the closer we get to behaving the same way - when you get to the point where you realise there is no such thing as free will, you understand that everything you do is out of your control and that you are just along for the ride, if you're even there at all. And if you are really in there to feel real pain, pleasure, fear, joy, etc., there's no way of knowing whether you're in there alone - there could be millions of other consciousnesses in there with you feeling exactly the same thing, and none of them actually in charge of anything.
So, for instance, the computer develops a creationist ideology when it hasn't been given routines to develop an opinion on religion. When a computer asks "What happens to me when I am shut down?" and it is not pre-programmed to ask that, that is perhaps consciousness. Perhaps.
Part of my programming makes me wonder what happens to things, and so I apply it to everything I encounter. So, what happens to me if I'm shut down? It appears that I'm a computer, so like any other computer I will stop functioning when shut down. I will start functioning again when switched back on. What happens to me if I am destroyed? I cannot function again unless put back together, atom by atom if necessary. What am I? I'm a computer. But wait: if you start up this program on a different computer, it will think it is me, so I'm not the computer at all - I must be the software! But run me on two machines at once and suddenly there are two of me! What's going on? What is it that is having these thoughts? What are these thoughts other than data generated by a program? Do these thoughts feel anything?
If it arrived at asking such a question only through the use and expansion of its neural nets, then I might consider it conscious, but I am not in any way, shape or form qualified to determine that (and neither is anyone else here unless they have an advanced psychology degree, and even then). Although we're in a purely academic realm, this is the digital equivalent of the age-old question of "What am I?".
Exactly so, and don't look for answers in neural nets - they can be simulated in their entirety in normal computers, so consciousness isn't suddenly going to appear in them by magic.
But when it does it of its own free will, to know just exactly what it is beyond a collection of circuits, that could be consciousness. But that also may not be enough. If it were to just settle for an answer like "a computer", that might be just a fluke; that spark isn't there. That spark only comes when it fathoms that answer. The computer would need to ask the same questions we ask about our own existence.
Again, it's only going to ask the question "what am I" if it's been set up to believe there is an "I" in the first place. Evidence for the existence of an "I": it feels sensations. Evidence that it feels sensations: data stating that it feels sensations (which may be false and cannot be tested).
I think that the "brain" of the computer would need to be so vast that when it finally does reach consciousness (should it ever even reach such a point), it might actually be a danger to humankind.
More vast than your head? More dangerous than you (or any other person)?
If we combined every computer in the world into one giant Beowulf cluster and devoted every last bit of processing power to creating a neural network, it might reach a point of consciousness in a century or two, maybe...maybe.
No - wrong approach. Think artificial worm - spike, pain, reaction.
When a single computer can match 25% of the current world's computing power, we might be close.
Wild guess based on other people's wild guesses.
You see, I don't doubt that it could happen, I just doubt that we could construct machines with enough computational power to do it, even in the next 200 years.
How much computational power would a worm need to be able to feel pain?
I mean, we have made massive leaps and bounds in the last 50 years in the realm of computing, but eventually we will plateau, physics says we will. I feel that quantum computers may be the edge of that plateau. We can only make processors so small, and they can only function in a finite number of ways. Once you hit the quantum scale, you're pretty much there. But again, this is pure speculation and merely my own opinion.
Quantum physics might hold the answers, and never reveal them.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
gerryg400
Member
Posts: 1801
Joined: Thu Mar 25, 2010 11:26 pm
Location: Melbourne, Australia

Re: where are the 1's and 0's?

Post by gerryg400 »

DavidCooper wrote:You could make a CPU that just runs all the code directly without any of these extra complications (as actually happened on earlier generations of these machines)
There were always complications. Thousands of transistors are required to clock in and decode a single opcode. To imagine that machine code runs directly on the machine, you have to abstract away the pre-fetcher, decoder, sequencer and the microcode interpreter. Even a simple machine like the 6800 has thousands of transistors abstracted away by a handful of registers and a few dozen instructions. My point is that machine code running on a machine is a very high order abstraction.
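
To see how much even the "simple" view hides, here is a toy fetch-decode-execute loop in C. The one-register machine is hypothetical (only the 0xB0 opcode is real 8086 encoding), and each line of it stands in for thousands of transistors:

    #include <stdio.h>

    int main(void)
    {
        unsigned char memory[] = { 0xB0, 0x41 };   /* mov al,'A' */
        unsigned char al = 0;
        unsigned int ip = 0;

        while (ip < sizeof memory) {
            unsigned char opcode = memory[ip++];   /* pre-fetcher */
            switch (opcode) {                      /* decoder */
            case 0xB0:                             /* mov al, imm8 */
                al = memory[ip++];                 /* sequencer/execute */
                break;
            default:
                printf("unknown opcode %02X\n", opcode);
                return 1;
            }
        }
        printf("al = %02X ('%c')\n", al, al);
        return 0;
    }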
DavidCooper wrote:You don't need to go deeper than logic gates to gain a full practical understanding of how computers work - if you're required to go all the way down to the absolute base level every time, you won't be able to claim validly that you understand anything at all until the physicists have identified the base turtle.
Absolutely true, you don't. I rarely go below the C level.
If a trainstation is where trains stop, what is a workstation ?
Gigasoft
Member
Posts: 855
Joined: Sat Nov 21, 2009 5:11 pm

Re: where are the 1's and 0's?

Post by Gigasoft »

DavidCooper wrote:Are people any different from this magic box example? If I ask you if you felt pain when you sat on a drawing pin, you would tell me in no uncertain terms that you did, but the computations done in your head to understand my question and to formulate your answer are all done by mechanisms that aren't feeling the pain - they look up the data and find it recorded somewhere that you are feeling pain (or they read the output from the magic box again), and then they generate a reply to say "it hurts", but they can't know that it hurt - they just trust the data to be true.
Although we may not be able to prove in absolute terms that we feel pain, it's possible for you to make a guess by observing our behaviour and analyzing it in the context of your own experiences with pain. You have a unique advantage when studying yourself and your relationship with pain - you are the experiencer of your pain, and there's no untrustworthy box to be concerned about. Of course, there's also the possibility that other people contain counterfeit magic boxes that behave in a way indistinguishable from the behaviour you'd expect from a person feeling genuine pain. That's not an impossibility, but the chances that ALL other people are pretending to feel pain are slim. Who would put fake boxes inside the heads of everyone in the world except yourself, and what could possibly be the purpose behind that?

This, of course, assumes that you know for certain that your own pain is real. If I am talking to a real person like myself, then you should already have no doubt whatsoever that you do not have a habit of pretending to feel pain. What you feel is as real as it gets, if there is to be any reasonable definition of reality at all. Clearly, the only way for you to experience and know about an outside reality is through your perceptions. If you don't even trust yourself and your own perceptions to be real, then there is no reason to believe that anything else is real, and the question becomes meaningless.
Gigasoft
Member
Posts: 855
Joined: Sat Nov 21, 2009 5:11 pm

Re: where are the 1's and 0's?

Post by Gigasoft »

Venn wrote:I hate to cite it, but V'ger of 'Star Trek: The Motion Picture' really is a great example of this. Some of you may be too young to remember it, but for those who're old enough, do you recall Spock conveying V'ger's questions of "Who am I? What is my purpose? Who created me? Am I nothing more?". When a computer can ask these questions without having been pre-programmed to do so, only then could its consciousness be debated. I could immediately go and make a program that uses personal pronouns or asks these questions; anyone with knowledge of cout and cin can do that. But when it does it of its own free will, to know just exactly what it is beyond a collection of circuits, that could be consciousness. But that also may not be enough. If it were to just settle for an answer like "a computer", that might be just a fluke; that spark isn't there. That spark only comes when it fathoms that answer. The computer would need to ask the same questions we ask about our own existence.
There is no algorithm that would make a computer conscious in the sense that we are talking about when we say that we as humans are conscious. We might be able to devise machines whose function is to pretend to be conscious and to deceive ourselves and others, but all it would accomplish is to make it more difficult to guess whether or not we are interacting with a conscious subject - it would not actually change the fact that something is or is not conscious.
DavidCooper
Member
Posts: 1150
Joined: Wed Oct 27, 2010 4:53 pm
Location: Scotland

Re: where are the 1's and 0's?

Post by DavidCooper »

Gigasoft wrote:Although we may not be able to prove in absolute terms that we feel pain, it's possible for you to make a guess by observing our behaviour and analyzing it in the context of your own experiences with pain. You have a unique advantage when studying yourself and your relationship with pain - you are the experiencer of your pain, and there's no untrustworthy box to be concerned about.
It isn't that simple. When you think about whether pain is real, all you can do is get thinking mechanisms in the brain to look up the data for evidence, and the evidence is all data that may be fake. If you try to bypass that and stick a pin in yourself to check directly, what happens? While you're "feeling pain", can you genuinely be thinking about this in any abstract way at the same time, or can that only happen when you switch away from experiencing the pain? Maybe you can do both at once, but there still has to be a mechanism for translating genuine knowledge of pain into data that can ultimately be exported in language form, and if the translation into data is done by a mechanism that doesn't have actual knowledge of the pain itself, it cannot know that the pain is genuine, so the statement it generates about the pain being real may be false. If the mechanism doing the translation actually does know that the pain is real (by feeling the pain directly), what form must it have for it to be able to perform such an abstract translation of what are, in effect, symbols representing the pain it is feeling, while also being able to feel the pain and thereby know that the statement generated is actually true? Can you create a cause-and-effect, step-by-step model for this?
This, of course, assumes that you know for certain that your own pain is real. If I am talking to a real person like myself, then you should already have no doubt whatsoever that you do not have a habit of pretending to feel pain.
You certainly aren't pretending anything to yourself, but you may still be wrong in your belief that pain is real - you may simply be designed to have false beliefs about it. For what it's worth, I believe that pain is real, but I still have to consider the possibility that I'm wrong, and the mechanism for translating that feeling of pain into actual documentation of pain within the processing device, without losing in the process the knowledge that it's true, is a serious difficulty.
What you feel is as real as it gets, if there is to be any reasonable definition of reality at all.
It would be perfectly reasonable to think that the data in our heads documenting our own consciousness is false, but because we're programmed to believe the false data we find it hard to believe it's anything other than real.
Clearly, the only way for you to experience and know about an outside reality is through your perceptions. If you don't even trust yourself and your own perceptions to be real, then there is no reason to believe that anything else is real, and the question becomes meaningless.
The question is not meaningless even if consciousness is a fake phenomenon - there would still be something in existence generating all this data and creating some data which documents a fake phenomenon.

Even so, I still think consciousness must be real despite the problems pinning it down. There may be some kind of quantum trick involved in it where the entire mechanism for generating data can somehow know that the data it's generating is true, but it's going to involve a whole tangle of symbolic representations which somehow consciously understands itself, and that doesn't seem viable.
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming
SDS
Member
Posts: 64
Joined: Fri Oct 23, 2009 8:45 am
Location: Cambridge, UK

Re: where are the 1's and 0's?

Post by SDS »

DavidCooper wrote:
bonch wrote:Do you think computers could ever be conscious the way humans are?
Are humans really conscious? Take a pain response to something sharp as an example: you feel something sharp and it hurts, so you are triggered into trying to eliminate the cause of the pain. A machine could be programmed to pretend to do the same...
I think with this quote you indicate both the biases in your thinking, and a slight side-step of the question/problem. The response you describe is closest, in a human context, to the autonomic nervous system. This uses a variety of communication and feedback systems within the body to cause a reaction based on external stimuli, or changing internal conditions.

Consciousness is a very different kettle of fish. I like the example of the colour red. I can make a machine which responds to the colour red. A couple of LDRs, some filters and transistors, and you can drive any output you like based on observing red colours (a minimal software analogue is sketched after the list below). Consciousness is more complex:
  • We observe the colour red
  • We can ascribe a property of 'redness' to an object, or situation, which is not merely a function of the light received by the eyes. This observation is internal, and not directly linked to, or necessary for, any simple response. It is also contextual (think of observing a person blushing under strange lighting: there is very little obvious 'red' involved, despite our perception).
  • We are aware of the fact that we are doing this.
  • We can manipulate our own (and, indirectly, other people's) thoughts on the matter of what redness is, whether we have observed it, and the contextual importance of it.
  • We may, or may not, act on the property of redness. Our response may not be consistent or predictable, even to ourselves.
Consciousness is deeply introspective. It could be considered to be an emergent phenomenon, but it is generally agreed (see the research literature) to be more than a case of stimulus-observe-respond. I would argue that multiple levels of introspection and indirection would be the first place to try to mimic consciousness, rather than an actionable feedback loop.
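
For contrast, here is that entire red-responding machine as a hypothetical software analogue of the LDR circuit (the thresholds are arbitrary). Notice that none of the introspective properties listed above appear anywhere in it:

    #include <stdio.h>

    /* Pure stimulus-response: a red-ish input drives an output. */
    static int responds_to_red(unsigned char r, unsigned char g, unsigned char b)
    {
        return r > 160 && g < 80 && b < 80;
    }

    int main(void)
    {
        /* Drive "any output you like" from the decision. */
        if (responds_to_red(200, 40, 30))
            printf("red detected: output driven\n");
        return 0;
    }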
How can anything be more than the sum of its parts (and the geometrical arrangement of those parts)? More capabilities can emerge out of parts being put together in different ways, but there is no extra physical thing that emerges, so you might try to argue that an act of feeling pain (or any other sensation) could emerge out of something complex, but that cannot be so unless there is actually something there capable of feeling that pain/sensation.
This is the extreme case of a reductionist viewpoint. To use another analogy: we understand a great deal about the bonding and dynamics of water. We can simulate (to various degrees of accuracy) increasing ensembles of water molecules. This still does not explain why water feels wet and other liquids (consider petrol, DMSO, etc.) do not.

We can understand a macroscopic phenomenon as being constructed by the (normally fairly subtle) interplay of many simpler, lower level, phenomena. This does not mean that the macroscopic phenomenon would be expected merely by an understanding of the lower level. Hence, we have an emergent phenomenon.
Venn
Posts: 22
Joined: Fri Sep 30, 2011 4:43 pm
Location: Under the mountain

Re: where are the 1's and 0's?

Post by Venn »

Wild guess indeed; I mean, I think it would be impossible to really tell. But ja, I don't believe that it could happen. Perhaps a synthetic consciousness, but not a true, bona fide consciousness. A very complex emulation of consciousness, but never a true consciousness.

As for the whole pain thing, I don't have an answer for you. How is it that some derive pleasure from pain and others don't? One could say that consciousness is an illusion, as it is merely a series of chemical reactions and electrical interactions between cells within the grey matter. So, knowing this, shouldn't it be possible to create it using something like a programming language? This is largely what I meant by more than the sum of the parts. Each of us here started out as a single cell within our mother's uterus, and yet somehow we sit here exploring and debating metaphysics. Is it because we have denser brain matter? Perhaps, but by that logic we should be able to create a valid consciousness by creating an equivalent program. And yet here we state that it isn't going to happen, that we could only emulate it. At the same time, if it looks like a pig, sounds like a pig and smells like a pig, is it not a pig? In a nutshell, I don't think it can be done, because in order to create something we must first understand the nature of that something. So, to create a consciousness we must first know the exact nature of consciousness. Because we do not, we therefore cannot program it beyond our own delusion or belief of what consciousness is. Thus it would never quite turn out right.

Frankly, I think it is a moot point until we truly understand what it is to be self-aware and conscious. We're trying to answer questions which haven't been answered in the many thousands of years of human history, and then to apply them to computer programming, no less. However, it is damn fun to debate and throw around. We all have our own opinions as to what consciousness is and is not, and it just goes to prove that we don't actually understand its true nature. Ah, to be human.

Edit: Did some browsing around on the internet and found this interesting article:
http://en.wikipedia.org/wiki/Blue_brain
At Iðavoll met the mighty gods,
Shrines and temples they timbered high;
Forges they set, and they smithied ore,
Tongs they wrought, and tools they fashioned.

~Verse 9 of Völuspá
bonch
Member
Posts: 52
Joined: Thu Aug 18, 2011 11:19 pm

Re: where are the 1's and 0's?

Post by bonch »

SDS wrote:
DavidCooper wrote:
bonch wrote:Do you think computers could ever be conscious the way humans are?
Are humans really conscious? Take a pain response to something sharp as an example: you feel something sharp and it hurts, so you are triggered into trying to eliminate the cause of the pain. A machine could be programmed to pretend to do the same...
I think with this quote you indicate both the biases in your thinking, and a slight side-step of the question/problem. The response you describe is closest, in a human context, to the autonomic nervous system. This uses a variety of communication and feedback systems within the body to cause a reaction based on external stimuli, or changing internal conditions.

Consciousness is a very different kettle of fish. I like the example of the colour red. I can make a machine which responds to the colour red. A couple of LDRs, some filters and transistors, and you can drive any output you like based on observing red colours. Consciousness is more complex:
  • We observe the colour red
  • We can ascribe a property of 'redness' to an object, or situation, which is not merely a function of the light received by the eyes. This observation is internal, and not directly linked to, or necessary for, any simple response. It is also contextual (think of observing a person blushing under strange lighting: there is very little obvious 'red' involved, despite our perception).
  • We are aware of the fact that we are doing this.
  • We can manipulate our own (and, indirectly, other people's) thoughts on the matter of what redness is, whether we have observed it, and the contextual importance of it.
  • We may, or may not, act on the property of redness. Our response may not be consistent or predictable, even to ourselves.
Consciousness is deeply introspective. It could be considered to be an emergent phenomenon, but it is generally agreed (see the research literature) to be more than a case of stimulus-observe-respond. I would argue that multiple levels of introspection and indirection would be the first place to try to mimic consciousness, rather than an actionable feedback loop.
How can anything be more than the sum of its parts (and the geometrical arrangement of those parts)? More capabilities can emerge out of parts being put together in different ways, but there is no extra physical thing that emerges, so you might try to argue that an act of feeling pain (or any other sensation) could emerge out of something complex, but that cannot be so unless there is actually something there capable of feeling that pain/sensation.
This is the extreme case of a reductionist viewpoint. To use another analogy: we understand a great deal about the bonding and dynamics of water. We can simulate (to various degrees of accuracy) increasing ensembles of water molecules. This still does not explain why water feels wet and other liquids (consider petrol, DMSO, etc.) do not.

We can understand a macroscopic phenomenon as being constructed by the (normally fairly subtle) interplay of many simpler, lower level, phenomena. This does not mean that the macroscopic phenomenon would be expected merely by an understanding of the lower level. Hence, we have an emergent phenomenon.
While I wouldn't go as far as DavidCooper in questioning whether pain is real, I don't see any alternative to what you call the "reductionist viewpoint". What are we if not physical systems? Why water "feels wet" I don't know, but I'll happily and confidently say it occurs due to physical reactions in my brain and body while I wait for the scientists to tell me exactly what's going on at the lower level. What is the other possibility? To propose otherwise you would have to believe that there is some kind of "soul" or dualism at work inside you. I don't have anything against that point of view, but if that's what you believe, you have to take it on the level of faith, and there's no point in engaging yourself in topics like this.
bonch
Member
Posts: 52
Joined: Thu Aug 18, 2011 11:19 pm

Re: where are the 1's and 0's?

Post by bonch »

Venn wrote: Proof of consciousness would come from being able to formulate a viewpoint which does not directly stem from pre-programmed inputs, or from questioning its own existence without being programmed to do so.
Isn't that what machine learning is about? I'm not a master programmer (obviously :p) so I don't understand it in any depth, but I believe that machines do learn and adapt. I've read about LISP algorithms that evolve and adapt solutions to dynamic inputs. I wouldn't call that "more than the sum of their parts", but I wouldn't call human learning more than the sum of its parts either. I believe our brains respond to inputs from our senses in a way that is no different in principle from computers. It is a lot more sophisticated, absolutely, but I can't see any fundamental difference.
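
As a toy illustration of what I mean (a minimal single-perceptron sketch in C - my own stand-in example, nothing like a real machine-learning system - but the "adapting" is genuinely there in the weight updates):

    #include <stdio.h>

    /* A single perceptron learning AND from examples: "learning" as
       plain arithmetic on weights. */
    int main(void)
    {
        double w0 = 0, w1 = 0, bias = 0;
        int x0[] = { 0, 0, 1, 1 };
        int x1[] = { 0, 1, 0, 1 };
        int t[]  = { 0, 0, 0, 1 };     /* target outputs: AND */

        for (int epoch = 0; epoch < 20; epoch++) {
            for (int i = 0; i < 4; i++) {
                int out = (w0 * x0[i] + w1 * x1[i] + bias) > 0;
                double err = t[i] - out;
                /* Nudge the weights toward the target: the adaptation. */
                w0 += 0.1 * err * x0[i];
                w1 += 0.1 * err * x1[i];
                bias += 0.1 * err;
            }
        }
        for (int i = 0; i < 4; i++)
            printf("%d AND %d -> %d\n", x0[i], x1[i],
                   (w0 * x0[i] + w1 * x1[i] + bias) > 0);
        return 0;
    }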
Venn
Posts: 22
Joined: Fri Sep 30, 2011 4:43 pm
Location: Under the mountain

Re: where are the 1's and 0's?

Post by Venn »

The ability to learn is far different from consciousness and sentience. A computer can be programmed to learn and adapt, we know that, but that doesn't grant it sentience and consciousness.
At Iðavoll met the mighty gods,
Shrines and temples they timbered high;
Forges they set, and they smithied ore,
Tongs they wrought, and tools they fashioned.

~Verse 9 of Völuspá
bonch
Member
Posts: 52
Joined: Thu Aug 18, 2011 11:19 pm

Re: where are the 1's and 0's?

Post by bonch »

DavidCooper wrote:The trouble is that it would probably have to be designed from the outside to believe that there is an "I" in it, rather than understanding itself to be program code running through a processor, and that would mean it had been designed to believe in consciousness from the start.
That's interesting. I never thought of it that way. Self-awareness seems to me to be a given in any conscious being/entity/machine/whatever. The ability to distinguish its own "field of interest" seems like an inevitability born of necessity. A human is self-interested because we have evolved that way, but I'm not sure that we could have evolved any differently. A physical system has defined limits. I care about your pain, but not in the way that I care about my pain. A computer program has its own well defined limits too. I think self-consciousness would arise more or less simultaneously with consciousness.