My idea with human-like-AI


Post by jnc100 »

The main problem with creating an artificial intelligence is actually defining what 'intelligence' means in the first place. The human brain is one implementation of intelligence (although some would say otherwise :wink:). This does not mean, however, that all attempts to create artificial intelligence should be modelled on the human brain. There are many other forms of intelligence if we define it as the ability to solve problems, e.g. the complex interactions of groups of people (http://en.wikipedia.org/wiki/The_Wisdom_of_Crowds) or the evolution of a species to overcome certain challenges.

For example,
MessiahAndrw wrote:You could have a neural network to aid in repairing computers. For example, trash an operating system in a VM, then run your neural network. Somehow, work out what percentage of the operating system is fixed. The neural network will use distributed computing (such as Google Compute) to run millions of instances at once, and each instance will have 30 minutes to do random actions in the virtual machine. After 30 minutes, all of the instances will be replaced by a newer generation, which will inherit the code of the instance with the highest "repair" percentage.
is describing a form of intelligence based on genetic algorithms rather than a neural net.

A neural net (in computing) is an assortment of similar objects, each connected to others of the same type by a number of one- or two-way, digital or analogue signals. Whilst each object is a rather rough approximation of a typical neurone (of which there is no such thing), the system as a whole does not reproduce the architectural design of the human brain as a whole.

Learning systems, on the other hand, are currently widely implemented. My mobile phone, through its 'predictive text' function, learns which words I use most frequently and displays them at the top of the list when there is an ambiguity as to which word to use for a certain combination of key presses. At the moment, it seems to think I say 'GP' more often than 'is', but hey.
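That frequency-learning trick can be sketched in a few lines. This is a toy model of T9-style predictive text, not any real phone's implementation, and all names are invented. Amusingly, 'is' and 'gp' really do share the same key sequence (4, 7), which is exactly the ambiguity described above:

```python
# Toy T9-style predictive text: each keypress sequence maps to
# candidate words; candidates are ranked by how often the user
# has actually picked them.
from collections import defaultdict

KEYS = {c: d for d, letters in {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}.items() for c in letters}

def keyseq(word):
    """Translate a word into the digits the user would press."""
    return "".join(KEYS[c] for c in word.lower())

class Predictor:
    def __init__(self, vocab):
        self.counts = defaultdict(int)          # word -> times chosen
        self.by_seq = defaultdict(list)         # digits -> candidate words
        for w in vocab:
            self.by_seq[keyseq(w)].append(w)

    def suggest(self, seq):
        # Most-used word first when several share the same key sequence.
        return sorted(self.by_seq[seq], key=lambda w: -self.counts[w])

    def picked(self, word):
        self.counts[word] += 1                  # learn the user's preference

p = Predictor(["is", "gp", "in", "go"])
p.picked("gp")
print(p.suggest("47"))    # 'gp' now outranks 'is'
```

Every time the user accepts a suggestion, `picked` bumps its count, so the ranking drifts towards the user's actual vocabulary.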

As regards pattern matching, different parts of the brain, associated with different senses, perform their own pattern matching. There is currently a large amount of interest in the functioning of the visual cortex (at the back of the brain for some strange evolutionary reason). It is believed to work with the initial optic nerves/tracts from the retina reaching the outer layers of the cortex via a relay in the thalamus. The outermost layers recognise simple patterns (lines). The next recognise combinations of lines (i.e. corners), with the picture building up more and more the deeper in you go. It must be noted, however, that these layers are configured differently from one another; in other words, if you merely throw a bunch of neurones into a dish they will not form an intelligent brain without organising the correct relationships between each other.
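The layered build-up (lines first, then corners from lines) can be illustrated with a toy two-layer detector. This is only a cartoon of the idea, nothing like real cortical circuitry:

```python
# Toy two-layer "visual cortex": layer 1 marks cells lying on line
# segments; layer 2 finds corners where a horizontal and a vertical
# segment meet. The grid and thresholds are invented for illustration.

GRID = [
    "#....",
    "#....",
    "###..",
    ".....",
]

def at(g, r, c):
    """True if (r, c) is inside the grid and filled."""
    return 0 <= r < len(g) and 0 <= c < len(g[0]) and g[r][c] == "#"

def detect_lines(g):
    """Layer 1: cells that sit on a horizontal or vertical run of 3."""
    horiz, vert = set(), set()
    for r in range(len(g)):
        for c in range(len(g[0])):
            if at(g, r, c) and at(g, r, c + 1) and at(g, r, c + 2):
                horiz |= {(r, c), (r, c + 1), (r, c + 2)}
            if at(g, r, c) and at(g, r + 1, c) and at(g, r + 2, c):
                vert |= {(r, c), (r + 1, c), (r + 2, c)}
    return horiz, vert

def detect_corners(g):
    """Layer 2: a corner is a cell on both kinds of run."""
    horiz, vert = detect_lines(g)
    return sorted(horiz & vert)

print(detect_corners(GRID))   # the corner of the L shape: [(2, 0)]
```

The point of the layering is that layer 2 never looks at raw pixels, only at what layer 1 reported, mirroring the "combinations of combinations" idea in the paragraph above.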

So to go back to the original question, would it be possible to have a user interface which learns what the user wants?

Yes, in theory. You would need two systems:

1) identify the user's intentions at a particular point in time, understanding that there is often a variation in the ways a user can express a particular intention. There is a lot of research at the moment into various recognition technologies, e.g. speech recognition, face recognition, fingerprint recognition. I suppose a similar principle could be used to determine what the user actually wants to do when he clicks twice here, moves the mouse there and so on.

2) determine the optimum response to the user's intentions, which is also not always obvious. Various genetic programming techniques can help here, with the user (at least initially) reinforcing a good decision and downgrading a bad one (similar to a Bayesian spam filter).
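A minimal sketch of point 2, with entirely hypothetical action names: candidate responses to a recognised intention carry weights, and the user's feedback reinforces good choices and downgrades bad ones, loosely like training a spam filter:

```python
# Toy reinforcement of responses: sample an action in proportion to
# its learned weight; user feedback multiplies the weight up or down.
import random

class ResponseChooser:
    def __init__(self, actions):
        self.weights = {a: 1.0 for a in actions}

    def choose(self):
        # Sample in proportion to learned weight.
        actions = list(self.weights)
        return random.choices(actions, [self.weights[a] for a in actions])[0]

    def feedback(self, action, good):
        # Reinforce a good decision, downgrade a bad one.
        self.weights[action] *= 1.5 if good else 0.5

chooser = ResponseChooser(["open_file", "show_menu", "do_nothing"])
for _ in range(50):
    a = chooser.choose()
    chooser.feedback(a, good=(a == "open_file"))   # the user likes this one

print(chooser.weights)   # "open_file" ends up with the largest weight
```

After a few dozen interactions, the preferred response dominates the sampling; the other two decay every time they are tried, which is the "at least initially" reinforcement described above.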

So the technology is out there, albeit in early stages. I suppose the difficulty is getting them to work together and (at least from many corporations' point of view) making it financially viable.

In my own opinion, I do not think that attempting to emulate the human brain is a sensible way to implement intelligence in a computer program. And besides, think of the ethical implications if we actually managed to create a Data!

Regards,
John.

Post by earlz »

Meh, well, I've given up on trying to make something really "learn" (though it still interests me). Now I'm working on a thingy using evolution..

There are thousands of these little robots in the program; the robots' brains are randomly generated from a logical opcode set. The kicker is that if they don't get food within so many "turns", they die and another randomly generated robot takes their place.
I'm giving the robots some pretty rich control over themselves and the environment. For instance, they can actually attack one another (and after one dies, the attacker can eat the dead victim), and they can mate (though some restrictions apply with that bit).

I'm really curious as to how this thing will turn out...
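A minimal sketch of the scheme earlz describes: random opcode "brains", a food deadline, and the dead replaced by fresh random robots each generation. The opcode set and world rules here are invented for illustration, and combat/mating are left out:

```python
# Each robot's brain is a random list of movement opcodes. A robot
# that reaches the food within the turn limit survives; the rest die
# and are replaced by newly generated random brains.
import random

OPS = ["up", "down", "left", "right"]
WORLD, FOOD, TURNS = 8, (4, 4), 20     # grid size, food position, deadline

def random_brain(length=16):
    return [random.choice(OPS) for _ in range(length)]

def run(brain):
    """Execute the opcode list; True if the robot reaches the food."""
    x, y = 0, 0
    for turn in range(TURNS):
        op = brain[turn % len(brain)]
        if op == "up":    y = max(0, y - 1)
        if op == "down":  y = min(WORLD - 1, y + 1)
        if op == "left":  x = max(0, x - 1)
        if op == "right": x = min(WORLD - 1, x + 1)
        if (x, y) == FOOD:
            return True
    return False

population = [random_brain() for _ in range(1000)]
survivors = [b for b in population if run(b)]
# Dead robots are replaced by new random brains for the next generation.
population = survivors + [random_brain() for _ in range(1000 - len(survivors))]
print(f"{len(survivors)} of 1000 robots found food this generation")
```

With pure replacement-by-random this is closer to repeated lottery than evolution; earlz's mating step is what would let successful code actually propagate.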

btw the original idea was by kmcguire..I'm just wanting to make my own! lol

Post by ucosty »

Just don't let them out on to the internet, or the whole world will implode :roll:
The cake is a lie | rackbits.com

Re: My idea with human-like-AI

Post by bubach »

I know it's an old thread, but I've been thinking more and more about AI lately. It's true that the brain has an enormous number of neurons and synapses. To fully understand the human brain, we might have to wait hundreds or thousands of years.

But... I'm more interested in how the brain's basic "algorithms" work than in each individual neuron. There are so many sub-systems to take into account. Many of them we can discard as not being essential to our intelligence.

Currently a dog or monkey can be taught to do lots more (in a sense at least) than any AI.

For example: language, and how it's connected to thought. Animals do not have complex languages, only the most basic sounds to indicate danger, fear, hunger and so on. (Very much like a newborn baby.) Yet animals set up goals and do "complex" tasks to achieve them.

I've been thinking that the "universal" language for thought has to be emotions, but how to define them in a computer? I've come to the conclusion that everything we _ever_ experience must be stored as a unique emotion, somewhat bundled/categorized with similar ones previously experienced. Language would work like some sort of "tags" for the emotions that we want to convey.

The next question would be how some emotions come to be thought of as good/bad/neutral. I think it all depends on some sort of set list of goals we have; let me explain further...

Babies haven't experienced much at all, and possibly don't even have the knowledge to understand hunger as a bad thing. This is all just theory, but perhaps when the first experience of feeding takes care of the discomfort of hunger, it's added to the list of primary goals (it might also be programmed in the genes as a first objective). As soon as they have a goal not to feel hungry, and associate the emotion/feeling of hunger with something that goes against that goal, it's instantly marked as a bad experience/emotion.

Experiences that don't stand out, the ones we have all day when doing nothing, don't really correspond to any goals and therefore don't carry any negative or positive feelings. As we grow older and experience more and more things, our list of goals gets more and more complex, organised into long term, short term and so on. More complex goals can be connected to a more distinct set of emotions and basically more advanced thinking. With the help of language to tag and categorise all these "neutral", bad and good emotions further, thinking becomes abstract and complex.

Perhaps when we "hear" our inner thoughts, it's just the emotions triggering the language areas/tags connected to them? I've got loads of other ideas, thoughts and even notes on different aspects of intelligence and brain function. Just for the sake of trying to "figure it out" :lol:
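bubach's theory above can be caricatured as a data structure: every experience is stored with an emotion "tag", and the tag's valence (good/bad/neutral) is derived from the current goal list. This is a toy model of the post's speculation, not established neuroscience, and all the names are invented:

```python
# Goal list: goal -> importance. An experience is scored by how much
# it helps (+) or hurts (-) each goal it touches; the sign of the
# total decides the emotional "tag" it is stored with.

GOALS = {"avoid_hunger": 1.0, "stay_warm": 0.5}

def valence(experience_effects):
    """experience_effects: goal -> how much the experience helped (+) or hurt (-)."""
    score = sum(GOALS.get(g, 0) * effect
                for g, effect in experience_effects.items())
    if score > 0:
        return "good"
    if score < 0:
        return "bad"
    return "neutral"   # touched no goal: the forgettable background of the day

memory = []  # experiences bundled with their derived emotion tag
memory.append(("feeding", valence({"avoid_hunger": +1})))
memory.append(("hunger", valence({"avoid_hunger": -1})))
memory.append(("watching clouds", valence({})))
print(memory)
```

The "as we grow older" part of the theory would correspond to `GOALS` growing and splitting into short- and long-term entries, so the same experience can acquire a different valence later in life.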


I'm rambling... I haven't even been active here for years, but it's the one place I know where discussions of this sort aren't viewed as total lunacy, hahaha..

Gimme your thoughts and ideas, I'm always interested in hearing new AI/brain theories! If only I had a couple of million, 10 clones of myself and lots of free time to try and test some concepts for new AI :twisted:
"Simplicity is the ultimate sophistication."
http://bos.asmhackers.net/ - GitHub

Re: My idea with human-like-AI

Post by Solar »

berkus wrote:You probably don't have babies, but I can assure you they are programmed to suck milk before they realise what hunger is.
Excellent angle to point out why AI research is so difficult. As both a biologist and a father of two, I can tell you that the first feeding of a baby has very little to do with hunger. Actually, in most cases the mother doesn't even lactate yet at that time. (Babies routinely lose about 10% of their weight during the first days, and aren't even hungry right after birth.) It is about tactile feelings (skin, warmth, heartbeat: the mother, different but the same as what was felt for the last nine months, is still here). It's about satisfying the suckling reflex, which in turn triggers the reward system of the brain, comforting the newborn after the trauma of birth (and setting the stage for the "real" feeding that is yet to come). It's also about triggering the lactate glands to start production (in combination with a number of hormones triggered by the birth process itself).

And a couple of other things.

Even simple things in real life are complex like that, and the simplest of mammals do it right out of raw instinct.

The average artificial "intelligence" would either have to A) be told all those things to make the right decision, which is difficult because even human scientists are not aware of all the things involved and their relative importance; B) go through the AI equivalent of millennia of trial & error evolution to build the equivalent of "instinct", which is again difficult because to make the emulation work, some scientists have to set the proper "environmental pressure parameters" (see A); or C) be told that "a newborn has to suckle" is "right" because the AI programmer "knows" it is right, which doesn't help a bit when it comes to improvising in a has-not-happened-before situation, the classic weakness of AIs.
There's research to prove (or disprove) that basic intelligence can be reproduced by merely copying neurons and their connections, and as far as I can see it turns out to be true.
I just recently read that "neuronal networks" have gone out of fashion in AI research recently. They were unable to evolve beyond the simplest of tasks. I blame B) above.
Every good solution is obvious once you've found it.

Re: My idea with human-like-AI

Post by Combuster »

I also blame the fact that external intervention is needed to train a neural network - you need a form of metaprogramming for a neural network to decide if another neural network is fit - basically what we would do when making a plan before executing it (if ever).
For the fun of it, I would like to propose that consciousness would require a system that can theoretically go into metalevels indefinitely.

On another note, do they have any idea how many neurons are in a brain, and how many the average scientific network contains? I believe there's an order of magnitude difference between them.
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]

Re: My idea with human-like-AI

Post by Solar »

Combuster wrote:On another note, do they have any idea how many neurons are in a brain, and how many the average scientific network contains? I believe there's an order of magnitude difference between them.
/me took a quick trip to Wikipedia.

2 * 10^10 neurons in the average human cortex - not counting cerebellum, diencephalon, brainstem, spinal cord etc.

Yes, they did build artificial neural networks in that order of magnitude.

Re: My idea with human-like-AI

Post by Zacariaz »

Thinking back to when this thread was first started, I do believe it was around that time that I learned about what I was sure would revolutionise the world of AI.

The man in question was/is of course Jeff Hawkins. Lots of videos can be found on YouTube and such, and further information can be obtained from the website: www.numenta.com

It all seemed so obvious... YES! That's the way to do it!

Now however, several years have passed, and not much has happened. Not that the project is at a standstill, but no big headlines.

The software behind it is free to use and I've tried it, but I lack either the understanding or the talent.


So, the future of AI?

Some would say: Now wait just a minute, didn't you hear about Watson? Well, in my mind that has little to do with intelligence.


I don't know about all of you, but I am disappointed.
This was supposed to be a cool signature...

Re: My idea with human-like-AI

Post by Solar »

Zacariaz wrote:I don't know about all of you, but I am disappointed.
I am delighted. Given how little our moral and ethical development, as a race, has kept up with the technological advances of, say, the last couple of millennia, I'd say we cannot even handle our own intelligence safely, let alone some artificial (super-)intelligence.

Re: My idea with human-like-AI

Post by Zacariaz »

Solar wrote:
Zacariaz wrote:I don't know about all of you, but I am disappointed.
I am delighted. Given how little our moral and ethical development, as a race, has kept up with the technological advances of, say, the last couple of millennia, I'd say we cannot even handle our own intelligence safely, let alone some artificial (super-)intelligence.
Hehe, I see what you mean, but I do think that our definitions of intelligence may differ slightly.

When I think AI, to take an example:

I have this device which I can feed integers, preferably of infinite size, and it then outputs a boolean value of its own choice. The point then being that if I feed it, e.g., exclusively prime numbers, at some point it will learn the connection and the output will of course reflect that.

The point being that I get this device to perform a task which humans can't do and even without telling it how.

It may not be intelligence in the normal sense of the word, but it is that aspect which intrigues me and I fail to see how safety would be an issue in this regard. ;)
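The "device" can be sketched as a simple hypothesis-elimination learner: it keeps a pool of candidate predicates about the input stream and discards any predicate the inputs violate. The predicate pool here is invented for illustration; a real system would need a vastly richer hypothesis space:

```python
# Feed the device integers; it eliminates hypotheses inconsistent
# with what it has seen, and its boolean output for a new number
# says whether that number fits every surviving hypothesis.

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

CANDIDATES = {
    "odd":    lambda n: n % 2 == 1,
    "even":   lambda n: n % 2 == 0,
    "prime":  is_prime,
    "square": lambda n: int(n ** 0.5) ** 2 == n,
}

class Device:
    def __init__(self):
        self.alive = dict(CANDIDATES)

    def feed(self, n):
        # Discard every predicate the observed stream violates.
        self.alive = {name: p for name, p in self.alive.items() if p(n)}

    def output(self, n):
        # True if n fits all surviving hypotheses about the stream.
        return all(p(n) for p in self.alive.values())

d = Device()
for n in [3, 5, 7, 11, 13]:    # feed it exclusively primes
    d.feed(n)
print(sorted(d.alive))          # which hypotheses survived?
```

Note that after seeing only odd primes it also keeps the "odd" hypothesis, so it would reject 2; an honest illustration of how little a learner can infer from a narrow stream without being told what to look for.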

Re: My idea with human-like-AI

Post by bewing »

Bah. This is all a bunch of BS. I would think that more people here would understand the concept behind a "universal computer" in the Turing sense. All universal computers are interchangeable. Universal computers are the hardware that run all algorithms. A neural net is just another form of universal computer. It cannot do anything that any other universal computer cannot do. It is just a fad. Understanding how the hardware works does not magically generate the algorithm or the software for you. Intelligence is SOFTWARE. You will never understand intelligence by messing with HARDWARE (ie. neural nets, transistors, real neurons, etc.). The "intelligence" algorithm can run on ANY universal computer. Try taking apart your laptop chip by chip, and then showing me where the "linux" is. Duh.

Re: My idea with human-like-AI

Post by Zacariaz »

bewing wrote:Try taking apart your laptop chip by chip, and then showing me where the "linux" is. Duh.
I'd pick up the hard drive and point to that, and if that's not enough, I'll point to the sectors where the kernel is stored. It's not really that hard.

We know a whole lot about how the brain works on the physical level, much more in fact than most people realise, but that does not explain how the brain works as a whole. Just as I don't claim to understand how linux works just because I know where it's stored, which components are needed for it to work or even know the binary data by heart.




And so on and so forth.

Re: My idea with human-like-AI

Post by NickJohnson »

bewing wrote:Bah. This is all a bunch of BS. I would think that more people here would understand the concept behind a "universal computer" in the Turing sense. All universal computers are interchangeable. Universal computers are the hardware that run all algorithms. A neural net is just another form of universal computer. It cannot do anything that any other universal computer cannot do. It is just a fad. Understanding how the hardware works does not magically generate the algorithm or the software for you. Intelligence is SOFTWARE. You will never understand intelligence by messing with HARDWARE (ie. neural nets, transistors, real neurons, etc.). The "intelligence" algorithm can run on ANY universal computer. Try taking apart your laptop chip by chip, and then showing me where the "linux" is. Duh.
Err... the entire point of Turing universality is that all hardware can be simulated by software and vice versa. Therefore, yelling at people for working on what you call hardware instead of what you call software is ironically self-contradictory. Just because we know that there exists a computer (hardware-implemented or software-implemented) that has what we consider intelligence, because we are Turing-complete and have intelligence, does not mean that we have a computer that has what we consider intelligence. We're still trying to construct that, and neural nets are one of many ways in which we're trying to do it: it's just a technique.

Re: My idea with human-like-AI

Post by qw »

The only proven way for intelligence to arise is by evolution. So perhaps we could redo evolution somewhat. Artificial evolution has been proven to work on a small scale.

Re: My idea with human-like-AI

Post by earlz »

I'm usually optimistic, but with AI, all I've found is failures.

I think that until we can somehow disassemble the brain and find out how it works, and see The Big Picture, there is no way we will create AI, and even then our hardware may be too slow to run the software at any usable speed. Until then, our only chance at AI is genetics and evolution. And of course, brute-forcing an algorithm for intelligence would take an exponential amount of time.