MessiahAndrw wrote:
> You could have a neural network to aid in repairing computers. For example, trash an operating system in a VM, then run your neural network. Somehow, work out what percentage of the operating system is fixed. The neural network will use distributed computing (such as Google Compute) to run millions of instances at once, and each instance will have 30 minutes to perform random actions in the virtual machine. After 30 minutes, all of the instances will be replaced by a newer generation, which will inherit the code of the instance with the highest "repair" percentage.

MessiahAndrw, for example, is describing a form of intelligence based on genetic algorithms rather than a neural net.
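The quoted scheme is essentially a generational genetic algorithm, and can be sketched in a few lines. Everything concrete here is invented for illustration: the trashed OS, the VM and the "repair percentage" are simulated with a toy bit string, since millions of cloud instances will not fit in an example.

```python
import random

# Toy stand-in for a fully repaired operating system.
TARGET = [1] * 20

def fitness(genome):
    """The 'repair percentage': fraction of bits matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET)) / len(TARGET)

def mutate(genome, rate=0.1):
    """Random actions in the VM: flip each bit with some probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=40):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # The instance with the highest repair percentage survives, and the
        # next generation consists of (mutated) copies inheriting its code.
        best = max(population, key=fitness)
        population = [best] + [mutate(best) for _ in range(pop_size - 1)]
    return max(population, key=fitness)

winner = evolve()
print(f"repair: {fitness(winner):.0%}")
```

Note that the "neural network" in the quote never actually appears: selection pressure on randomly varied behaviour does all the work, which is precisely why this is a genetic algorithm and not a neural net.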
A neural net (in computing) is an assortment of similar objects, each connected to others of the same type by a number of one- or two-way, digital or analogue signals. Whilst each object makes a rather crude approximation of a typical neurone (of which, strictly speaking, there is no such thing), the system as a whole does not reproduce the architectural design of the human brain.
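That "assortment of connected objects" can be made concrete with a minimal sketch: each unit sums its weighted incoming signals and squashes the result. The weights and inputs below are arbitrary illustrative numbers, not a trained network.

```python
import math

def neurone(inputs, weights, bias):
    """Crude approximation of a neurone: weighted sum plus sigmoid squashing."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Two hidden units fed by the same pair of input signals over one-way
# analogue connections, then combined by a single output unit.
signals = [0.5, 0.9]
hidden = [neurone(signals, [0.4, -0.6], 0.1),
          neurone(signals, [0.7, 0.2], -0.3)]
output = neurone(hidden, [1.2, -0.8], 0.0)
print(round(output, 3))
```

Each unit on its own is trivial; any interesting behaviour comes from how the connections and weights are organised, which is exactly the point made above about architecture.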
Learning systems, on the other hand, are already widely implemented. My mobile phone, through its 'predictive text' function, learns which words I use most frequently and displays them at the top of the list when there is an ambiguity as to which word a certain combination of key presses should produce. At the moment, it seems to think I say 'GP' more often than 'is', but hey.
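The mechanism behind this is simple frequency learning, which can be sketched as follows (assuming a T9-style keypad, where one digit sequence maps to several words and past choices break the tie -- as it happens, 'is' and 'GP' really do share the key sequence 4-7):

```python
from collections import Counter, defaultdict

# Standard T9 keypad layout: each letter maps to one digit.
KEYS = {c: d for d, letters in {
    '2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
    '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'}.items() for c in letters}

def digits(word):
    return ''.join(KEYS[c] for c in word)

class PredictiveText:
    def __init__(self, vocabulary):
        self.by_digits = defaultdict(list)
        for w in vocabulary:
            self.by_digits[digits(w)].append(w)
        self.counts = Counter()

    def suggest(self, keyed):
        # Most frequently chosen word first; ties keep vocabulary order.
        return sorted(self.by_digits[keyed], key=lambda w: -self.counts[w])

    def choose(self, word):
        self.counts[word] += 1  # learn from what the user actually picked

pt = PredictiveText(['is', 'gp', 'in', 'go'])
for _ in range(3):
    pt.choose('gp')
pt.choose('is')
print(pt.suggest('47'))  # 'gp' now outranks 'is'
```

A real phone would persist the counters and seed them from a word-frequency corpus, but the learning step itself is no more than this.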
As regards pattern matching, different parts of the brain, associated with different senses, perform their own pattern matching. There is currently a great deal of interest in the functioning of the visual cortex (situated at the back of the brain for some strange evolutionary reason). It is believed to work as follows: the optic nerves/tracts from the retina reach the outer layers of the cortex via a relay in the thalamus. The outermost layers recognise simple patterns (lines); the next recognise combinations of lines (i.e. corners), with the picture building up more and more the deeper in you go. It must be noted, however, that these layers are configured differently from one another; in other words, if you merely throw a bunch of neurones into a dish, they will not form an intelligent brain without the correct relationships being organised between them.
So to go back to the original question, would it be possible to have a user interface which learns what the user wants?
Yes, in theory. You would need two systems:
1) identify the user's intentions at a particular point in time, understanding that there is often a variation in the ways a user can express a particular intention. There is a lot of research at the moment into various recognition technologies, e.g. speech recognition, face recognition, fingerprint recognition. I suppose a similar principle could be used to determine what the user actually wants to do when he clicks twice here, moves the mouse there and so on.
2) determine the optimum response to the user's intentions, which is also not always obvious. Various genetic programming techniques can help here, with the user (at least initially) reinforcing a good decision and downgrading a bad one (similar to a Bayesian spam filter).
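Point (2) can be sketched very simply: each candidate response to a recognised intention carries a score, and explicit user feedback reinforces or downgrades it, so the best response bubbles to the top over time. The intentions and responses below are made-up examples, and real systems would use something richer than a flat score.

```python
from collections import defaultdict

class ResponseSelector:
    def __init__(self):
        # score per (intention, response) pair, starting neutral at 0.0
        self.scores = defaultdict(lambda: defaultdict(float))

    def best_response(self, intention, candidates):
        return max(candidates, key=lambda r: self.scores[intention][r])

    def feedback(self, intention, response, good):
        # Reinforce a good decision, downgrade a bad one.
        self.scores[intention][response] += 1.0 if good else -1.0

sel = ResponseSelector()
candidates = ['open recent file', 'show file dialog', 'do nothing']

# The user tells us the dialog was not what they wanted here...
sel.feedback('double-click here', 'show file dialog', good=False)
# ...and confirms the alternative.
sel.feedback('double-click here', 'open recent file', good=True)

print(sel.best_response('double-click here', candidates))
```

The parallel with a Bayesian spam filter is the feedback loop rather than the maths: a proper Bayesian version would keep per-feature counts and combine them into a probability instead of a single additive score.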
So the technologies are out there, albeit in their early stages. I suppose the difficulty is getting them to work together and (at least from many corporations' point of view) making it financially viable.
In my own opinion, I do not think that attempting to emulate the human brain is a sensible way to implement intelligence in a computer program. And besides, think of the ethical implications if we actually managed to create a Data!
Regards,
John.