Brendan wrote: You're allowing your own bias and/or wishful thinking (and/or a desire for your fantasy machine to produce the same answer you would have) to destroy logic.
Consider the question "If a Foo is like a Bar and a Bar is definitely pink, what colour is a Foo?". A logical machine would assume that because a Foo and a Bar are similar, the most likely answer is that a Foo and a Bar are both pink.
A logical machine would, in the absence of any other information on this, determine that there was a higher chance of a Foo being pink than it would without the information about a Foo being like a Bar and a Bar being pink. The exception is when the information comes from a source that is likely trying to mislead it, in which case it will determine that there's less chance of a Foo being pink rather than more, unless the information is coming from a cunning source that may engage in a double bluff, in which case it may not change any probability it has attached to the idea of a Foo being pink at all. As I said before, this is about AGI and not AGS.
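To make that concrete, here's a toy sketch of the kind of adjustment I mean (the numbers and the source categories are placeholders invented for this post, not values from my actual system):

```
# Hypothetical sketch: adjust the probability that "a Foo is pink"
# based on a claim from a source, weighted by what we think the
# source is trying to do. All numbers are invented placeholders.

def update_probability(prior, source_intent):
    """Return an adjusted probability for 'a Foo is pink' after
    hearing 'a Foo is like a Bar, and a Bar is pink'."""
    if source_intent == "honest":
        # Similarity evidence raises the probability, but never to certainty.
        return prior + (1.0 - prior) * 0.5
    elif source_intent == "misleading":
        # A source trying to mislead makes the claim count against itself.
        return prior * 0.5
    elif source_intent == "double_bluff":
        # A cunning source that may be double-bluffing tells us nothing.
        return prior
    raise ValueError("unknown source intent")

print(update_probability(0.2, "honest"))        # goes up
print(update_probability(0.2, "misleading"))    # goes down
print(update_probability(0.2, "double_bluff"))  # unchanged
```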
Now consider the question "If AGI shares multiple characteristics with Humans (both a type of intelligent entity) and the first AGI was definitely designed and created, where did the first Human come from?". This is the same as the last question - a logical machine would assume that because AGI and Humans share multiple characteristics the most likely answer is that the first AGIs and the first humans were both designed and created.
If that is all the AGI system knows, then at that point it will determine that there is a possibility that the first human was designed and created. As it receives more data it will also determine that there's a possibility that humans evolved without any intelligent designer, but it won't rule out the possibility of there being an intelligent designer which dictated the entire process. In this particular case, so long as evolution looks like a possible mechanism for humans coming into existence, the question is so unknowable that there's no way to put a proper value on the probability either way: we can't measure how many human-like species were created by hidden designers and how many actually evolved, because we only have access to one case and we don't know how it came into being.
Of course if AGI believes biased information, then you can tell the AGI that humans evolved, or that humans have always existed, or that humans don't exist (and are just AGI machines pretending to be "biological"), or whatever else you like, and the AGI will believe you. However, you're attempting to pretend that your AGI won't believe biased information.
AGS might believe you and put in 100% as a probability where a value less than 100 should be used, but AGI won't make such unacceptable jumps of assumption. Many people have NGS (natural general stupidity) and believe all manner of things they're told without checking, but a few have NGI (natural general intelligence) and question everything. People with NGS also get stuck with their beliefs, so when those beliefs start to generate contradictions, they don't rethink everything from scratch, but take the lazy way out and just tolerate the contradictions while telling themselves that contradictions are okay. AGI will not tolerate contradictions, so as soon as a contradiction is generated it will know that there is a fault in the data and it will hunt it down. It may not be able to work out which part of the data is faulty, but it may be able to identify which parts could be wrong, and it could then ask for more data related to those parts in an attempt to resolve the issue.
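Here's a toy sketch of the kind of contradiction hunting I'm describing (the claims, sources and data structure are invented for this example; the real mechanism is far more involved):

```
# Toy sketch of contradiction hunting: each claim is recorded with its
# sources, and when two claims contradict each other, both become
# suspect and more data is requested about them.
# The claim and source names are invented for illustration.

claims = {
    "room_is_warm": {"sources": {"Dana"}, "contradicts": {"room_is_cold"}},
    "room_is_cold": {"sources": {"Alf", "Bea", "Cal"}, "contradicts": {"room_is_warm"}},
    "door_is_shut": {"sources": {"Alf"}, "contradicts": set()},
}

def find_suspects(claims):
    """Return the set of claims that could be faulty because they are
    involved in at least one contradiction."""
    suspects = set()
    for name, claim in claims.items():
        for other in claim["contradicts"]:
            if other in claims:          # a real contradiction exists
                suspects.add(name)
                suspects.add(other)
    return suspects

for claim in sorted(find_suspects(claims)):
    print("suspect claim, need more data:", claim)
# door_is_shut is not printed: it isn't involved in any contradiction.
```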
Pure bullshit. Why not create intelligence, then let it learn languages and linguistics the same way people do (and then ask your AGI machine how to get past the "linguists are too stupid to see that languages have everything to do with communication/IO and nothing to do with intelligence" barrier)?
Language is for communication, but it maps onto thought and the two are similarly constructed: thoughts are networks of ideas, while spoken language has to be linear, so it provides lots of ways of following different branches of ideas sequentially. (It is possible that thoughts are stored as linear data too, and they are in my AGI system, but the structuring is different, removing all the mess of the natural language forms.)

To get an understanding of how thinking works, language is a good starting point. It's possible to start without it too, but then you'll be building up from mathematics and reason. We have all the necessary maths and reasoning worked out already, but what happens when you want to fill the machine with knowledge presented to it through human language? Without solving all the linguistics problems (grammatical and semantic), you can't bridge that gap. A lot of thinking is done using high-level concepts without breaking things down to their fundamental components, so even if you are trying to build AGI without considering language at all, you're still going to be using most of the same concepts, and studying language is a shortcut to identifying them.

If you were working with vision and ignoring language, you'd still be labelling lots of identifiable parts and then checking how they're arranged to see if there's a compound object made out of many of those parts which matches up with a concept representing that kind of compound object; when you add language to the system later on, you then assign a name to that concept to replace the unspeakable coding that's used in thought. Many of the things AGI will need to do involve simulation too: it isn't enough just to work with concepts, because you have to be able to generate representations of things in a virtual space and imagine (simulate) interactions between them, or analyse alignments.

It would be fully possible to develop AGI in such a way that you start with mathematics and reason, then add this simulation, then add machine vision, and only add language at the end of the process, but you'd be missing a trick, because what's really going to guide you in this is the analysis of your own thinking, asking yourself, "how do I work that out?" In that kind of study, you're working with thoughts which are already very close to language, and when you write notes about what you're working out, you do all of that through language too, translating the thoughts into language to record them. Studying thought is the way to make progress, and studying language is the best way to get a handle on what thought does: the aim is to see through language to the deep structures of the actual thoughts that lie below the surface.
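As a very rough illustration of the network-versus-linear point (a toy made up for this post - it is not the representation my system uses):

```
# Toy illustration: concepts exist as linked nodes with no words attached;
# a word is only bound to a concept once language is added, and a linear
# sentence is produced by walking the network in some chosen order.
# All names here are invented for the example.

class Concept:
    def __init__(self):
        self.word = None          # no name until language is attached
        self.relations = []       # list of (relation_concept, target_concept)

    def link(self, relation, target):
        self.relations.append((relation, target))

# Build a tiny network of ideas: "cat chases mouse".
cat, chases, mouse = Concept(), Concept(), Concept()
cat.link(chases, mouse)

# Later, language is bolted on by naming the concepts.
cat.word, chases.word, mouse.word = "cat", "chases", "mouse"

def linearise(start):
    """Flatten one branch of the idea network into a word sequence."""
    words = [start.word]
    for relation, target in start.relations:
        words.append(relation.word)
        words.append(target.word)
    return " ".join(words)

print(linearise(cat))   # "cat chases mouse"
```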
A calculator gives accurate results quickly because it's not intelligent.
A calculator displays some components of intelligence. A reasoning program that can solve logic problems where all the complexities of language have been removed also displays some components of intelligence, but for it to solve real problems it needs a human to convert them into a form that it can handle. An AGI system will be able to do the whole task without the human cutting through the linguistics barrier for it every time.
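For example, here's the kind of trivial reasoning program I mean, where the hard part - turning the real problem into a form it can handle - has already been done by a human (the encoding is invented for this post):

```
# A trivial 'reasoning program': it can chain rules, but only after a
# human has translated the real-world problem into facts and rules for
# it. The puzzle and its encoding are made up here.

facts = {"socrates_is_a_man"}
rules = [({"socrates_is_a_man"}, "socrates_is_mortal")]   # if-then rules

def forward_chain(facts, rules):
    """Keep applying rules until no new facts appear."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain(set(facts), rules))
# The hard part - turning "All men are mortal; Socrates is a man" into
# the sets above - was done by me, not by the program.
```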
So it will have the impossible ability to obtain biased information and construct "unbiased probabilities" from it (where the resulting "impossibly unbiased probabilities" are then applied to more biased information to obtain "impossibly unbiased information")?
During the war between the Russians and the Mujahideen in Afghanistan, the Russians put out biased news about it on Radio Moscow (which they broadcast around the world). Whenever they said they'd killed a hundred Mujahideen fighters, they'd actually killed ten. Whenever they said ten of their own troops had been killed, the real number was a hundred. When you understand the bias and the algorithms used to generate it, you can unpick them and get close to the truth. You don't just guess what the bias might be, though: you look at independent information sources and try to identify patterns. The Mujahideen were applying the same scale of bias in the opposite direction, which allowed you to correct all their figures too, and the adjusted scores from both sides matched up very well. Sometimes there was a BBC journalist with a Mujahideen group who was providing unbiased data on the number of deaths that had actually taken place in an incident, and this further confirmed the bias algorithms that were being applied by both sides.
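In crude terms, the unpicking works like this (the ten-to-one factors are the rough ones from the example above; in reality you estimate such factors from many reports cross-checked against independent sources rather than hard-coding them):

```
# Crude sketch of unpicking a known reporting bias. The ten-to-one
# factors match the Radio Moscow example above; in practice you estimate
# such factors by comparing many reports against independent sources
# (e.g. a journalist on the ground), not by guessing.

ENEMY_LOSS_INFLATION = 10.0    # claimed enemy dead ~ 10x reality
OWN_LOSS_DEFLATION   = 10.0    # admitted own dead ~ 1/10 of reality

def correct_report(claimed_enemy_dead, admitted_own_dead):
    """Convert one side's biased casualty report into an estimate."""
    return {
        "estimated_enemy_dead": claimed_enemy_dead / ENEMY_LOSS_INFLATION,
        "estimated_own_dead": admitted_own_dead * OWN_LOSS_DEFLATION,
    }

# "We killed 100 of them and lost 10 of our own."
print(correct_report(claimed_enemy_dead=100, admitted_own_dead=10))
# -> roughly 10 of them and 100 of our own, which can then be checked
#    against the other side's corrected figures for consistency.
```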
There are 4 people in the same room. You ask all 4 people what the temperature is inside the room. 3 of the people collude and deliberately give you the same false answer ("very cold"). One person tells you the truth ("nice and warm"). You don't know that 3 people have colluded - you only know that you've got 3 answers that are the same and one that isn't. Which answer do you believe?
You apply probabilities to it and attempt to work out why there is a mismatch in the data. AGI is not a belief machine like AGS, but a probability machine. If the three people are smirking at each other, that increases the probability that they are lying. If all three are shivering, you determine that it's likely they are telling the truth, or that those three may have a fever.
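A toy version of that weighing-up (every probability here is an invented placeholder; a real system would get its likelihoods from evidence about how often such cues go with lying):

```
# Toy sketch: start with a prior on "the three are lying" and update it
# with behavioural cues using Bayes' rule. Every probability here is an
# invented placeholder, not a measured value.

def bayes_update(prior, p_cue_if_lying, p_cue_if_truthful):
    """P(lying | cue) from P(lying) and the two likelihoods."""
    numerator = p_cue_if_lying * prior
    denominator = numerator + p_cue_if_truthful * (1.0 - prior)
    return numerator / denominator

p_lying = 0.5                                   # no idea to start with

# They keep smirking at each other: much more likely if they're lying.
p_lying = bayes_update(p_lying, p_cue_if_lying=0.6, p_cue_if_truthful=0.1)

# But they're also all shivering: more likely if the room really is cold.
p_lying = bayes_update(p_lying, p_cue_if_lying=0.2, p_cue_if_truthful=0.7)

print(round(p_lying, 2))   # still nowhere near 0 or 1 - no 100% beliefs
```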
There are 4 people in the same room, and every day you ask them what the temperature is inside the room. 3 of the people have colluded and decided to always lie and always tell you an answer that is 20 degrees colder than it actually is; and their answers are always very close together (even when you know it's impossible for them to have talked to each other that day). One person is always telling you the truth, so their answer is always different from everyone else's. You don't know about the collusion. How do you determine your "probabilities based on source reliability" to ensure that you're not believing the liars?
If you know nothing else about them and are incapable of recognising any signs that they are lying, then you would put a high probability on the three being the ones who are right and a high probability on the fourth person having an unusual physiology. If you have other data about these people in other situations, though, and know that there aren't any such mismatches elsewhere, you can determine that it's unlikely there's anything physiologically unusual about the fourth person (unless there's something unique about the room which might trigger it). You would then put a high probability on some bad information being provided, and the probability as to whether the three are lying or the one is lying would need to be calculated from the proportion of other similar cases favouring the group or the individual as the prime suspect. You would then try to find an alternative way to measure the temperature in the room so that you can resolve the question.

Note that I used the word "you" there even though I was describing what the AGI would do, and there's a good reason for that: the AGI system would do the same things an intelligent person would do in its attempt to resolve the mystery. When it's starved of other data, it will reach the same initial conclusion that the three people are more likely to be telling the truth because their data is better matched, but crucially it will still be right, because it is assigning the correct probability to this rather than treating it as certain. An AGS in the same situation would be wrong because it would believe the three people outright and not assign the correct probability - an AGS belief system applies 100% probabilities even though the matter is not resolved with certainty. This reveals a lot about how some people think, because there are a lot of NGS belief systems out there which lock into unjustifiable beliefs instead of keeping their minds open. There are others, though, closer to NGI, who refuse to accept any certainty at all, even when a proof is possible (under a set of rules which are taken to be true: if the rules are true, then the conclusion is certain).
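And a stripped-down version of the "find an alternative way to measure it" step (the readings and the scoring rule are made up for this post; the point is just that an independent measurement lets you re-score the sources instead of counting votes):

```
# Toy sketch: repeated daily reports from four people, then one
# independent thermometer reading is used to re-score how reliable each
# reporter is. All figures are made up for the example.

reports = {                       # degrees C reported each day
    "Alf":  [1, 2, 0, 1],
    "Bea":  [1, 1, 1, 2],
    "Cal":  [2, 1, 1, 1],
    "Dana": [21, 22, 20, 21],
}

thermometer_day = 3               # on day 4 we smuggle in a thermometer
true_temperature = 21

def reliability(report, truth, tolerance=2):
    """1.0 if the report is within tolerance of the truth, else 0.0."""
    return 1.0 if abs(report - truth) <= tolerance else 0.0

scores = {name: reliability(days[thermometer_day], true_temperature)
          for name, days in reports.items()}
print(scores)
# {'Alf': 0.0, 'Bea': 0.0, 'Cal': 0.0, 'Dana': 1.0}
# Three mutually-consistent sources turn out to be the unreliable ones,
# which is why agreement alone can never justify a 100% belief.
```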
Will it also fly faster than a speeding bullet, and make hamburgers appear out of thin air on request?
This is just more "it doesn't exist in practice, therefore there's no practical limits to my wishful thinking" nonsense.
If I say it will be able to do things that are fully possible, why are you extending that into making out I'm saying it will be able to do things that are impossible? You're applying NGS reasoning.
No, I'm saying that even if your pipe-dream AGI nonsense was a proven reality, it wouldn't decrease the number or severity of wars and would actually increase the number and/or severity of wars.
That's a possibility, but it will cause fewer wars than bad AGI, and bad AGI's only going to be prevented by using good AGI.
This reminds me of a guy that claimed his OS would have AI that would auto-transform software into something compatible with his OS. I bet he feels silly now that it's 5 or 6 years further down the track.
Things take time, but I know what I'm building and I know what it will be able to do, so if you think I look silly at the moment, that situation isn't going to last. I hadn't reckoned on the health problems that I've had to battle against over the last few years, but I'm back to working at full speed at the moment.
If it's AGI (and not AGS) then it's capable of finding a solution that it was not given (e.g. you give it a false dilemma and it rejects the given solutions and "invents" its own solution). For example, if you ask it "Is three multiplied by two equal to 4 or 8?" it might say "Neither, three multiplied by two equals 6". For example, you ask it "Where did the first humans come from, were they created by one or more God/s or evolved from simpler life forms?" and it might say "Neither, they came from ....."
That's mostly right: it might find something in the data that we've all missed which reveals that we were created by a child outside of the universe playing with a universe-creation game, but again it would apply probabilities to that, showing that it is uncertain because the data might be deliberately misleading.