December 29, 2003



Superhuman Intelligence

Steven Den Beste has a long and well-thought-out piece on how the Internet may be evolving into a superintelligent hive mind or, more properly, an environment conducive to the development of any number of such minds.

Some of the voices contributing to that cacophony will be more profound than others. With more people online and more bandwidth available, more and more hive minds will appear, and that will increase the chance that a few will transcend the norm by greater and greater amounts.

The emergent result may well be that some will exhibit behavior indicating intelligence at a level beyond that of individual humans, capable of "thinking" thoughts no single human could conceive of. Even with industrial-level technology, that's already happened. Science, in particular, is such a thing, as is modern engineering. Engineering at a primitive level has been with us since the creation of the first stone tools. But science as we now know it is very recent, only going back about 500 years (though one can identify predecessors extending back millennia before that).

A while back, I e-mailed Steven to ask whether he had any thoughts on the Technological Singularity. To my surprise, he replied that he had never heard of it, and gave the impression that he wasn't terribly interested in the subject. From reading this latest piece, I understand better why that would be the case. Den Beste posits that true intelligence may be analog, not digital, and that, because of initial encoding errors compounded by the "butterfly effect," it may never be reliably encoded in a digital environment.

If that's the case, then no amount of digital hardware, no matter how fast, parallel or well connected, can ever really be intelligent in the way that we are, with the degree of capability and versatility we have. I cannot say for certain that's the case, but I have a strong suspicion that it is. There will eventually be a computer system which can beat any human at chess. It could be built now, except that no one cares to spend the money. But that system won't also be able to drive a car, write poetry, laugh at a joke, watch a movie and then summarize it later, or do all the other kinds of things that human chess grandmasters can do in addition to playing chess.

I'm not sure I entirely understand this objection. If we were eventually to upload a human brain via advanced scanning technology and run it as an emulation, it seems to me that the initial errors and butterfly-effect compounding would impact the processes running on the emulation, not whether the emulation worked. It doesn't seem to me that it's a question of whether the emulation would be a functioning brain, just whether it would be the same brain. In other words, if it were my brain that were uploaded, the question wouldn't be whether the emulation is capable of laughing at a joke. The question would be whether the emulation and I find the same jokes funny. Initial errors and the butterfly effect might soon see to it that the emulation and I are distinct brains with distinct personalities, but I don't see how these effects would prevent the emulation from running the same kinds of processes (that is, thinking thoughts of the same level of sophistication) that my own brain is capable of running.
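To make the compounding-error point concrete, here's a minimal Python sketch (the logistic map stands in for any chaotic process; the particular numbers are my own illustration, not anything from Den Beste's piece). Two copies of the same process, differing only by a tiny initial encoding error, soon produce completely different outputs, yet both remain perfectly valid runs of the same kind of process:

    def logistic(x, steps, r=3.9):
        # Classic chaotic map; for r near 4 it is extremely
        # sensitive to initial conditions (the butterfly effect).
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    original = logistic(0.500000000000, 100)
    emulation = logistic(0.500000000001, 100)  # initial encoding error of 1e-12
    print(original, emulation)  # the values bear no resemblance, but both runs "work"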

If a human brain can be uploaded and can function as well as (however differently from) its original, then strong AI has been achieved and a door is opened to a very different kind of superintelligence.

Posted by Phil at December 29, 2003 09:51 AM | TrackBack
Comments

Could be, but it wouldn't be my brain that had been uploaded.

And quite possibly, it wouldn't be as functional a brain; i.e., the emulation would be a failure.

Posted by: Greg D at December 29, 2003 06:05 PM

The problem here is that it's not just the hardware that's analog. It's the process. How do you digitally encode a "definite maybe"? In the human brain, much of what makes us tick is pretty indeterminate, and it is the ability to shift a little bit one way or another for no apparent reason that makes us adapt so flexibly to changing situations. Something digital would require discrete guidelines for acting, or even for attempting to act like us. A computer might learn to give the appearance of personality, just as it can learn to play chess. But until it can make a move in chess or make a comment in a Turing test and have to confess later that it sincerely has no idea why it did what it did, it won't be there. Analog provides room for a shift so subtle that it changes an outcome even though it's virtually undetectable. Can digital do that?

Posted by: Geoffrey Barto at December 30, 2003 03:25 AM

I think these concerns are overblown. Digital processes already do a good job of emulating analogue ones and can even emulate (with considerable inefficiency) quantum processes. You will probably need to introduce sources of randomness (entropy) in order to make the process somewhat indeterminate, but that's a solved problem in the digital realm.
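For what it's worth, here is a minimal illustration in Python of what I mean by "solved problem" (my example, nothing more). A seeded generator is deterministic and reproducible; one backed by operating-system entropy is not:

    import random

    seeded = random.Random(42)        # deterministic: same sequence on every run
    entropic = random.SystemRandom()  # draws on OS entropy (e.g. /dev/urandom)

    print([seeded.random() for _ in range(3)])    # reproducible
    print([entropic.random() for _ in range(3)])  # different every run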

Let's consider a thought experiment. Suppose NASA secretly put in place a world-wide system of Laser Weather Control Satellites (TM) and set things up so that the weather followed exactly a decent digital, deterministic computer model. How would you know that this had occurred? Unless you happened to stumble on a close enough computer model and happened to use most of the same data that the satellites were using, it would look just like "normal" weather, perhaps with some slight energy sources and sinks (though presumably those would be hard to spot, given that NASA probably has better sensors than you do).

Instead, you probably wouldn't have a clue until you stumble across the right model and have the right data. Then you might be able to predict the weather years or decades into the future. However, if NASA inserts some low levels of randomness (say with respect to how things are rounded), then even that clue would disappear.

My point is that if your goal is to emulate a human mind, then that's a far less difficult problem than duplicating exactly the thought processes of the mind. The mind isn't defined by whatever thought it happens to be thinking at a given instant. For example, if there were a way to duplicate a person exactly, down to the electrical signals in the brain, the copy would diverge from the original just due to the effect of cosmic rays, thermal noise, etc. But both people would clearly think alike, and it would be impossible to figure out who was the copy from analyzing thought processes alone (unless both knew for certain who was the copy).

My point is that there's nothing so magical about the human mind, analogue processes, etc., that they can't be emulated with a digital process. The problem instead is that there's a tremendous amount of information that needs to be copied, and complicated connections and relations that would need to be simulated. But I see nothing inherently "impossible" about using a much smaller and faster version of current computer technology to emulate a human mind.

Posted by: Karl Hallowell at December 30, 2003 06:59 AM

Geoffrey Barto, encoding a "definite maybe" is very easy. On a scale from 0 to 1, it's a .6, give or take a little.

Digital is only 1 and 0 at the lowest level, but that doesn't extend to higher levels, any more than this message that I'm writing and transmitting to you digitally consists only of ones and zeros to you. (It does to the computer, but the computer has higher levels of meaning it knows how to impose on those numbers.)
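A minimal Python sketch of that encoding (the specific numbers are illustrative, nothing more): the confidence is a .6 with a little jitter, and when the value sits near a decision boundary, a shift too small to notice can tip the outcome one way or the other, which is the kind of flexibility Geoffrey asked about:

    import random

    def definite_maybe():
        # "Definite maybe": confidence of .6, give or take a little.
        confidence = 0.6 + random.gauss(0.0, 0.02)  # virtually undetectable jitter
        # Near the decision boundary, the jitter occasionally flips the answer.
        return confidence > 0.61

    print([definite_maybe() for _ in range(10)])  # a mix of True and False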

The problem is not encoding some given fact in some encoding scheme; it's not even fuzzy knowledge per se. It's partially the problem of creating something as flexible as the human brain... and it's partially the frustration of not really being able to point at a single problem and say, "that's it; if we could solve that, we'd have human intelligence."

I think the biggest problem with AI is not that we don't have ideas, but that we don't really understand why none of them are producing AI. That means we have little more than vague hints to work with.

Posted by: Jeremy Bowers at December 30, 2003 08:16 AM

Good points from Karl and Jeremy. Karl hits it right on the money with this thought:

My point is that there's nothing so magical about the human mind, analogue processes, etc., that they can't be emulated with a digital process.

Even if there were some magic in the brain or its analog processes, they couldn't elude encoding at the quantum level. A human being can be in, at most, one of 10^(10^45) quantum states. That's obviously a big number, much bigger than anything our current computers can handle. But it's also finite. *
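As a rough check on what that figure means in digital terms (my own arithmetic, sketched in Python; nothing below comes from the thread): indexing one state out of 10^(10^45) takes the base-2 log of that number, or about 3.3 x 10^45 bits:

    import math

    # log2(10**(10**45)) = 10**45 * log2(10); the number itself is far
    # too large to materialize, but its logarithm is easy to compute.
    bits = (10 ** 45) * math.log2(10)
    print(f"{bits:.3g} bits")  # ~3.32e+45 bits to index one quantum state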

Even if we can't make true machine intelligence happen by any other means (and I think we probably can), one day we'll have computers sophisticated enough to emulate a human brain at the quantum level. The math for determining each of the individual quantum states is no great shakes. But as Jeremy pointed out, it's not that (or any other) set of mathematical relationships that determines intelligence: intelligence emerges from aggregates of such relationships which encode higher meanings.

* Interestingly, it will probably be quantum computers that eventually allow us to process at that level.

Posted by: Phil at December 30, 2003 09:09 AM
