September 15, 2003

Open the Pod Bay Door, Hal

FastForward to Artificial Intelligence

Here are some speculations on the emerging world of artificial intelligence, compliments of the FastForward Posse. Their love is real, but sometimes I'm not sure whether they are.

We must think carefully about what we want to use AI for. Consider the opening sentences from a recent Village Voice article:

As American warfare has shifted from draftees to drones, science and the military in the United States have become inseparable. But some scientists are refusing to let their robots grow up to be killers.

I'm a proponent of developing "smart" military technology, but these scientists may have a point. If current robotic and AI technologies eventually evolve into superintelligences that will make a go/no-go decision about the future of humanity, don't we want these technologies to start out as sweet and docile as possible?

What we need is a mathematical model for the "brute force" approach to AI and its time-domain derivation. In what year might we expect a brute-force AI — a cerebral neurology simulation — to be developed on a given quantity of hardware? Here are some thoughts.
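One crude way to frame the question, purely as an illustration: pick an order-of-magnitude estimate of the computation a brain performs, assume a Moore's-law doubling period, and solve for the crossover year. Every constant below is an assumption (the brain figures are rough back-of-envelope values, and the 2003 baseline is roughly the Earth Simulator, the fastest machine of the day), not an established result:

```python
import math

# Illustrative assumptions -- order-of-magnitude guesses, not settled figures:
NEURONS = 1e11          # assumed neurons in a human brain
SYNAPSES_PER = 1e4      # assumed synapses per neuron
UPDATE_HZ = 100.0       # assumed update rate per synapse (Hz)
BRAIN_OPS = NEURONS * SYNAPSES_PER * UPDATE_HZ   # ~1e17 ops/sec target

BASE_YEAR = 2003
BASE_OPS = 3.6e13       # ~fastest supercomputer of 2003, ops/sec
DOUBLING_YEARS = 1.5    # assumed Moore's-law doubling period

def crossover_year(target_ops, base_ops=BASE_OPS, base_year=BASE_YEAR,
                   doubling=DOUBLING_YEARS):
    """Year when exponentially growing hardware first reaches target_ops."""
    doublings = math.log2(target_ops / base_ops)
    return base_year + doublings * doubling

print(f"Brute-force target: {BRAIN_OPS:.1e} ops/sec")
print(f"Estimated crossover: ~{crossover_year(BRAIN_OPS):.0f}")
```

With these particular guesses the crossover lands around 2020; change any constant by an order of magnitude and the answer shifts by several years, which is the real point of the exercise.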

Read some Gene Wolfe, whose treatment of artificial intelligence in The Book of the Long Sun (vols 1 and 2) has generated much thoughtful discussion, which has been archived here. If Wolfe's vision is realized, personalities of the rich and famous will find immortality and deification inside computers, and artificial beings (chems) will mate and construct their offspring, while natural beings (bios) do it the old-fashioned way.

We already have the hardware necessary for AI. The computers are fast enough and will grow faster. The problem is fundamentally that we don't have a software implementation. There may be social aspects to this as well. AI suffers from the "nano" disease: a lot of computer science undeservedly categorizes itself as "artificial intelligence". I think eventually we will see multiple ways of creating "intelligence" in software. Some of these will be quite alien to human modes of thought.

Some concrete predictions: within ten years we will have something that is genuine low-level artificial intelligence (i.e., smarter than an ant *cough*), and within twenty years we will have much smarter programs that can run on today's PCs. The most useful AI will be programs that can sift through collections of databases and come up with rational answers to poorly defined questions. For example: given the information coming off the news wire, make the optimal profit for my company. The databases might not fit on a current-day PC, but the decision-making process will.

Here's a classic science fiction novel that tells the story of an artificial intelligence taking over. This book was recently recommended to us by an AI.

I don't look forward to artificial intelligence. For starters, I can't handle relying on "people" to do things the right way. You know, my way. Either people don't listen or they get hung up doing it "their" way. It's always a huge disappointment. So what happens when the machines take over? Well, assuming that the machines are here to serve us, one would expect them to be very good listeners and do exactly as requested. Seems perfect. But rarely do I know what I really want, and even when I do, I don't communicate it well enough. So AI machines doing exactly what I ask for would, invariably, never do it right. Even if they were programmed to keep inquiring until they knew exactly what I wanted, it would be so irritating that I would have to keep a baseball bat handy, so I could swing for the fences whenever one of 'em got too inquisitive. It just makes me sick. We're talking about this great future and possible immortality, and all I can think of is how far their little fake skulls will fly off some 36-oz wood.

Talk with some AIs for yourself. Get your own book recommendations. Our favorites include Jabberwacky, Alice, and McGonz. Plus Ramona, of course.

Read some Greg Egan. Egan has written some of the definitive fiction about uploading human personality and about a distant future in which almost all intelligence (including human intelligence) is artificial.

Thanks to Mike Sargent, Chris Hall, Karl Hallowell, Ringleader Mike


Posted by Phil at September 15, 2003 05:49 AM | TrackBack

> I don't look forward to artificial intelligence. For starters, I can't handle relying on "people" to do things the right way.

I've always thought that we wouldn't be satisfied even with human-level AI for just this reason. We put up with other real humans because what choice do we have? But we'd never put up with the same from machines. Good AI would just lead us to expect more, and result in extreme frustration.

Some of us -- dictators, absolute monarchs -- are in a position to look at people as appliances. They're always screaming about being surrounded by incompetents and traitors. They never seem to be happy. I suspect that AI servants will put us all in the same spot.

Posted by: Bob Hawkins at September 15, 2003 08:36 PM

[This was originally posted with the "Happily Ever After" item, but I thought it apropos here too.]

Not sure I get the whole "Singularity", 'runaway AI becoming a god' thing (other than as a SciFi artifice). Artificial intelligence and artificial *consciousness* are a far cry from each other. Perhaps it would be helpful to think in terms of an artificial 'idiot savant' -- so-called AIs will be brilliant (far in excess of human brilliance) in the areas they are programmed to be brilliant in (and dumb as a post everywhere else). And it doesn't take an artificially-enhanced genius to realize that you don't give the AI-designing AI the tools to actually build its own designs.

(Besides, the EEs working on such projects surely know about runaway feedback loops and how to prevent them...)

- Eric.

Posted by: Eric S. at September 16, 2003 08:08 AM
