FastForward to Artificial Intelligence
Here are some speculations on the emerging world of artificial intelligence, compliments of the FastForward Posse. Their love is real, but sometimes I'm not sure whether they are.
We must think carefully about what we want to use AI for. Consider the opening sentences from a recent Village Voice article:
As American warfare has shifted from draftees to drones, science and the military in the United States have become inseparable. But some scientists are refusing to let their robots grow up to be killers.
I'm a proponent of developing "smart" military technology, but these scientists may have a point. If current robotic and AI technologies eventually evolve into superintelligences that will make a go/no-go decision about the future of humanity, don't we want these technologies to start out as sweet and docile as possible?
What we need is a mathematical model for the "brute force" approach to AI and its time-domain derivation. In what year might we expect a brute-force AI (a cerebral neurology simulation) to be developed on a given quantity of hardware? Here are some thoughts.
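One minimal sketch of such a model: assume hardware capacity grows exponentially (Moore's-law style) and ask when it crosses the compute needed for a brain simulation. Every number below is an illustrative assumption, not an established figure — the brain-simulation estimate in particular varies by several orders of magnitude depending on who you ask.

```python
import math

# Illustrative assumptions only -- swap in your own estimates.
BRAIN_OPS_PER_SEC = 1e16   # assumed ops/sec to simulate a brain (highly uncertain)
HW_OPS_2003 = 1e11         # assumed ops/sec of affordable hardware circa 2003
DOUBLING_YEARS = 1.5       # assumed Moore's-law doubling period

def crossover_year(start_year=2003, start_ops=HW_OPS_2003,
                   target_ops=BRAIN_OPS_PER_SEC, doubling=DOUBLING_YEARS):
    """Solve H(t) = start_ops * 2**((t - start_year) / doubling) for
    H(t) = target_ops, i.e. the year hardware first matches the target."""
    doublings_needed = math.log2(target_ops / start_ops)
    return start_year + doublings_needed * doubling

print(round(crossover_year(), 1))  # with these assumptions, about 2027.9
```

Under these (very debatable) inputs, the crossover lands in the late 2020s; the point of the model is less the specific year than how sensitive it is to the assumed brain-compute figure.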
Read some Gene Wolfe, whose treatment of artificial intelligence in The Book of the Long Sun (vols. 1 and 2) has generated much thoughtful discussion, which has been archived here. If Wolfe's vision is realized, personalities of the rich and famous will find immortality and deification inside computers, and artificial beings (chems) will mate and construct their offspring while natural beings (bios) do it the old-fashioned way.
We already have the hardware necessary for AI. The computers are fast enough and will grow faster. The problem is fundamentally that we don't have a software implementation. There may be social aspects to this as well. AI suffers from the "nano" disease. There's a lot of computer science that is undeservingly self-categorized as "artificial intelligence". I think eventually we will see multiple ways of creating "intelligence" in software. Some of these will be quite alien to human modes of thought.
Some concrete predictions: within ten years we will have something that is genuine low-level artificial intelligence (i.e., smarter than an ant *cough*), and within twenty years we will have much smarter programs that can run on today's PCs. The most useful AI will be programs that can sift through collections of databases and come up with rational answers to poorly defined questions; for example, given the information coming off of the news wire, make the optimal profit for my company. The databases might not fit on a current-day PC, but the decision-making process will.
Here's a classic science fiction novel that tells the story of an artificial intelligence taking over. This book was recently recommended to us by an AI.
I don't look forward to artificial intelligence. For starters, I can't handle relying on "people" to do things the right way. You know, my way. Either people don't listen or they get hung up doing it "their" way. It's always a huge disappointment. So what happens when the machines take over? Well, assuming that the machines are here to serve us, one would expect them to be very good listeners and do exactly as requested. Seems perfect. But rarely do I know what I really want, and even when I do, I don't communicate it well enough. So AI machines doing exactly what I ask for would, invariably, never do it right. Even if they were programmed to keep inquiring until they knew exactly what I wanted, it would be so irritating that I would have to keep a baseball bat handy, so I could swing for the fences whenever one of 'em got too inquisitive. It just makes me sick. We're talking about this great future and possible immortality, and all I can think of is how far their little fake skulls will fly off some 36-oz wood.
Read some Greg Egan. Egan has written some of the definitive fiction about uploading human personality and about a distant future in which almost all intelligence (including human intelligence) is artificial.
Thanks to Mike Sargent, Chris Hall, Karl Hallowell, Ringleader Mike
Posted by Phil at September 15, 2003 05:49 AM | TrackBack