September 09, 2003



Seven Questions with Michael Anissimov

Michael Anissimov is a director with the Immortality Institute and our special guest this week. He will be featured in this week's Speaking of the Future column on Thursday.


1. The present is the future relative to the past. What's the best thing about living here in the future?

Today, many humans in developed countries have great ability to create positive change in the state of the world. The Internet allows us to get our ideas out to thousands, if not tens or hundreds of thousands, of people who are interested in them. The scientific and technical knowledge possessed by an average intellectual of the 21st century is massive in comparison to that of thinkers of any other era. However, the underlying neurology of human learning and intellect has not changed appreciably in 50,000 years. We’re lucky to live in a time when breaking this upper ceiling on intelligence may finally be possible. If humanity survives the risks associated with technological development, we may live to see a long era of extreme life extension, superhuman intelligence, uploading, full-scale space colonization, and sophisticated molecular manufacturing before the end of the century. The best thing about living in today’s future is the ability to foresee these potential advances and take action to increase the likelihood of their arrival.

2. What's the biggest disappointment?

It’s hard to be “disappointed” about anything in particular; a lot of things are disappointing, but everything is just the way it is, so it’s useless to complain unless you’re taking concrete action to influence the future positively. I view technology as morally neutral: it amplifies the actions of the agent using it, for good or ill. If I were forced to name something, however, I would point to the lack of awareness of cognitive science and evolutionary psychology in the field currently called “AI.” How do they expect to build intelligent machines without any knowledge of the intelligent machines that already exist?

3. What future development that you consider likely (or inevitable) do you look forward to with the most anticipation?

The creation of benevolent transhuman intelligence. It’s impossible to set upper bounds on how good this might be for humanity. At the very least, a benevolent superintelligence would likely possess strong nanotechnology and deep knowledge of general psychology; by “general” I mean the psychologies of human beings, human-equivalent AIs, transhumans, superintelligences, and everything in between. In theory, this would allow disease, pain, violence, accidents, poverty, and the most subtle of human discomforts to be eliminated. If we wanted certain types of discomfort just for the excitement of it, I’m sure that could be arranged, too. Of course, seeing any of this as plausible requires the viewpoint that faster-than-human and smarter-than-human intelligence is physically possible, and that a transhuman AI could self-improve to a superintelligent state relatively quickly from a human perspective.

4. Assuming you die at age 100, what will be the biggest difference between the world you were born into and the world you leave?

If I’m still alive at 100, the world is likely to be massively different from the way it is today. Most futurists in the ’80s didn’t foresee the explosion of Internet and computer use in the ’90s, and very few futurists from the ’70s did. As the intervals between surprising advances become more compressed, our ability to predict the future very far in advance will decline. When the first transhuman intelligence is created and launches itself into recursive self-improvement, a fundamental discontinuity is likely to occur, the likes of which I can’t even begin to predict. The difference between now and the post-Singularity era might even exceed the dissimilarity between the present day and the beginning of the known universe.

5. What future development that you consider likely (or inevitable) do you dread the most?

The creation of amoral transhuman intelligence, or any sort of self-improving intelligence indifferent to the welfare of human beings. There would be nowhere to run and nowhere to hide if this intelligence got the idea that it had to rearrange local matter to suit its goals. Whether it initially arose in the center of the moon or in my basement would make no difference; a transhuman intelligence would have plenty of brainpower, ingenuity, and speed to find its way around such petty obstacles. All we can do is hope that all sufficiently powerful intelligences automatically become altruistic, or that show-stopping bottlenecks exist on the improvement curve just above human equivalence. Both of these hopes are incredibly unlikely to be true.

6. Assuming you have the ability to determine (or at least influence) the future, what future development that you consider unlikely (or are uncertain about) would you most like to help bring about?

The creation of benevolent transhuman intelligence, of course! At this point I’m fairly pessimistic about our likelihood of survival, but if enough people decide to care, humanity may have a fighting chance. I certainly hope that doesn’t come across as apocalyptic.

7. Why is it that in the year 2003 I still don't have a flying car? When do you think I'll be able to get one?

Not until we have a suitable safety net, I hope. Burning hunks of metal falling from the sky don’t help much with life extension!


What's the deal with these Seven Questions?

Posted by Phil at September 9, 2003 10:07 AM