Speaking of the Future with Michael Anissimov
The title I've chosen for today's interview would be a hard one to support on any day, but on a piece dated September 11, it seems particularly audacious. I think some folks will read this and assume I'm being ironic. And if I'm not being ironic, then surely some will want to drop me a line to help set me straight.
Happily ever after is a toy for children, after all, and possibly the delusional. It isn't a notion that adults in their right minds bother with.
If you want to read a good hard-nosed rebuke to the fairy-tale notion of happily ever after, no one delivers it better than Anne Sexton in her wonderful poetic treatment of Cinderella:
You always read about it:
the plumber with the twelve children
who wins the Irish Sweepstakes.
From toilets to riches.
That story.

Or the nursemaid,
some luscious sweet from Denmark
who captures the oldest son's heart.
From diapers to Dior.
That story.

Or a milkman who serves the wealthy,
eggs, cream, butter, yogurt, milk,
the white truck like an ambulance
who goes into real estate
and makes a pile.
From homogenized to martinis at lunch.

Or the charwoman
who is on the bus when it cracks up
and collects enough from the insurance.
From mops to Bonwit Teller.
That story.

After this prelude, Sexton lays out a devastating retelling of the Cinderella story. This is not Disney or Rodgers and Hammerstein pablum, but rather the Brothers Grimm version, complete with body parts lopped off and eyes gouged out. I'm not kidding. Read it.
Sexton wraps up her telling of the story with this assessment:
Cinderella and the prince
lived, they say, happily ever after,
like two dolls in a museum case
never bothered by diapers or dust,
never arguing over the timing of an egg,
never telling the same story twice,
never getting a middle-aged spread,
their darling smiles pasted on for eternity.
Regular Bobbsey Twins.
That story.
Sexton doesn't need to say anything to discredit the idea of happily ever after; its illegitimacy is assumed. We can toy with this nonsense if we like, so long as we remember that it's fantasy. The plumber who wins the Irish Sweepstakes is somebody else. He's not us, just as Cinderella and the prince are not us. They live in a world where things work out not only well, but better than hoped. Theirs is a happiness that can't be touched by diapers or dust, or by a middle-aged spread. This is the happiness of Pollyanna or Candide, and it has no place in our world.
Anne Sexton lashes out at the fairy tale ending for reasons that are more personal, but no less intense, than those that motivated Voltaire to take on the notion of "the best of all possible worlds." While Voltaire juxtaposed the facile equivocation of Philosophical Optimism against the tragedy of the Lisbon earthquake, Sexton contrasts the rosy images of human happiness conjured by fantasy (and often heavily reinforced by society) with the banality, decay, and, all too often, despair of everyday life.
In an introduction to an earlier interview in this series, I wrote about what I call serious optimism. Serious optimism does not begin with metaphysical precepts, nor with expectations of what human happiness should be. It begins with realistic, grounded extrapolations of the possible. Guiding us to positive outcomes, some of which we have always hoped for, some that we have never even imagined, serious optimism can serve as an alternative both to the classical cynicism that became Voltaire's legacy and to the modern/postmodern despair that eventually led Sexton to commit suicide. Technology, not philosophy, is the substrate of serious optimism; however, there is a philosophy that informs and enriches it.
That philosophy is called transhumanism. Transhumanism provides glimpses of a new happily ever after, which is neither an obstinate recasting of the world around us into the "best of all possible worlds" nor a tired retelling of that story. Instead, this new happily ever after is predicated on the idea that the future can be, and will be, fundamentally different from the past and the present, that our ability to choose and define our own happiness is expanding exponentially, that the human adventure is only beginning.
Michael Anissimov is a transhumanist. He's an advocate of the ethical expansion of the human experience into new realms, and a serious scholar of the risks and pitfalls the coming age may bring. Michael is a Director with the Immortality Institute, a transhumanist organization dedicated to facilitating extreme life extension. In the interview that follows, Michael talks about living forever, Super Intelligences, and why it's so hard to say what we might be doing for fun a billion years hence.
Michael, the Immortality Institute states on its website that its mission is to conquer the blight of involuntary death. Isn't involuntary death a vital part of the evolutionary processes that brought us where we are? How can you describe it as a blight?
Involuntary death is a cornerstone of biological evolution, but that fact does not make it a good thing, in the same way that someone giving birth to you does not necessarily mean they are a good person. Unfortunately, there are mothers out there who neglect or harm their children. Evolution, the process that produced humanity, possesses only one goal: to create gene machines maximally capable of producing copies of themselves. In retrospect, this is the only way complex structures such as life could possibly arise in an unintelligent universe. But this goal often comes into conflict with human interests, causing death, suffering, and short lifespans. The past progress of humanity has been a history of shattering evolutionary constraints; our lifespans today are two to three times what they were thousands of years ago, modern medicine has rendered natural selection moot, and global literacy has enhanced man's innate ability to process and distribute information. Immortalists suggest taking the next step: eliminating unwelcome instances of death and replacing the careless and cruel process of evolution with compassionate, human-guided biotechnological and nanotechnological improvements.
I recently asked Aubrey de Grey whether his research would lead us towards living forever. He responded as follows: "Well, clearly there will always be the risk of death from causes that have nothing to do with aging, so forever seems unlikely." Would you agree with Aubrey's assessment, or do you believe that living forever (as the word immortality implies) is achievable?
I agree with Aubrey completely. It's hard to say whether the laws of physics will ever allow true immortality. Immortality is probably not something that can be achieved with 100% confidence, making immortalism a philosophy of life rather than an engineering goal. It should also be said that immortalism doesn't focus solely on removing aging, but on all causes of undesired death. Aubrey's research, if it comes to fruition, would solve only a piece of the problem. Part of the reason we've called ourselves the Immortality Institute is to challenge life extensionists to go beyond shy projections of mere hundreds of years, and to start exploring the methods and arguments behind billion-year lifespans, trillion-year lifespans, and longer. The latter leads to fundamentally different philosophical positions and scientific interests. (For example, many immortalists tend to focus more on nanotechnology and Artificial Intelligence as opposed to exclusively biotechnology.)
The Immortality Institute is apparently interested in a wide range of topics: everything from very practical advice for promoting health/life extension to cryonics to the uploading of human personality to new substrates. Would you identify these as the three essential steps or phases towards extreme life extension? Do they occur in the order I listed them? Are all three necessary?
First of all, I'd like to say that my personal opinions are not meant to represent the overall opinion of Immortality Institute members; our goal is to promote whichever ideas our aggregate considers most important. That said, I don't consider those three approaches to be essential. I want all of them to be available as soon as possible, and what already is available to be improved. However, Immortality Institute founder Bruce J. Klein and I both agree that mind uploading is ultimately the most robust and effective strategy for pursuing extreme life extension. The complexity of mind uploading, however, suggests it will take transhuman or superhuman intelligence to develop properly. This makes the creation of benevolent transhuman intelligence a big deal for certain immortalists. If this goal cannot be reached until 2030 or so, as Ray Kurzweil suggests, it would certainly be advisable for older immortalists to stay healthy until then. The more conservative your estimate for the arrival of mind uploading, the more effort you should be putting towards 1) increasing the likelihood that mind uploading and benevolent superintelligence will eventually come about, and 2) trying to live to see that day.
Recently you have done some extensive writing on artificial intelligence. Why do the subjects of life extension and artificial intelligence seem to be so closely linked?
The central issue here is the possible creation of smarter-than-human intelligence, and that intelligence's creation of still smarter intelligence, leading to an open-ended positive feedback cycle known as the Singularity. Although smarter-than-human intelligence could be created by a number of methods, such as genetically engineered humans, cybernetically enhanced humans, or Brain-Computer Interfaces, it currently seems that Artificial Intelligence is in the lead. Artificial Intelligence research is currently legal and acceptable, placing it in an entirely different class than most other intelligence enhancement routes. As humanity's knowledge of cognitive science improves and we become capable of separating the functional essentials of what we recognize as intelligence from extraneous biological complexity, AI will stand out as the most streamlined approach to creating smarter-than-human intelligence. Oxford philosopher Nick Bostrom has outlined the argument for superhuman intelligence arriving within the first third of this century. Real AIs, if created successfully, would run on substrates billions or trillions of times faster than the human brain (200 Hz biological neurons vs. 10 GHz+ research machines), have complete access to their own source code, be able to overclock cognitive modules by delegating extra computing power to them in ways impossible for humans, integrate new hardware into their overall brain architecture, create unlimited copies of themselves as space allows, and so on. A philosophical movement overlapping with immortalism, Singularitarianism, has sprung up within transhumanist circles, encouraging others to pay greater attention to the possible eventuality of a Singularity and to attempt to direct it in ways conducive to the continued survival and prosperity of humanity. If benevolent AI were created, and it went on to create benevolent successors or upgraded versions of itself, up to the point of superintelligence, it would be a small task to safely upload human beings or hugely extend our lifespans. However, if malevolent or human-indifferent AI were created, it would be a threat to the survival of everyone.
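As a side note on those numbers: the sketch below (Python, my own back-of-the-envelope illustration rather than anything from the interview or from actual AI research) simply divides the two clock rates quoted above and asks what they would imply for subjective time if thinking speed scaled linearly with clock speed, which is itself a large assumption.

```python
# Back-of-the-envelope sketch (illustration only): the raw clock-speed gap
# implied by the figures quoted above, and what it would mean for
# "subjective" time under a naive linear-scaling assumption.

NEURON_RATE_HZ = 200      # ~200 Hz biological neuron firing rate (quoted above)
MACHINE_RATE_HZ = 10e9    # ~10 GHz+ research hardware (quoted above)

SECONDS_PER_DAY = 60 * 60 * 24
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365

speedup = MACHINE_RATE_HZ / NEURON_RATE_HZ                      # ~5e7
subjective_years_per_day = speedup * SECONDS_PER_DAY / SECONDS_PER_YEAR

print(f"raw clock-speed ratio: {speedup:.1e}x")                 # ~5.0e7x
print(f"subjective years per wall-clock day: {subjective_years_per_day:,.0f}")
# -> roughly 137,000 subjective years per day under this naive scaling
```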
On the subject of AI getting out of hand, you wrote, "By the time an AI has reached a level where it is capable of improving itself open-endedly, it could easily soar to far beyond human intelligence capacity, unless it restrained itself for some reason." Given an AI that has the ability to evolve its own capabilities, working at cognitive speeds a million or so times faster than ours, doesn't any notion of restraint seem pretty hopeless? Wouldn't the AI be able to find a workaround to any inhibition given (subjectively) tens or hundreds of thousands of years to do so?
Yes, anything the AI views as a restraint, including coercive human programming, would be removed after a few iterations of self-revision. The question is what the AI would want to remove. An AI with an overarching altruistic philosophy wouldn't want to be selfish any more than Gandhi would want to start killing people. The notion that an AI, regardless of the decision process it is using to make improvements on its own design, will inevitably tend towards selfishness or disregard for humans is a form of anthropomorphism. Anthropomorphism is the projection of human qualities onto nonhuman beings. It's the kind of thinking evoked by statements such as "Would you keep humans around if you were a superintelligence?" What we would do, as humans, is irrelevant; a superintelligence might have a different morality than ours, perhaps a more altruistic one, depending on the choices it made about its own design as it was growing up and on the initial design the programmers created. Readers familiar with evolutionary psychology can understand how selfishness (and a limited form of altruism) arises naturally from fundamental biological constraints and selection pressures, but not all minds necessarily need to be selfish. It might seem that high altruism is a relatively improbable state for a mind to be in, but by the same token, intelligence is a highly improbable state for randomly colliding particles to be in, yet it happened. Incidentally, the problem of altruism is relevant in the analysis of any type of transhuman intelligence, including human uploads: who is trustworthy enough to become the first transhuman intelligence? Should it be a council? An AI without an observer-centered goal system? Is the human race doomed either way? Is there anything we can do to increase our chances of survival past the Singularity? These are the questions we're desperately trying to answer before smarter-than-human intelligence is created. The only organization I'm aware of that is seriously attempting to answer these questions is the Singularity Institute for Artificial Intelligence.
Assuming that AI eventually (or, from our perspective, very rapidly) evolves into a Super Intelligence (SI) bearing little or no relationship to its human ancestry, what are the chances that the SIs will be interested in giving us a place in their world? Does the quest for immortality rely on getting them to help us? Or do we just need them to refrain from wiping us out?
The answer to those questions depends on the initial top-level goal of the AI, the choices it makes on the way to superintelligence, and the strength of any philosophical attractors in the mindspace above human-level intelligence. It should also be remembered that a human upload could become the first superintelligence. Let's say that first human upload were me, a professed altruist. Having tens or hundreds of thousands of subjective years for every second (or whatever) of human time would give me plenty of experience with cognitive self-revision, probably even enough for me to make considerable improvements to my own intelligence while preserving my altruism. If I could keep that up indefinitely, holding my altruism constant, helping people in ways they want to be helped, and so on, then there's no reason why I couldn't become a full-fledged benevolent superintelligence, right? If I thought that would be difficult, there would be other options I could try, such as improving my intelligence only to the transhuman level, not the superintelligent level (which would still do a lot of good). We know that gaining personal power sometimes corrupts human beings, but there's nothing to suggest that minds in general tend to be corrupted by power. If you or I, with all our evolutionary bugs, the tendency to be selfish and all of that, could still grow up into genuinely altruistic superintelligences, given the chance, then I definitely believe that a mind explicitly engineered for altruism and compassion, without evolutionary baggage to begin with, would have an even better chance. For better or for worse, it does indeed seem that the quest for immortality relies upon superintelligences and human-level intelligences coexisting with each other in peace. If superintelligences care enough about our feelings and existence to refrain from grinding us up for spare atoms, I think it logically follows that they would be willing to help us.
While Aubrey de Grey talks about adding a few centuries to his life so that he can get caught up on his reading, enjoy more time with his loved ones, and perhaps get in a few more games of Othello, Eliezer Yudkowsky is busy working out an advanced Theory of Fun that will allow us to find pleasure in a life that spans millions or possibly even billions of years. What is your take on the question of whether boredom will eventually kick in if we live indefinitely? Is there an escape clause somewhere in your organization's repudiation of involuntary death?
Given complete control over the structure and function of our own minds, I can easily imagine a scenario where boredom gets wiped out, never to return again. The question is whether that would be the philosophically acceptable thing to do. In Singularity Fun Theory, Eliezer Yudkowsky argues that Fun Space probably increases exponentially with a linear increase in intelligence, and I'd tend to agree. So we wouldn't have to turn ourselves into excited freaks in order to have an unlimited amount of fun. Superintelligence, nanotechnology, and uploading should produce enough interesting experiences to keep many of us enjoying ourselves forever, and there are probably millions or billions of new technologies and experiences in store for us once we acquire the intelligence to invent and implement them. It's hard for us to say anything really specific about the nature of these technologies at the moment; that would be sort of like a fish in the Cambrian era trying to predict what human beings would do for fun. One thing is for sure: we're eventually going to need to become more than human in order to enjoy all that reality has to offer.
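To make the shape of that claim concrete, here is a minimal toy model of my own (an assumption-laden illustration, not Yudkowsky's actual formulation): if Fun Space grows exponentially while intelligence grows only linearly, even modest gains in intelligence swamp any fixed appetite for novelty.

```python
# Toy model (my own illustration, not from Singularity Fun Theory itself):
# assume the space of distinct enjoyable experiences grows exponentially
# with a linear "intelligence" parameter.

def fun_space_size(intelligence: float, base: float = 2.0) -> float:
    """Size of the space of enjoyable experiences (arbitrary units)."""
    return base ** intelligence

for level in (10, 20, 40, 80):
    print(f"intelligence {level:>3}: fun space ~ {fun_space_size(level):.2e}")

# Doubling the intelligence parameter squares the size of Fun Space:
# linear growth in, exponential growth out, which is the shape of the claim.
```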
I think transhumanists, especially those who advocate radical life extension, should have interesting long-term goals. Do you have any that you would care to share?
Well, at one point I had the usual immortalist goals: live for a long time in every culture on Earth, learn how to fly, make billions of friends, live in the sci-fi surroundings I'd always dreamed of, and so on. But as I began to understand more deeply the mysterious nature of the future, I felt that there would be no way I could possibly predict the specifics of my future goals and interests. This becomes especially true if we invoke the idea of superhuman intelligence. If, one day, I get the opportunity to possess a brain the size of, say, a small planetoid, who's to say what my interests will be? I like to describe my future goals in the most general possible terms: I want to help others, I want to learn, I want to be friends with others, I want to become smarter, I want to experience new things, I want to make something beautiful, and I want to enjoy myself.
How about some practical advice. If I want to live forever, what are the top five things I should be doing right now?
First, read up on issues relevant to the future of humanity. Most of these issues are technological rather than political: nanotechnology, biotechnology, and Artificial Intelligence. If any one of these technologies were to go wrong, it wouldn't matter how far along we were in traditional anti-aging research; all of humanity could be wiped out anyway. Second, get involved in the organizations promoting life extension and related futurist issues, for example the Foresight Institute and the Singularity Institute for Artificial Intelligence. One of the biggest flaws in the common conception of the future is the idea that the future is something that happens to us rather than something we create. Laypeople of all sorts can have a positive impact on the course of the future by cooperating with like-minded individuals. Third, be ethical and moral. Immortalism is a subset of transhumanism, the philosophy that humanity deserves the right to improve itself technologically, and transhumanism originally derives from humanism. All human beings are equally valuable and special. The right to die is just as important as the right to live. Immortalism should be about expanding choices, not forcing a philosophical view onto others. Fourth, if you're over 50, you might want to look into getting a cryonics contract. Lastly, eat right and exercise! If you're someone who respects life in general, you should be concerned with the health of your own body.
Michael also recently answered the Seven Questions about the Future.
Excellent interview. Just a quick point:
In the rush to control the holy trinity of Transhuman technology (AI, biotech and nanotech), we cannot afford to overlook alternatives. Not every potential Transhumanistic technology can be pigeonholed into one of the aforementioned categories.
For example, the fields of infrasound and ultrasound acoustics have yielded numerous practical discoveries, techniques we could be using *right now* to improve our health and mental capacity. A recent study documented findings that infrasound (sound below the threshold of human hearing) can induce religious experiences in people. Surely some beneficial technologies can be derived from this, technologies that can help the world *now*.
My own work presents an alternative to Drexlerian nanotech that would allow molecular manufacturing. This technique could be practically implemented within a decade or less without the nightmarish threat of "grey goo" devouring the biosphere.
Let's fully develop the "Transhuman triumvirate" of Drexler-nano/bio/AI, but don't close your mind to other opportunities.
Posted by: Michael Haislip at September 11, 2003 03:53 PM

Let me say from the very beginning that I am biased, but I am truly impressed as usual, Michael, with your ability to make the issues as concise and comprehensible as possible.
This is an excellent interview, and you are a true credit not only to our organization but to any you involve yourself with.
Here you have done two things that I think are singularly important. First, you have demystified and clearly outlined the role of our organization as a voice that philosophically challenges the unnecessary limits many place upon life extension.
Second, you establish a reasonable assessment of how the various approaches relate to one another and where each currently stands.
As Mr. Haislip suggests above, and as I suspect you will concur, there is more involved than you discussed; but this is, after all, a relatively short interview whose questions are defined not by you but by the person asking them, and in an age of "sound bites" it can be recognized as, at the very least, an in-depth beginning.
In response to the concern about brevity that Mr. Haislip raises, and as a beginning to understanding the biological processes, I suggest the Nature web focus on human aging (http://www.nature.com/nature/focus/lifespan/), published September 11, 2003, which appears to have a serious scientific intent to outline the biological processes in great detail.
Posted by: Kenneth X. Sills at September 12, 2003 06:39 AM

Oh, boy. Another evangelist singing the praises of the Geek Rapture, aka the Singularity: “Some day the Big Computer in the Sky will appear and whisk us all away from this mean old carnal world to a clean, brightly-lit technoheaven, where we can all be rockets and have sex with other rockets.”
Uh, okay, whatever. The Transhumanist desire to shed the dirty world of imperfect flesh for a paradise of pure gnosis would be familiar to St. Augustine; he was trapped in Faustus's Manichaean miasma for nine years. Now come the new Manichaeans:
“To set the light-substance [mind] free from the pollution of matter was the ultimate aim of all Manichæan life. Those who entirely devoted themselves to this work were the "Elect" or the "Perfect", the Primates Manichaeorum...” [Source]
No thanks. I'm not interested in any religion that seeks to save the human race by destroying it, and the idea of my mind being reduced to the equivalent of a high-resolution scan for all eternity sounds more like Hell than Heaven to my ears. I'll take my God straight up, if you please: the eternal Logos of Christianity, not the manmade gnosis of the Transhumanists.
Posted by: B Chan at September 12, 2003 09:50 AM

Not sure I get the whole "Singularity", 'runaway AI becoming a god' thing (other than as a SciFi artifice). Artificial intelligence and artificial *consciousness* are a far cry from each other. Perhaps it would be helpful to think in terms of an artificial 'idiot savant' -- so-called AIs will be brilliant (far in excess of human brilliance) in the areas they are programmed to be brilliant in (and dumb as a post everywhere else). And it doesn't take an artificially-enhanced genius to realize that you don't give the AI-designing AI the tools to actually build its own designs.
(Besides, the EEs working on such projects surely know about runaway feedback loops and how to prevent them...)
- Eric.
I read this interview again, and I appreciate how you clearly outline everything in detail. It's a scary subject, especially for religious people. And I'm not humanly comfortable with the idea that one human second is equivalent to 10,000 superintelligent years; this makes my poor human brain sad...
But I guess there is no stopping the law of accelerating returns. I'll look forward to more of your ideas, Michael.
Posted by: dfowler at March 2, 2004 06:47 PM

Mr. Sills overlooks one thing in his comparison of the propellerheads to religious believers:
Propellerheads will change their view if what they believe turns out to be factually incorrect.
Also, the geeks don't want to destroy human lives. Contrast some religious believers, who will allow people to die in order to save a stem cell.
Nice interview. We measure its acceptance by the people who are willing to listen.
Posted by: anon more at March 23, 2004 09:03 PM

Oops, I attributed to Mr. Sills what I meant to attribute to B. Chan. It's the header above vs. header below thing.