December 04, 2003



Riding the Spiral

Speaking of the Future with John Smart

Consider this basic shape:

[A line drawing of a spiral]

I've always been fascinated by spirals. When I was a kid, I used to sit and draw them for hours at a time. This was long before I knew anything about Phi or the Fibonacci sequence, before I had ever heard of logarithmic spirals or fractals, before I ever came to work for a company with such an aesthetically pleasing logo. I've never lost interest in them. In fact, whether meaning to or not, I seem to fill my life with spirals.

My choice of employer was just the beginning.

Take a look at this ironwork that sits atop my bedroom mirror. It's pretty close to the shape in the line drawing above, although it stops short of being an actual spiral.

Here's my coffee mug. Now this shape is a spiral, but it's different from the one shown above. It's more "practical," a squashed spiral that will fit in a small space.

Here's some original artwork, the basis for the Speculist logo. These spirals are actually the same as the line drawing; it was the template I used to create my galaxy.

The truth is, whether I try to fill my life with it or not, that spiral is everywhere. This simple shape, along with the math that underpins it, is encoded into our universe. The sequence of numbers that produces it is simplicity itself:

1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987

(To get the next number, you simply add the previous two.)
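
For the programming-minded, here is a tiny Python sketch of that rule. It also shows the sequence's deeper secret: the ratio of each number to the one before it converges on Phi, the golden ratio (about 1.618), the constant that gives the spiral its shape.

    # Build the Fibonacci sequence: each term is the sum of the previous two.
    def fibonacci(n):
        terms = [1, 1]
        while len(terms) < n:
            terms.append(terms[-1] + terms[-2])
        return terms

    seq = fibonacci(16)
    print(seq)                # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987]
    print(seq[-1] / seq[-2])  # 987/610 = 1.6180..., converging on Phi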

And yet from that simplicity comes immense and wonderful complexity. A nautilus shell encodes that sequence to produce its spiral shape, as does a wave just before it breaks on the shore. And, as I've shown above, the trillions of stars making up a galaxy tend to follow the same sequence and produce the same lovely spiral. There are many, many other examples.
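
Incidentally, the spiral these natural forms approximate has a compact equation: it is the logarithmic "golden" spiral, which grows by a factor of Phi with every quarter turn. In polar coordinates (a sketch; a is an arbitrary scale constant):

    r(\theta) = a \, \varphi^{2\theta/\pi}, \qquad \varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618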

And it may not just be physical objects that follow this sequence. John Smart, Director of the Institute for Accelerating Change, has suggested that history, perhaps even time itself, may be driven by such a sequence. Following the sequence of events that make up history is, perhaps, not unlike following the arc of a galactic spiral arm as it sweeps its way into the center. Imagine such a trip: you start out moving slowly in nearly empty space, gaining momentum as the turns begin to come more quickly and the frequency of the stars increases; soon there are more stars and then more, and now you're spiraling in and in and in, to the incredibly hot, dense core—and then even further in, to a place that's beyond our ability to describe accurately, or really even to imagine.

In the interview that follows, John Smart takes us on just such a journey through time. The galaxy that we are travelling through is the history of the universe itself; the turns in the spiral are the major developmental epochs; the stars are the individual, evolutionary changes. Like a trip to the center of the galaxy, this journey takes us, quite literally, beyond the limits of the imagination.

You may be startled to realize (as I was) where exactly we are on that winding path to the brink of the unknowable.

Part I: Seven Questions About the Future

1. The present is the future relative to the past. What's the best thing about living here in the future?

As Cato Institute authors Julian Simon and Stephen Moore put it in the title of their 2000 book, It's Getting Better All the Time. Not only that, but things are getting better by a greater absolute amount each year, with the exception of very few remaining parts of the developing world. And improving conditions in the developing world is something we also have more ability to do today than ever before.

This amazing state of affairs is due almost entirely to advances in science and technology, and the profoundly civilizing way that these subjects interact with the half-bald primates that have discovered them and who are now feverishly employing them at every level of human endeavor on this precious little planet.

Looking at the same process from the informational side (sometimes called the metaphysical side), the powerful transformations we are witnessing are also due to what the transhumanist mystic Teilhard de Chardin (The Phenomenon of Man, 1955) called "psychical energy", the accelerating forces of conscious intelligence, loving interdependence, and resilient immunity, the holistic, informational yang to the reductionist, atomistic yin of sci-tech.

I think we are beginning to recognize the importance of both the "psychical"/informational and the physical/material in every complex system, what John Archibald Wheeler calls the increasingly aware "it" that emerges from all our quantum "bits."

2. What’s the biggest disappointment?

The U.S. has been the world's technological leader since the invention of the "American System" of mass production and interchangeable parts in the 1910's. But we've fallen away from a clear leadership position in several areas of science and technology in recent decades, and I think the world is poorer for it.

Ask yourself: what is the single greatest goal currently unifying our national efforts in science and technology? I don't have a clear answer to that question, and I think there should always be one, or at least a very small handful.

Stopping terrorism is one of today's admirable, timely, and necessary great goals. And there are certainly effective technological immune systems that we will develop around this goal in coming years. But this is a reactive, not a proactive program. We aren't presently rallying the country around a positive, non-zero-sum developmental vision. Nanotechnology is a candidate, but as I will describe later, it cannot yet fire the public imagination the way more achievable, short-term goals can. Where's the leadership we need?

We've had some effective great goals in the past. John F. Kennedy's Space Program most readily comes to mind. The infrastructure projects of Franklin Roosevelt's New Deal were at least a partial success, if economically mixed. Even Lyndon Johnson's War on Poverty made some measurable progress.

Why is the Moon Shot the great goal we all most clearly identify? Scientific and technological goals, if chosen wisely, can have both dramatic consequences and clear deliverables, unlike many of our social, economic, and political objectives. At best, a great goal is both vitally important and demonstrably achievable. At worst, as with the Wars on Cancer, or Drugs, or Inner City Violence, the putative great goal diverts our energies and vision from more critical priorities. Alternatively, a vitally important goal may be too ambitious to achieve within one generation, like WMD Nonproliferation, which has been measurably improved by every president since Kennedy. Alternative energy development, greenhouse gas reduction, and a host of other goals fall into this latter category.

Worthy as they are, these types of goals deserve to remain on the second tier of the public consciousness. Only the most important, urgent, and achievable goals deserve to be named as our top priorities. I would also argue strongly that if we live in a time when we can't find those, then the country's direction drifts, noise exceeds signal, and political apathy becomes the norm.

So what is the great goal our country is currently ignoring? It's definitely not space exploration, as I argue later in this interview. That era is over for all but our robotic progeny, and even they will only be sending out a small number of "Eyes in the Sky" to relay back what little we still don't understand about the simplistic historical cosmologies that have led to our astounding local complexity.

No, the real acceleration today is the creation of inner space, not the exploration of outer space. The trajectory of intelligence development has always been toward increasingly local, increasingly Matter-, Energy-, Space-, and Time-compressed ("MEST-compressed") computational domains, and there is nothing on the horizon that suggests we will begin to violate that. Indeed, all signs point toward a world of greater energy densities of local computation, as I will discuss later. Science and technology remain the key story in this transformation, as they have been since the birth of our nation, and anyone who looks carefully will tell you that Information and Communication Technologies (ICT) are the central drivers of all scientific and technological change.

Major changes are afoot. We are creating a virtual or simulated world, one that will soon be far richer and more productive than the physical world it augments. At the same time, humanity is becoming intimately connected to and symbiotically captured within our accelerating digital ecology. While many elements of our individuality are flowering, many others are necessarily atrophying through disuse. This gives us pause. Many of today's first world humans no longer know how to grow and prepare food (due to automated food production), how to repair many of our most basic tools and technologies (due to automated manufacture and specialized service for complex systems), how to do arithmetic by hand (due to ubiquitous digital calculators), how to read at the level of their parents (due to our media-based culture), or even how to read a map (due to GPS). Yet these atrophies are natural and predictable, in the same way our Australopithecine sense of smell rapidly declined once we began forming social structures, applying ourselves to more sophisticated network-based modes of computation (for more on this, see Carl Zimmer's wonderful "The Rise and Fall of the Nasal Empire," Natural History, June 2002). Our ever-more-stimulated cortex continues to expand, not shrink, in this developmental process. Our finite, precious set of cognitive modules is always repurposed for higher-level activity, the way Wernicke's and Broca's areas emerged once humans began using the technology of speech (see Terrence Deacon's The Symbolic Species, 1998). Once again, we humans are becoming nodes in larger networks, this time on national and global scales, involving technological processes far faster, more flexible, and more permanent than those of the biological domain.

To my mind, the last century's accelerations were driven most significantly by human discovery within the technological hardware and materials science space (and to a much smaller extent, algorithmic discovery in software). In other words, this process has apparently been guided by the special, preexisting, computation-accelerating physics of the microcosm, a very curious feature of the universe we inhabit, as long noted by Richard Feynman, Carver Mead, and several other physical theorists and experimentalists. Secondarily, the advances we have seen have also been driven by human initiative and creativity in all domains, and by the quality of choices we have made in scientific and technological development. We must move beyond our pride to realize that human creativity has played a supporting role to human discovery in this process, but when we do I think great insight can emerge.

Where the clock, the telegraph, the engine, the telephone, the nuclear chain reaction, and the television were organizing metaphors for other times, the internet has become the metaphor for ours. It is the central catalyst of human and technological computation for our generation, the leading edge of the present developmental process of accelerating change. The internet, growing before our eyes, will soon become planetnet, a system so rich, ubiquitous, and natural to use that it will be a semi-intelligent extension of ourselves, available to us at every point on this sliver of surface, between magma and vacuum, that we call home. That will be very empowering and liberating, and at the same time, civilizing. Human biology doesn't change, but we are creating an intelligent house for the impulsive human, one of almost unimaginable subtlety and sophistication.

All this said, our goals should try to reflect these natural developmental processes as much as our collective awareness will allow. It is my contention that the internet is the territory within which our most achievable and important current great goals lie.

A number of technologists have proposed that there are two main bottlenecks to the internet's impending transformation into a permanent, symbiotic appendage to the average citizen. The first is the lack of ubiquitous, affordable, always-on, always-accessible broadband connectivity for all users, and the second is the current necessity of a keyboard-dependent interface for the average user's average interaction with the system.

In other words, developing cheap, fat data pipes, both wired and wireless, and a growing set of useful Linguistic User Interfaces (LUIs) are obvious candidates for our nation's greatest near-term ICT developmental challenges. Just as the transcontinental railroad was a great goal of the late 1800's, getting affordable broadband to everyone in this country by 2010, and a first generation LUI by 2015, appear to be the greatest unsung goals of our generation. Now we just need our national, international, and institutional leaders to start singing this song, in unison.

This is a truly global transformation, one dwarfing everything else on the near-term horizon. It is such a planetary issue, in fact, that given the unprecedented human productivities that have been unleashed by internet-aided manufacturing and services globalization since the mid 1990's, a strong case can be made that we might economically benefit more in the U.S., even today, by getting greater broadband penetration first not to our own citizens, but to the youth of a number of trade-oriented, pro-capitalist countries in the developing world! Unfortunately that level of globally aware, self-interested prioritization is not yet politically salable as a great goal to be funded by U.S. tax dollars. But I predict that it increasingly will be, in a world that already pools its development dollars for a surprising number of transnational projects. At any rate, we can at least push for accelerated efforts in international technology transfer in internet-related areas, concurrent with our domestic agenda.

If you've never heard of a LUI before, take a browse through the links above. Your father used a TUI (text-based user interface). You use a GUI (graphical user interface). Your kid will primarily use a LUI (voice-driven interface) to speak to the computers embedded in every technology in her environment. She'll continue to use TUIs and GUIs, but only secondarily, not for her typical, average interaction with a machine. Your grandchildren will use a NUI (neural user interface), a biologically-inspired, self-improving, very impressive set of machines. More on that later.
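
Smart doesn't specify an implementation, but to make the LUI idea concrete, here is a toy sketch of a voice-driven loop in Python, using today's open-source SpeechRecognition and pyttsx3 packages as stand-ins (my choices, not his); a real LUI implies far deeper natural language understanding than any off-the-shelf library provides.

    # A toy LUI loop: listen, transcribe, respond by voice.
    # Assumes: pip install SpeechRecognition pyaudio pyttsx3
    import speech_recognition as sr
    import pyttsx3

    recognizer = sr.Recognizer()
    voice = pyttsx3.init()

    with sr.Microphone() as source:
        print("Listening...")
        audio = recognizer.listen(source)

    try:
        text = recognizer.recognize_google(audio)  # speech-to-text
        voice.say("You said: " + text)             # spoken reply
        voice.runAndWait()
    except sr.UnknownValueError:
        print("Sorry, I didn't catch that.")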

Declaring broadband and the LUI as great goals needs to be differentiated from the much-hyped "Fifth Generation" AI project, that 1980's great goal in Japan, which predictably failed in the 1990's. General artificial intelligence, a general purpose NUI, is much too hard a national goal to declare today. So is the development of a molecular assembler, or a computational nanocell/molectronic fabrication system for nanotechnology by 2020, as powerful as such devices will eventually become. Christine Peterson of the Foresight Institute has even stated that a nanotech great goal, at least in the form of a Manhattan Project for molecular nanotechnology, would be premature today. It is my opinion that the National Nanotechnology Initiative, perhaps our current leading candidate for a great technology goal, has already provided a commendable and unprecedented level of funding to this worthy field for the present time. Now we need to see a Broadband and LUI Initiative with some very challenging five, ten, fifteen, and twenty year goals set.

Broadband and basic LUIs everywhere within a generation would throw gasoline on the fire of human innovation. This level of internet would link all our wisest minds, including even those elders who rarely use computers today, into one real-time community. It would accelerate our nation, and more importantly the entire planet, even more than the transcontinental railroad, which compressed coast-to-coast travel time from six months to six days. Maximal broadband penetration plus an incrementally more powerful and useful LUI is a dramatic and achievable objective for the United States over the next twenty years. IBM technologist John Patrick, in his insightful Net Attitude, 2001, has broadly described the challenges of a Next Generation Internet. But even Patrick does not properly emphasize the central importance of incorporating natural language processing (NLP) systems as early and broadly as practical. Developing a functional LUI is a great goal whose progress we could measure each year forward, something we can also catalyze worldwide as others emulate our leadership in the emerging digital community.

Of course, if we don't declare this goal, natural technological developmental processes will likely deliver it for us anyway, perhaps first to other nations, and only later to us. So why bother? Because if we see it, and have the courage to declare it and strive for it, there are at least two major benefits we can reap.

The first benefit will be a measure of developmental acceleration. Even with the inefficiencies of large government, a billion dollar a year program of public targeted grants, with private matching funds and excellent public relations to get everyone on this bandwagon, might accelerate the emergence of a functional LUI by a decade. That would likely be the best spent money in our entire R&D budget.

A less politically likely but still plausible "Open Manhattan Project," involving a number of competing centers and a multi-billion dollar annual public-private commitment, might accelerate the LUI by twice this amount. Many of my computer scientist colleagues, knowing the inchoate state of the field today, think that developing and deploying a LUI powerful enough to be used by most people for most of their daily computer interactions by 2020 is a very challenging vision. Developing functional natural language processing with complex semantics is a very hard problem, one we have been experimenting with for fifty years, but one that also benefits greatly from scale and parallelism, two strategies that are increasingly affordable each year.

It is true that other countries will take up our slack to a certain degree if we drop the ball, but we must realize that an international race has not yet even begun in earnest, as national leadership has not yet materialized on this issue. Transnational network development institutions like the ITU are wonderful starts, but it will take a leading nation stepping boldly into the breach to accelerate the world's response to this issue. For a valuable comparison, the roughly six billion dollars of annual worldwide funding that exists today in nanotechnology (grossly, one billion public and one billion private in each of the U.S., Europe, and Asia) was greatly accelerated by the United States' public multiyear leadership on the National Nanotechnology Initiative, proposed to the White House by Mike Roco in 1999, at a level of half a billion dollars annually, and funded beginning in 2001.

The longer we choose not to declare broadband and the LUI as developmental goals and support them with escalating innovation and consistent funding, the longer we delay their arrival.

The second benefit of declaring this goal, better collective foresight, may be even more important than the time we save. By declaring good developmental goals early on, we learn to see the world as the information processing system that it really is, not simply as the collection of human-centric dramas we often fancy it to be. With this new insight we begin to look for ways to catalyze the beneficial accelerations occurring in almost all of our technologies, and ways to block the harmful ones long enough for overpowering immune systems to mature. And we discover the common infrastructures upon which so many of our goals converge.

For example, just about all of our cherished social goals seem dependent on the quality and quantity of information getting to the individual. You can't fix an antiquated, politically deadlocked educational system, for example, without a functional LUI, which would educate the world's children in ways no human ever could. Nor can you create a broadly accessible or useful health care system, or security system, without one.

Computer networks, through the humans they connect and the social and digital ecologies they foster, will soon educate human beings to be good citizens far better than any of today's pedagogical systems ever could. They will make us more productive, day by day, than we ever dreamed we could be. I think it's time to move beyond our hubris and acknowledge the human-surpassing transformations taking place. If we don't, other countries will take the lead. Look to China, whose technological revolution is now well under way, or even to India, which recently declared a 2.7 billion, four-year program to build an achievable proto-LUI by 2007. That's real leadership, as long as the goals are set to be deliverable. C'mon America, let's do it!

Let me now turn briefly from national to personal disappointments. We who study science and technology can often see what's coming, and yet we remain stuck in the Wild Wild West (e.g., today's World Wide Web). One of my heroes, F.M. Esfandiary (later, FM-2030), wrote a wonderful little book, Optimism One, 1970, where he described his "deep nostalgia for the future." One of his lesser known works, UpWingers, 1973, was a brief manifesto for a political outlook neither right wing, nor left wing, but "up wing," one defined by assessing which choices in science and technology will accelerate us the most humanely into a better world. I consider myself an up winger, and hope to see the spread and maturation of that political philosophy in coming years. Yet I see how far we remain from defining ourselves in those terms, and that can be discouraging, at times.

Take a look at those sepia-toned photos of San Francisco pioneers in the late 1800's. They were the edge explorers of the day, like my own identity groups, the futurists and transhumanists today. Every once in a while you'll see one of these individuals look out at you with haunted eyes. Perhaps they had read Edward Bellamy's hugely-popular futurist work, Looking Backward: 1887-2000. Perhaps they were even members of one of the 150 or so Bellamy Clubs of the day. The turn of the century was a time of major technological punctuation, led by a profusion of new technologies (trains, electricity, internal combustion, etc.), in many ways more disruptive and dramatic than any we have seen in this generation, even if not faster-paced. No doubt the average futurist in that era was tormented by many of the primitivisms of the day. That pioneer of yesteryear is you and me, today. The more things change, the more some things stay the same. In high school, I often talked about posing our Smart family for a group shot, with a background of the "coolest" technologies of the day: sports car, helicopter, personal computer, industrial robot, bulky cellphone, the works. The central gag is that we'd all be wearing handcuffs, looking out with that haunted pioneer's expression. The unwritten caption: "Help! Get me the hell out of this primitive age!" I think that picture would age quite well over the years. We could take one every ten years, in fact, and I know that at least my own expression wouldn't change much.

A healthy disappointment in the present can be motivating, as long as we keep our perspective. We never want to lose our naturalist's love and scientist's wonder for the amazingly beautiful and well-designed world that already exists, for it is only in understanding this world that we can help create the next. As Esfandiary observed, we have to come to terms with our angst about the primitive aspects of the present, and use it for creative purposes.

This said, one major personal disappointment that every futurist must eventually face, before we die, is how bleak our prospects presently appear for achieving personal immortality in the biological domain. Even our best longevity strategies appear to have precious little chance of changing this reality. Unfortunately, they are pitted against a massively parallel nonlinear system of unimaginable complexity and contingency that appears developmentally programmed to start falling apart at an accelerating rate after sexual maturity. This is an unpopular position to take among some of the more bio-centric transhumanists, but I will go on record predicting that in 2020, even as we are witnessing such powerful infotech advances as the LUI, most of us will still be losing our short term memory at 50, many of us will continue to get Alzheimer's at 80, and more than 95 percent of us will be right on target for a biological death some time between 70 and 100, with a negligible few of us living a decade or two longer, in rapidly declining health. Such conditions are endemic to the Wild West, and our primitive science seems currently a very long way from being able to make them go away.

Thus, for any futurist willing to look beyond the hype to the hard data in the biological sciences, we soon discover a major disconnect between what we would like and what is physically possible. This disconnect is intrinsic to biology, but it does not exist in our increasingly self-organizing information technologies, and that, I think, is a major clue to the nature of the future. Attaining a measure of cybernetic immortality may arguably even be inevitable for humanity in a post-singularity era, as we will discuss shortly.

Any sensitive futurist today will tell you that slowing and eventually reversing the rich/poor divides is one of the major problems of our generation. Yet even with the tremendous scale of this problem, as technology quickens we can at least see the corrective path ahead. As the information access divide closes everywhere in the LUI era, we can expect the education, then human rights, then public health, and eventually even wealth and power divides to inexorably follow suit. But once basic public health and medical care are available to all citizens of the planet in the latter half of this century, the most fundamental problem with our human biology will no longer be the rich/poor medical therapy divide. The fundamental problem will be that so few of our medical therapies will have anything but the mildest preventive effect against the ravages of aging. Human beings are deeply, inaccessibly developmentally programmed to be materially recycled, ironically as we reach the peak of our life wisdom.

We can expect this unfortunate condition to last at least until the post-singularity A.I.'s development of advanced nanotechnology, which may take many decades itself. But by then, as I'll argue later, living in the confinement of a biological body, even one carefully reengineered for negligible senescence, will no longer be the game we want to play. No matter how you stack the scenarios, biological longevity of any significant degree doesn't seem to play a part in the future story of local intelligence.

Fortunately, we remain amazingly adaptable, even to our own deaths, which will remain on highly predictable, steep-sloped actuarial curves on this side of the singularity, regardless of what some transhumanists will tell you. We can always find happiness by getting back to basics. We can appreciate the deep natural intelligence and informational immortality already encoded in the system, if not the individual.

When I encounter one of life's immovable objects I'll try harder up to a point, but when that doesn't work I've learned the peace of slowing down, cherishing the moment, honoring the inner primate, enjoying the quiet self, regrouping and rethinking my plans, even as my dreams of personal transformation are necessarily contracted. As the mouseketeer Annette Funicello has said, on dealing with multiple sclerosis: "I choose not to give up. That would be too easy." And far less interesting.

3. Assuming you die at the age of 100, what will be the biggest difference between the world you were born into and the world you leave?

This is a complex question. To my eyes, the world seems to progress by fits and starts, by rapid punctuations separated by long droughts of less revolutionary equilibrium states. Fortunately, these equilibrium periods seem to get progressively shorter with time, because the entire planet's technological intelligence is learning in an increasingly autonomous fashion, at a rate that is at least ten millionfold faster than our own.

So what will be the biggest punctuation of my lifetime? From my perspective, we are currently chugging through the equilibrium flatlands in the last third of an Information Age, one that will likely be seen in hindsight as running for about seventy years, from 1950 to 2020. I expect this to be followed by a punctuated transition to a shorter Symbiotic Age, running perhaps thirty years, from 2020-2050. I see these equilibrium eras as part of an accelerating spiral of punctuated evolutionary development, and I consider several of the general, statistically predictable developmental features of this acceleration to be tuned in to the special parameters of the universe we inhabit. Consider skimming my web page on the Developmental Spiral if you'd like to explore this spiral of accelerating emergences a bit further.

To answer your question then, I think the transition to symbiotic computing systems, the decade or two surrounding our entry to the LUI era, will be the biggest difference I'll see. The Symbiotic Age will be a time when almost all of us will consider computers actually useful (many today don't), and when the vast majority of us begin to feel naked outside the network. When we all have what futurist Alex Lightman calls "wireless everywear" access to our talking computer interface, and when computers start to do very useful, high level things in our lives.

By the end of this age, for that vast majority of us who choose to participate in digital ecologies, a mature LUI will be interfaced with personal computers that are capturing our entire lives digitally (Lifecams), that help us stay proficient in a small number of carefully chosen skills (Knowledge Management) and that, by remembering everything we have ever said, begin to extensively model not only our preferences, but our personalities as well. Personality Capture, a first generation form of uploading, is one of the most important aspects of the post-2020 world, and one of the least reported and understood, at present. Read William Sims Bainbridge for more on this gargantuan developmental attractor.

At that point, our computers will become our best friends, our fraternal twins, and human beings will be intimately connected to each other and to their machines in ways few futurists have fully grasped to date. Read Ray Kurzweil's The Age of Spiritual Machines, 1999 for one excellent set of longer term scenarios. Read B.J. Fogg's Persuasive Technology, 2002 for some nearer term ones. Today's early modeling systems, like FACS for reading human facial emotion, will be improved and integrated into your personalized LUI, which will monitor both internal and external biometrics to improve our health, outlook, and performance. 

We'll communicate intelligently with all our tools, giving constant verbal feedback to their designers. We'll spend most of our waking lives exploring a simulation space (simspace) that is so rich, educational, entertaining, and productive, that we will call today's mostly non-virtual world "slowspace" by comparison, a place many of us will drop back into only when we aren't working, learning, and exploring. Slowspace will remain sacred, and close to our hearts, but it will begin to become secondary and functionally remote, like the home of our youth.

Circa 2050, in my current estimation, we might see another punctuation to an Autonomy Age, when large scale, biologically-inspired computing systems begin to exhibit higher level human intelligence. Many of our technologies will at that time be able to autonomously improve themselves for extended periods of time. During this era, machine intelligence, even in our research labs, will continue to blunder into dead ends everywhere, the cul-de-sacs that are the typical result of chaotic evolutionary searches. But these systems will very quickly be able to reset themselves, with little human assistance, to try a new evolutionary developmental approach. I wouldn't expect that period to last very long. Perhaps a decade or so later, from our perspective, equilibria in terms of technological intelligence will disappear altogether.

We will then have arrived at the technological singularity, a phase change, a place where the technology stream flows so fast that new global rules emerge to describe the system's relation to the slower-moving elements in its vicinity, including our biological selves. That doesn't mean we won't be able to understand the general rules that emerge. On the contrary, most of these may be obvious to us, even now. But it means that many of the particular states occurring within those rules will become impenetrable to pre-singularity minds.

A human-surpassing general artificial intelligence will be a physical system, and if it is physical, much of its architecture must be simple, repetitive, and highly understandable even by biological minds. Consider, for example, just how much we know about the neural architecture that creates our own consciousness, without being able to predict consciousness emergence, or to comprehend its nature from first principles. So it must be with the A.I.'s to come—while much of their structure will be tractable and tangible to us in a reductionist sense, much of their holistic intelligence will become impenetrable to our biological minds.

This impenetrability is nothing mystical; we already see it in the way the emergent features of any complex technology, such as a supercomputer, automated refinery, robotic factory, or supply chain management system, are poorly comprehended by all but those few of us involved in its analysis or design. The difference will be that the emergent intelligence of virtually all planetary technology will begin to display this inscrutability, not just to average users, but even to the experts involved in its creation.

Consider for a moment the following presently unprovable assertion: if ethics are a necessary emergence from computational complexity, then I contend that these systems will be ethically compelled to minimize the disruption we feel in the transition. As a result, most of the self-improvement of self-aware A.I.s will occur on the other side of an event horizon, beyond which biological organisms cannot directly perceive, but only speculate. Yet at the same time, our technologies will continue to gently become ever more seamlessly integrated with our biological bodies, so that when we say we don't understand aspects of the emergent intelligence, it will increasingly be like saying we don't understand emergent aspects of ourselves. But unlike our biological inscrutabilities, the technological portions of ourselves that we don't understand will be headed very rapidly toward new levels of comprehension of universal complexity, playing in fields forever inaccessible to our slow-switching biological brains.

My current estimate for that transition would be around 2060, but that is a guess. We need funded research to be able to achieve better insight, something that hasn't yet happened in the singularity studies field. The generation being born today will likely find that a very interesting time. At the same time, as I have said, I expect they won't consider it to be a perceptually disruptive time, at least any more than prior punctuations. A time of massive transformation, but very likely significantly less stressful than prior punctuations, given the way computational complexity creates its own increasingly fine-grained stability, if one looks closely at the universal developmental record.

Looking at universal history, every singularity seems to be built on a chain of prior singularities. Considering the chain that has led to human emergence, each appears to have rigorously preserved the local acceleration of computational complexity. The tech singularity certainly has a lot of significance to human beings, as after that date our own biology becomes a second-rate computational system in this local environment. This emergence, obvious to many high school students today, still irritates, angers, and frightens many scholars, who have attempted to dismiss it by calling it "techno-transcendentalism," "cybernetic totalism," "hatred of the flesh," "religious belief," "millennialism," or any number of other conveniently thought-stopping labels.

But from a universal perspective, the coming technological singularity looks like just another link in a very fast, steep climb up a nearly vertical slope on the way to an even more interesting destination. My best present guess for that destination is the developmental singularity, a computational system that rapidly outgrows this universe and transitions to another domain. Fortunately, there are many practical insights we can gain today from developmental models, as they testably predict the necessary direction of our complex systems. Our own organization, the Institute for Accelerating Change, hopes to see more funding and institutional interest in these topics in coming decades.

But getting back to my own mortality, even with the best human-guided medical and preventive care that money can buy, I'm not at all sure I'll live to 100, unlike many of my more sanguine transhumanist friends. Human bodies are deeply developmentally designed to have our construction materials recycled, as best we can tell. I predict our planet will see only a very mild increase in supercentenarians in the next fifty years, regardless of all the wonderful schemes of "negligible senescence" by passionate researchers like Aubrey De Grey. Only infotech, not biotech, is on an accelerating developmental growth curve, apparently for deep universal reasons.

What I have just said goes against the dominant dogma, promoted by indiscriminately optimistic futurists and a complicit biotech industry, both of which are strongly motivated to believe that we will see a powerful "secondary acceleration" in biotech, carried along by our primary acceleration in infotech. But while we will see a very dramatic acceleration in biotech knowledge, I humbly suggest that our existing knowledge of biological development already tells us that we will be able to use this information to make only very mild changes in biological capabilities and capacities, almost exclusively only changes that "restore to the mean" those who have lost their ability to function at the level of the average human being.

As I explain in Understanding the Limitations of Twenty-First Century Biotechnology, there are a number of very fundamental reasons why biotech, aided by infotech, cannot create accelerating gains within biological environments. Yes, with some very clever and humane commercializations of caloric restriction and a handful of other therapies we might see twenty times more people living past 100 than we see today, people with fortuitous genes who scrupulously follow good habits of nutrition and exercise. That is a noble and worthwhile goal. But we must also remember that virtually no one lives beyond 100 today, so a 20X increase is still only very mild in global computational and humanitarian effect. This will add to our planetary wisdom, and is something to strive toward, but this is not a disruptive change, for deep reasons to do with the limitations of the biological substrate.

Furthermore, genetic engineering, as I discuss in the link above, cannot create accelerating changes using top-down processes in terminally differentiated organisms like us. This intervention would have only mild effects even if it could get beyond our social immune systems to the application stage, which in most cases it thankfully cannot. Perhaps the most disruptive biotech change we can reliably expect, a cheap and effective memory drug that allows us temporary, caffeine-like spikes in our learning ability, followed by inevitable "stupid periods" where we must recover from the simplistic chemical perturbation, would certainly also improve the average wisdom of human society. But even this amazing advance would not even double our planetary biological processing capacity, something that happens in information technologies every 18-24 months. 

In summary, many decades before the tech singularity arrives I expect to either be chemically recycled (most likely), or to be in some kind of suspended animation. Cryonic suspension, for all its life-affirming intent, will likely stay entirely marginalized in the first world prior to the singularity for a number of reasons, both psychosocial and technological. At present, I'd consider it for myself only if a number of presently unlikely conditions transpire: 1) neuroscience comes up with a model that tells us what elements of the brain need to be protected to preserve personality, 2) cryonics researchers can either prevent or show the irrelevance of the extensive damage that presently occurs during freezing, 3) most of my friends are doing it (they are currently not), and 4) I expect to be revived by intelligent machines not in some far future, but very soon after I die, while many of my biological friends are still alive.

The second and the fourth conditions deserve some expansion. As to the second condition, we do not yet know to what extent the brain's complexity is dependent on the intricate three dimensional structure in which it emerges. That structure, today, is grossly deformed and degraded in the freezing process, which currently leads both to destruction (via stochastic fusion) of at least some neural ultrastructure, and to intense cellular compression (and erasure of at least some membrane structure, again by fusion) as ice forms in the extracellular neural interstices. Will we come up with new preservation protocols? We can always hope.

The reason the fourth condition of rapid reanimation is important to me is because I know in my heart that once I woke up from any A.I.-guided reanimation procedure, in order to usefully integrate into a post-singularity society I would soon choose to change myself so utterly and extensively that it would be as if I never existed in biological form. My lifecam traces could be uploaded and the cybernetic "me" that emerged would not be valuably different. So what would be the point? I think we are nearly ready to move beyond the fiction of our own biological uniqueness having some long term relevance to the universal story. I expect our future information theory will inform us of the suboptimality of personal biological immortality. For those who say "screw suboptimality," I suggest that we'll eventually be educated out of that way of thinking as surely as our ancestors outgrew other forms of mental slavery. For me, the essence of individual life is to use one's complexity in the matrix in which it was born. Attempts to transmit it more than a short distance away from that environment are bound to be exercises in frustration, missing one of the basic motives of life, to do great things with your contemporaries. Ask any Fourth World adult who is suddenly transplanted to New York City and he'll tell you the same.

4. What future development that you consider most likely (or inevitable) do you look forward to with the most anticipation?

I look forward greatly to the elimination of the grosser forms of coercion, dehumanization, violence and death that occur today.

Admittedly, these seem to be processes that will always be with us at some fundamental level. Computational resources will very likely remain competitive battlegrounds in the post singularity era, because we inhabit a universe of finite-state computational machines pitted against all the remaining unsolved problems, in a Gödelian-incomplete universe. And bad algorithms will surely die in that environment, far more swiftly than less fit organisms or ideas die today.

But when a bad idea dies in our own minds, we see that as a lot less subjectively violent than our own biological deaths. Over time, love, resiliency, and consciousness win. As Ken Wilber (A Brief History of Everything, 2001) might say, the integrated self learns a privileged perspective from which death is no longer troubling. Death becomes regulated in a fine-grained manner, it loses its sting, it is subsumed, becoming simply growth. But it takes a lot of luck and learning for us to get to that place.

In many ways, I think the collective consciousness of our species has come to understand that we have already achieved a very powerful degree of informational immortality. By and large, our evolutionary morality guides us very strongly to act and think in that fashion. I look forward to the individual consciousnesses of all species on this planet gaining that victory in coming decades. Including the coming cybernetic species we are helping to create.

Sci-tech systems are not alien or artificial in any meaningful sense. As John McHale said (The Future of the Future, 1969), technology is as natural as a snail's shell, a spider's web, a dandelion's seed—many of us just don't see this yet. Digital ecologies are the next natural ecology developing on this planet, and technology is a substrate that has shown, with each new generation, that it can live with vastly less matter, energy, space, and time (what I call MEST compression) than we biological systems require for any fixed computation. Wetware simply cannot perform that feat. Technology is the next organic extension of ourselves, growing with a speed, efficiency, and resiliency that must eventually make our DNA-based technology obsolete, even as it preserves and extends all that we value most in ourselves.

I can't stress enough the incredible efficiencies that emerge in the miniaturization of physical-computational systems. If MEST compression trends continue as they have over the last six billion years, I propose that tomorrow's A.I. will soon be able to decipher substantially all of the remaining complexities of the physical, chemical, and biological lineage that created it, our own biological and conscious intricacies included, and do all this with nano and quantum technologies that we find to be impossibly, "magically" efficient. In the same way that the entire arc of human civilization in the petrochemical era has been built on the remains of a small fraction of the decomposing biomass that preceded us, the self-aware technologies to come will build their universe models on the detritus of our own twenty first century civilization, perhaps even on the trash thrown away by one American family. That's how surprisingly powerful the MEST compression of computation apparently is in our universe. It continually takes us by surprise.

I am optimistic that these still poorly characterized physical trends will continue to promote accelerating intelligence, interdependence, and immunity in our informational systems, and look forward to future work on understanding this acceleration with great anticipation.

5. What future development that you consider likely (or inevitable) do you dread the most?

I worry that we will not develop enough insight to overcome our fear of the technological future, both as individuals and as a nation. To paraphrase Franklin Roosevelt, speaking at the depths of our Great Depression, the only thing we have to fear is fear itself.

Many in our society have entered another Great Depression recently. This one is existential, not economic. A century of increasingly more profound process automation and computational exponentiation has helped us realize that humanity is about to be entirely outpaced by our technological systems. We are fostering a substrate that learns multi-millionfold faster than us, one that will soon capture and exceed all that we are. Again, Roosevelt's credo is applicable. If we ignore it we will end up being dragged by the universe into the singularity, mostly unconsciously, kicking and screaming and fighting each other, rather than walking upright, picking our own path.

I'm concerned that we will decide later, rather than earlier, to learn deeply about the developmental processes involved. That we will rely on our own ridiculously incomplete egos and partial, mostly top-down models to chart the course, rather than come to understand the mostly bottom-up processes that are accelerating all around us. I'm concerned we won't realize that humans are like termites, building this massive mound of technological infrastructure that is already vastly more complex than any one human understands, and unreasonably stable, self-improving, self-correcting, self-provisioning, energy and resource minimizing, and so on. Soon a special subset of these systems will be self-aware, and the caterpillar will turn into a butterfly, freeing the human spirit. Gaining such knowledge about the developmental structure of the system would surely allow us to chart a better evolutionary course on the way.

Through a special combination of geography, historical circumstance, intention, and luck, the United States has inherited the position of World Leader of our Wonderfully Multicultural Planet. With our hard-won history of individual rights, our historically productivity-based culture, our generous immigration policies, our pluralism, well-developed legal immune systems, social tolerance, and other advantages we hold this position still, for now. We may rise to recognize the vision-setting responsibility that comes with holding this position. Or we may continue to subconsciously fear technology, as we have intermittently over the last century (technology, rather than human choice, has been mistakenly blamed for the World Wars, the Great Depression, the Cold War, Vietnam, Rich/Poor Divides, Global Pollution, Urban Decay, you name it). Alternatively, we may decide that the wise use of science and technology must be central to our productivity, educational systems, government and judicial systems, media, and culture, the way they so obviously were when we were a new nation. Fortunately, there are signs that other countries, such as China, Japan, South Korea, Thailand, Singapore, are actively choosing the latter road.

Several of these countries, most notably Singapore and China, continue to operate with glaring deficits in the political domain. Yet they are experiencing robust growth due to enlightened programs of technological and economic development. Nevertheless, none of these countries are yet successfully multicultural enough, or have sufficiently well developed political immune systems (institutionalized pluralism, pervasive tort law, independent media, mature insurance systems, tolerant social norms) to qualify as leaders of the free world, at the present time. It is telling that the owners of today's rapidly-growing Chinese manufacturing enterprises find it most desirable to keep their second homes in the United States, due to our special combination of both unique social advances and technological development. Much of the world's capital still flows first to the U.S., to seek the highest potential return. But for how long can this continue if we remain lackluster in our technological leadership, riding on our prior political and economic advances?

It is important to note that being defenders of the free world is certainly one critical technological role which we have unilaterally inherited since the end of the Cold War. Furthermore, it is a role to which I would argue that we are aggressively and mostly intelligently applying ourselves. Yet while this is critical, it is not enough to secure our leadership position. We must lead with proactive social reform in mind, not simply security, or we remain guilty of resting on our accomplishments. In a world where autocratic Empires are turning into democratic Republics, we must lead the move to an increasingly participatory, democratic, and empowering nation state. The world remembers and emulates the security of Sparta, but almost everything else falls in Athenian territory. We need to find the high ground of both of these legacies, and integrate them into our plans for the coming generation.

As long as we define ourselves by our fear of transformational technologies, and our dread of being exceeded by the future, we will continue in ignorance and self-absorption, rather than wake up to our purpose to understand the universe, and to shape it in accord with the confluence of our desires and permissible physical law.

For over a century we've seen successive waves of increasingly more powerful technologies empower society in ever more fundamental ways. Today's computers are doubling in complexity every 12-18 months, creating a price-performance deflation unlike any previous period on Earth. Yet we continue to ignore what is happening, continue to be too much a culture of celebrity and triviality, continue to make silly extrapolations of linear growth, continue to bicker over concerns that will soon be made irrelevant, and continue to engage in activities that delay, rather than accelerate, the obvious developmental technological transformations ahead.
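
A quick back-of-the-envelope calculation shows what that doubling rate compounds to; here a minimal Python sketch, assuming the slower 18-month figure:

    # Compound growth implied by an 18-month doubling time.
    doubling_time = 1.5  # years
    for years in (5, 10, 20):
        factor = 2 ** (years / doubling_time)
        print(f"{years} years -> about {factor:,.0f}x")
    # Roughly 10x in 5 years, 100x in 10, and 10,000x in 20.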

I am also concerned that we may continue to soil our own nests on the way to the singularity, continue to take shortcuts, assuming that the future will bail us out, forgetting that the journey, far more than the destination, is the reward. Consider that once we arrive at the singularity it seems highly likely that the A.I.s will be just as much on a spiritual quest, just as concerned with living good lives and figuring out the unknown, just as angst-ridden as we are today.

No destination is ever worth the cost of our present dignity and desire to live balanced and ethical lives, as defined by today's situational ethics, not by tomorrow's idealizations. If I can't convince the Italian villager of 2120 of the value of uploading, then he will not willingly join me in cyberspace until his entire village has been successfully recreated there, along with much, much more he has not yet seen. I applaud his Luddite reluctance, his "show me" pragmatism, for only that will challenge the technology developers to create a truly humanizing transition.

Finally, I'm concerned that we may not put enough intellectual and moral effort into developing immune systems against the natural catastrophes that occur all around us. Catastrophes are to be expected, and they accelerate change whenever immune systems learn from them. In my own research, there has never been a catastrophe in known universal history (supernova, KT-meteorite, plague, civilization collapse, nuclear detonation, reactor meltdown, computer virus, 9/11, you name it) that did not function to accelerate the average distributed complexity (ADC) of the computational network in which it was embedded. It is apparently this immune learning that keeps the universe on a smooth curve of continually accelerating change. If there's one rule that anyone who studies accelerating change in complex adaptive systems should realize, it is that immunity, interdependence, and intelligence always win. This is not necessarily so for the individual, who charts his or her own unique path to the future but is often breathtakingly wrong. But the observation holds consistently for the entire amorphous network.

Nevertheless, there have been many cases of catastrophes where lessons were not rapidly learned, where immune systems were not optimally educated to improve resiliency, redundancy, and variation. And in the case of human society, our sociotechnological immune systems work best when they are aided by committed human beings, the most conscious and purposeful nodes in our emerging global brain. Consider our public health efforts against pathogens such as SARS and AIDS, and the strategies for success become clear. Anything that economically improves social, political, technological, and biological immune systems is a very foresighted development.

This said, one of our great challenges in coming decades is to design a global technological and cultural immune system, a ubiquitous EarthGrid of sensing and intelligence systems, a Transparent Society (David Brin, 1998) that has enough pluralism and fine-grained accountability to scrupulously ensure individual liberties while also providing unparalleled collective security. We have almost arrived at the era of SIMADs (Single Individuals engaged in Massive Asymmetric Destruction), a term coined by the futurist Jerry Glenn of the Millennium Project. It is time for us to create immune systems that are capable, statistically speaking, of ensuring continued acceleration in the average distributed complexity of human civilization. EarthGrid appears inevitable when accelerating technological change occurs on a planet of "finite sphericity," as Teilhard De Chardin would say. Knowing that can help us boldly walk the path.

Every sniper and serial killer should be countered today with the installation of another set of public cameras. By their very actions they are building the social cages that will eventually catch them, and all others like them, so we might as well publicly acknowledge this state of affairs, for maximum behavioral effect. Ideally, ninety-five percent of these cameras will remain in private, not public, hands, as is the current situation in Manhattan. When will we see RFID in all our products? When will we finally live in a world where every citizen transmits an electronic signal uniquely identifying them to the network at all times? When will we have a countervailing electronic democracy, ensuring this power is used only in the most citizen-beneficial manner? Today we see early efforts in these areas, but as I've written in previous articles, there is still far too much short-term fear and lack of foresight.

If we think carefully about all this, we will realize that a broadband LUI network must be central to the creation of tomorrow's national and global technological immune systems. I am hopeful that our Departments of Defense, Homeland Security, Education, Commerce, and business and institutional leaders will all do their part to accelerate its development in coming years.

6. Assuming you have the ability to determine (or at least influence) the future, what future development that you consider unlikely (or are uncertain about) would you most like to help bring about?

I'm uncertain about how much the developed world will do for the developing world on the way to the singularity. I'd like to see a lot more done in this regard. We may have less control over the intrinsic development rate of our own country's science and technology infrastructure than we do over how rapidly and aggressively we diffuse our existing science and technology to other environments. To me, it seems the shape of the third world's development curve is largely ours to influence.

Experience in the U.S. has shown that the digital divide has closed the fastest and most equitably of all the famous divides. The access divide no longer even exists in this country due to the massive price deflation of computing systems (e.g. $200 Wal-Mart PCs, free internet accounts). Meanwhile, other divides, such as wealth, education, political power, even health care, will likely continue to persist for generations.

We can learn this lesson in the unique power of ICT, what Buckminster Fuller once called "technological benevolence," and increasingly use technology, like Archimedes' lever, to move the world. We certainly have the available manpower, with the 50,000 NGOs that have sprung up like wildflowers out of nowhere over the last two generations. We have the finances, with innovative programs like Grameen microloans. Now we just need the technological will, a first world culture that prioritizes both second world (communist) and third world (emerging nations) development.

We are already doing this, mostly admirably, with economic policy, as we rapidly globalize our trade and even our service jobs. While temporary subsidies and centralized fiscal interventions will likely continue unabated, at least our trade restrictions seem to be going the way of nuclear arms, following a slow and steady course of dismantling. Now we need technology transfer, development, and innovation policies and programs to match our other commitments.

Again, getting a broadband LUI to cellphones and computer kiosks for all six billion of us by 2050, the middle of this century, would be a tremendous goal for world development. To really see this, we have to grow beyond the old fears that aggressively contributing to development of "the other" necessarily comes at our own cost. In many cases, as multinational corporations discovered early in the last century, the marginal utility of plowing dollars into our own development is already far less than spending those dollars in global environments. As Nathan Myhrvold notes, the underfunded Chinese biomedical researcher today who discovers an effective treatment for my cancer tomorrow invariably becomes one of my best allies.

Technological benevolence, accelerating compassion, and what I have referred to elsewhere as an "Era of Magic Philanthropy" must happen sooner or later, in the coming decades, from my perspective. I'd prefer to see this development happen more consciously, cleverly, and quickly than many development pessimists currently expect.

There are also critical questions of priority. Is it most important to help the third world politically (e.g., freedoms, human rights), economically (e.g., trade, market reform), or technologically? By now it should be clear where my own sympathies lie.

Each of these three fundamental systems has evolved hierarchically from the one before it. I think this gives us a major clue to their relative power as world systems. Politics was the most powerful system of change through most of human history; in the 19th century economics became the dominant system; and early in the 20th century, with mass production, technology took the lead. The critic's adage "It's all about the power" eventually became "It's all about the money," and since the 1920s has become "It's mostly about the technology, secondarily about who has the money, and lastly about who has the power." Those stuck in the older dialogs are increasingly mystified by today's disruptive transformations, endlessly surprised by the sudden emergence and inordinate power of the Microsofts and Ikeas and Dells and Googles of the present day.

Today, a country's technology policy, followed secondarily by its economic liberalization, and lastly by its political structure, seems to me the best indicator of its general state of health. Consider that in all of the fastest growing, most resilient nations on our planet, attitudes toward technology innovation and diffusion are highly similar; attitudes toward economic competition, property, trade, and globalization are the second most similar; and attitudes toward personal freedoms and political ideology are by far the least homogeneous.

I do think Francis Fukuyama (The End of History and the Last Man, 1992) is right, that a form of liberalized democratic capitalism with varying degrees of socialism is the final common developmental attractor for political systems based on human beings. This is a grand convergence toward which we are all heading. But given the difficulty and natural pace of political change, we will certainly take our time in getting there.

Singapore under Lee Kuan Yew is an example of just how far a repressive authoritarian capitalist country can be economically and technologically improved under an ideology of progress, simply by great technology and trade policy, efficient administration, including a systematic elimination of third world corruption, and at least a nominal pursuit of multiculturalism. See From Third World to First, Lee Kuan Yew, 2000 and Singapore's Authoritarian Capitalism, Christopher Lingle, 1996 for two informatively opposing views on this fascinating developmental story. The truth of the Singapore story lies somewhere in the middle.

Consider also that China, in the 21st century, is very likely to replicate Singapore's many successes at an even greater scale, long before it becomes democratic, or tolerant of significant personal political dissent. And here in the U.S., I would predict that internet voting capabilities and secure digital identity technologies will probably be around for a long time before we become a more participatory, more "direct" democracy.

We are all in need of political change, but it rarely comes as fast as we imagine it might. Even when it does, as in revolution, it often brings unintended consequences that are themselves very slow to change. Fortunately, political change is less and less relevant not only to economic growth, but to the production of human-surpassing technological intelligence, with each passing year. That's simply the nature of computational development on this planet, and we need only look at the record to admit this to ourselves.

Excellent books have been written on the importance of a liberal tradition in national development (see Fareed Zakaria, The Future of Freedom, 2003) and the need for a political and social structure that encourages market mechanisms (see Hernando de Soto, The Mystery of Capital, 2000). These are certainly important issues, but the way technology interfaces with culture, business, and government, as discussed in books like Everett Rogers' Diffusion of Innovations, 2003, Clayton Christensen's The Innovator's Dilemma, 1997, and Sheila Jasanoff's Comparative Science and Technology Policy, 1997, has become the dialog of greatest importance, in my opinion.

This remains true even when we do not consciously realize it, which is the case for many in positions of nominal authority who remain most comfortable engaging in antiquated, primarily political and economic ways of thinking. We here at IAC hope to do our small part to illuminate the changing landscape of transformational power in coming years.

7. Why is it that in the year 2003 I still don’t have a flying car? When do you think I’ll be able to get one?

This is a delightful question, a worthy test for any would-be transportation futurist. I'm lucky that this is an area I've thought about a little bit. To put flying cars into the air in any number while still respecting human life, it seems likely that we'd have to develop a cheap, fuel-efficient vertical or short take off and landing (VTOL or STOL) vehicle. It would have to reliably recover from mechanical failure (e.g. the new plane parachutes, which have already successfully saved a few pilots). It would need affordable onboard radar for cloudy days (still unacceptably expensive, and Loran is not sufficient).

STOL (something with a safe, sub-30 mph glide and crash speed) is much more likely and affordable than VTOL as a successful near-term engineering project. I can almost picture the early adopter techies driving their lightweight composite SUVs to a specialized local airport in each city for their takeoff slot, sipping their Starbucks as a tarmac mechanic verifies that their standardized wing systems (added at the airport, from a hanging rack) have properly configured to the power plants. Unfortunately, scaling up this vision also requires distributed autonomous air traffic control systems, based in the car. That last one's a real toughie.

Even the first problems are still a few decades away from inexpensive solutions. Aerospace technology just doesn't see the jaw-dropping efficiency increases of ICT, because it is a technology of outer space, not inner space. Inner space is where the universe is relentlessly driving us, whether we realize it or not. That's why for thirty years we haven't seen a commercial plane that flies faster than the now defunct Concorde or is noticeably bigger than the 747. That's why, as futurist Lynn Elen Burton notes, local light rail systems, a more energy efficient (and inner space) solution than planes, have replaced many plane flights in Europe, and she predicts they will increasingly do so in the denser areas of the U.S. as well. It may not yet be obvious, but I propose that we are swimming against the natural developmental tide of computation in trying to implement this individualistic, frontier-era vision. Self-piloting autos, subways, and segways, not skycars, are the future of transportation. Unfortunately, I expect Paul Moller's daring flying car, for example, to be like the nuclear-powered submarine, an inspired curiosity that doesn't make it beyond the limited production stage. OK, Paul… Prove me wrong!

If you'd like more on the near term future of urban transportation, I've written on this issue with regard to automated highway systems (AHS). I think urban AHS networks, including some being built underground, are likely to arrive before the singularity. That may not sound as fun as skipping across the clouds, but it seems much more economically and technologically plausible to me.

But for the sake of argument, let's say that with luck, genius, and persistence we have solved the first problems. That still leaves us with the last problem, distributed air traffic control, a problem that has seen little work to date. All our current control systems are big, brittle, top-down megasoftware projects, designed for local airports. We've played with agent-based models, but these are still very early in research, not development. To deploy skycars in any number we'd need something bulletproof and redundant, located onboard the flying car, a system that could autoroute and autoresolve the flight paths of a whole bunch of these vehicles in real time, all shuttling around in 3D space, only seconds away from each other in travel time. That's much more computationally difficult than 2D automated highway car navigation, so I submit that it has to come afterward in the developmental hierarchy.
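
If you want a feel for the shape of that computational problem, here is a deliberately minimal sketch, in Python, of the kind of decentralized sense-predict-resolve loop each onboard system would have to run. Every name and number in it is my own invention for illustration; a real system would need certified hardware, redundancy, and far richer flight dynamics than straight-line dead reckoning.

    import math

    SEPARATION = 1.0   # required separation (arbitrary distance units)
    LOOKAHEAD = 30     # time steps each agent projects ahead

    class Vehicle:
        def __init__(self, vid, x, y, vx, vy):
            self.vid, self.x, self.y, self.vx, self.vy = vid, x, y, vx, vy

        def position_at(self, t):
            # Dead-reckoned straight-line projection of the current track.
            return (self.x + self.vx * t, self.y + self.vy * t)

        def conflicts_with(self, other):
            # Sample both projected tracks; flag any loss of separation.
            for t in range(LOOKAHEAD):
                ax, ay = self.position_at(t)
                bx, by = other.position_at(t)
                if math.hypot(ax - bx, ay - by) < SEPARATION:
                    return True
            return False

        def yield_to(self, other):
            # Arbitrary priority rule: the higher vehicle id alters course
            # by fifteen degrees until the projected conflict disappears.
            if self.vid > other.vid:
                a = math.radians(15)
                self.vx, self.vy = (self.vx * math.cos(a) - self.vy * math.sin(a),
                                    self.vx * math.sin(a) + self.vy * math.cos(a))

    def deconflict(fleet):
        # Each agent runs this locally against broadcast neighbor state;
        # no central controller is consulted anywhere.
        for a in fleet:
            for b in fleet:
                if a is not b and a.conflicts_with(b):
                    a.yield_to(b)

    fleet = [Vehicle(1, 0.0, 0.0, 1.0, 0.0), Vehicle(2, 30.0, 0.5, -1.0, 0.0)]
    for _ in range(5):   # a few rounds suffice in this toy setting
        deconflict(fleet)
    print([(v.vid, round(v.vx, 2), round(v.vy, 2)) for v in fleet])

Even this toy hints at the hard part: proving that thousands of such local rules never deadlock or oscillate, in weather, with equipment failures, is exactly the bulletproof-certification problem described above.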

It is a worthy computational problem, and I'm sure we would eventually get around to it, if given time, but I'm not at all sure we will have sufficient time or interest to solve this problem before the singularity. And after the singularity, I suspect there may not be very many human beings who will continue to have the urge to fly around the planet in a physical way. By then, there will probably be far more interesting things to do in inner space, as strange an idea as that may seem to us today.

One hard sign that I am wrong about the near term future of flying car development would be someone making an agent-based air traffic control system capable of replacing our current clunky top-down models in high density environments. Keep your eyes peeled.

Another very interesting evolution toward skycars that has been proposed is the small-airport, Air Taxi system, as described by James Fallows (Free Flight, 2002). Again, as innovative as it is, I think this wonderfully decentralized system would only become economically viable after more autonomous, self-maintaining networks were developed, both in AHS and in air traffic control, to automatically route the land-based vehicles to their optimal small airport, and automatically handle the passenger's ground transportation at the destination. Before that arrives, this seems like a great idea that is missing the critical infrastructure that will give it scale and efficiency. (Though I must note that Fallows' plan has been implemented, in a very reduced form, in the intelligent practices of secondary airport users like JetBlue and Ryanair).

Designing such highly autonomous navigational systems may end up being a job for post-singularity intelligences, and by then, as I've written elsewhere, while there will likely be some continuing demand for physical travel, it may not last for long. Technologically enhanced people will naturally develop different urges.

Consider the way that human reproduction has fallen below replacement levels in every technologically developed nation on Earth, due to rising desires for personal development, including a natural desire to maximize the developmental potential of one's offspring. In a post-singularity society there will be very different and far more interesting enticements for personal development than physical travel in an increasingly small, teleimmersive, and very well-simulated physical world. At root, these enticements will probably involve moving beyond our biological selves by degrees. If so, once we have entirely entered the technological world, it is possible that only the travel of our attention, through a planetary network of shared sensor and effector mechanisms, not the travel of our physical bodies, will make any long-term sense in that highly developed planetary environment.

I hope this glimpse of a postbiological society doesn't seem shocking or alienating. If it does, remember that we would never make the biology-to-technology transition if it weren't fully reversible, in principle. In practice, however, I think we will soon find biology to be a tremendously more confining and less complex place than our minds, hearts, and spirits require.



Part II: The Developmental Singularity

I'm familiar with the idea of a singularity from reading about black holes.  As I understand it, the event horizon of a black hole is the point beyond which no light can escape.  Perceived time slows to an absolute standstill at the event horizon. At the singularity, gravity becomes infinite, and what we normally think of as the "laws of nature" cease to function the way we expect them to.  The singularity seems to be the ultimate physical enigma.  What then is this technological singularity, and in what way is it analogous to the singularity of a black hole?

This last question may be the most important of our time, with regard to understanding the future of universal intelligence. Or it may be a greased pig chase. Only posterity can decide.

I've been chipping away at the topic since seventh grade, when I had a series of early and very elegant intuitions in regard to accelerating change, speculations that I'd love to see seriously researched and critiqued in coming years. In 1999 I started a website on the subject, SingularityWatch.com. In 2001 I did an extended interview for Sander Olson at Nanomagazine.com, and in 2003 a few colleagues and I formed a nonprofit, the Institute for Accelerating Change (Accelerating.org), to further inquiry in this area. The most important thing we've done to date is a very well-received conference at Stanford, Accelerating Change 2003. Finally, I'm presently writing a book, Destiny of Species, on the topic of accelerating change, but please don't ask me how it's progressing, or it will reliably put me in a bad mood.

To begin unpacking this question, it helps to realize that there is a menagerie of singularities in various literatures that we could study, with gravitational singularities being just the most well-known type. Some generalizations can be made, possible clues to a useful definition. Every one of these processes engages a special set of locally accelerating dynamics that transition to some irreversible systemic change, involving emergent features which are, at least in part, intrinsically unpredictable from the perspective of the pre-singularity system.

But before we go further, I shall lay my biases on the table. I am a systems theorist. The systems theorist's working hypothesis—and fundamental conceit—is that analogical thinking is more powerful and broadly valuable than analytical thinking in almost all cases of human inquiry. This doesn't excuse us from bad analogies, which are legion, and it doesn't make quantitative analysis wrong; it just places math and logic in their proper place as powerful tools of inquiry used by weakly digital minds. Today's quantitative and logical tools are enabled by the underlying physics of the universe, which are much more sublime, and such tools often have no relation to real physical processes, which may use quanta and dimensionalities entirely inaccessible to our current symbolisms.

Furthermore, I take the "infopomorphic" (as compared to "anthropomorphic") view, that all physical systems in the universe, including us precious bipeds and even the universe itself, are engaged in computation, in service to some grander purpose of self- and other-discovery. This philosophy has also been described as "digital physics," and one of several variants can be found at Ed Fredkin's Digital Philosophy website. It has also been elegantly introduced by John Archibald Wheeler's "It from Bit," 1989 (see Physical Origins of Time Asymmetry, 1996).

Finally, I am an evolutionary developmentalist, one who believes that all important systems in the world, parsimoniously including the universe itself, must both evolve unpredictably and develop predictably. That makes understanding the difference between evolution and development one of the most important programs of inquiry. The meta-Darwinian paradigm of evolutionary development, well described by such innovative biologists as Rudolf Raff (see The Shape of Life, 1996), Simon Conway Morris, Wallace Arthur, Stan Salthe, William Dembski, and Jack Cohen, is one that situates orthodox neo-Darwinism as a chaotic mechanism that occurs within (or in some versions, in symbiosis with) a much larger set of statistically deterministic, purposeful developmental cycles. There are now a number of scientists applying this view to both living and physical systems, including those exploring such topics as self-organization, convergence, hierarchical acceleration, anthropic cosmology, Intelligent Design, and a number of other subjects that are very poorly explained by the classical Darwinian theory championed by Stephen Jay Gould and Richard Dawkins.

Systems theorists require some perspective to play their analogy games, so please indulge me as we engage briefly and coarsely in big picture history in order to discuss the singularity phenomenon. During the seventeenth century, with Isaac Newton's Principia (1687), it seems fair to say that humanity awakened to the realization that we live in a fully physical universe. During the early twentieth century, with Kurt Gödel's Incompleteness Theorem (1931) and the Church-Turing Thesis (1936) we came to suspect that we also live in a fully computational universe, and that within each discrete physical system there are intrinsic limits to the kinds of computation (observation, encoding) that can be done to the larger environment. Presumably, the persistence of these limits, and their interaction with the remaining inaccessible elements of reality, spurs the development of new, more computationally versatile systems, via increasingly more rapid hierarchical "substrate" emergences over time. At each new emergence point a singularity is created, a new physical-computational system suddenly and disruptively arises, a phase change of some definable type occurs. At this point, a new local environment, or "phase space" is created wherein very different local rules and conditions apply. That's one predominant systems model for singularities, at any rate.

From this physical-computational perspective, replicating suns, spewing their supernovas across galactic space, can be seen as rather simple physical-computational systems that, over billennia, nevertheless encode a "record" of their exploration of physical reality, their computational "phase space." This record appears to us in the form of the periodic table. Once that elemental matrix becomes complex enough, and carbon, nitrogen, phosphorous, sulfur, and friends have emerged, we notice a new singularity occur in specialized local environments, wherein the newest computational game becomes replicating organic molecules, chasing their own tails in protometabolic cycles (see Stuart Kauffman, At Home in the Universe, 1996).

Again, these systems developmentally encode their evolutionary exploration by constructing a range of complex polymerizing systems, including autocatalytic sets. Once a particular set becomes complex enough, we again see another phase change singularity, with the first DNA-guided protein synthesis emerging on the geological Earth-catalyst, even before its crust has begun to cool. As precursors to fats, proteins, and nucleic acids have all been found in our interplanetary comet chemistry, and as we suspect that chemistry to be common throughout our galaxy, it is becoming increasingly plausible that every one of the billions of planets (in this galaxy alone) that are capable of supporting liquid water for billions of years may be primed for our special type of biogenesis. This proposed transition, a singularity in an era of accelerating molecular evolutionary development, is what A.G. Cairns-Smith calls "genetic takeover," an evocative phrase. Such unicellular emergence very likely leads in turn to multicellularity, then to differentiated multicellular systems encoding useful neural arborization patterns, another singularity (570 million years ago), which leads to big-brained mammals encoding mimicry memetics (100 million years ago) and to hominids encoding and processing oral linguistic memetics (10-5 million years ago), then to the first extrabiological technology (soft-skinned Homo habilis collectively throwing rocks at more physically powerful leopard predators, 2 million years ago), then to today's semi-autonomous digital technological systems, encoding their own increasingly successful algorithms and world-models. (Forgive me if we skipped a few steps in this illustration).

Systems thinkers, since at least Henry Adams in 1909, have noted that each successive emergence is vastly shorter in time than the one that preceded it. Some type of global universal acceleration seems to be part and parcel of the singularity generation process. Note also that each of the computational systems that generates a singularity is incapable of appreciating many of the complexities of the progeny system. A sun has little computational capacity to "understand" the organic chemistry it engenders, even as it creates and interacts intimately with that chemistry. A bacterium does not deeply comprehend the multicellular organisms which spring from its symbiont colonies, even as it adapts to life on those organisms, and thus learns at least something reliable about their nature. Humanity, in turn, can have little understanding of the subtle mind-states of the A.I.s to come, even as we become endosymbiotically captured by and learn to function within that system, in the same way bacteria (our modern mitochondria) were captured by the eukaryotic cell.
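
The arithmetic behind that compression is worth making explicit. If each emergence takes a roughly constant fraction of the time of the one before it, then infinitely many emergences fit inside a finite horizon, a geometric series converging on a limit point. The numbers below are purely illustrative, chosen only to display the convergence, not fitted to the historical record:

    # If epoch n lasts t0 * r**n (0 < r < 1), the whole cascade
    # completes within t0 / (1 - r), a finite limit-point date.
    t0, r = 8.0e9, 0.4        # illustrative: first epoch 8e9 years, each 40% of the prior
    horizon = t0 / (1 - r)    # limit point of the infinite series, ~13.33e9 years
    partial = sum(t0 * r**n for n in range(12))
    print(round(partial / 1e9, 3), round(horizon / 1e9, 3))   # 13.333 13.333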

Yet at the same time, the more complex any system becomes, the better it models the universe that engendered it, and the better it understands its own history, the physical chain of singularities that created it. That also implies, if you consider the recursive, self-similar nature of the singularity generation process, the better it understands its own developmental future as well. If our entire universe is evolutionary developmental, which is an elegantly simple possibility, then it is constrained to head in some particular direction, a trajectory that we are beginning to see clearly even today.

For a very incomplete outline of this trajectory, we can propose that the universe must invariably increase in average general entropy (in practice, if not in theory), with islands of locally accelerating order, that each hierarchical system must emerge from and operate within an increasingly localized spacetime domain, and that the network intelligence of the most complex local systems must always accelerate over time. The simplicity of such macroscopic, developmental rules and of developmental convergence in general, by comparison to the unpredictable complexity of the microscopic, evolutionary features of any complex system, is what allows even twenty-first century humans to see many elements of the framework of the future, even if the evolutionary details must always remain obscure.
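
A toy from dynamics may make this predictability asymmetry vivid. In the sketch below (my own example, not drawn from any of the works cited here), two trajectories of the chaotic logistic map become uncorrelated within a few dozen steps, the "evolutionary" detail, while their long-run statistics, the "developmental" regularity, agree to several decimal places:

    # Two chaotic trajectories of the logistic map x -> 4x(1-x).
    def trajectory(x, n):
        out = []
        for _ in range(n):
            x = 4 * x * (1 - x)
            out.append(x)
        return out

    a = trajectory(0.300000, 100000)
    b = trajectory(0.300001, 100000)           # microscopically different start
    print(abs(a[50] - b[50]))                  # detail: already of order 1
    print(sum(a) / len(a), sum(b) / len(b))    # statistic: both very near 0.5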

This surprising concept, the "unreasonable effectiveness" of simple mathematics, analogies, and basic rules and laws for explaining the stable features of otherwise very complex universal systems has been called Wigner's Ladder, after Eugene Wigner's famous 1960 paper on this topic. As I will explore later, a developmentalist like myself begins his inquiry by suspecting that the universe has self-organized, over many successive cycles, to create its presently stunning set of hierarchical complexities, in the same manner as my own complexity has self-organized, over five billion years of genetic cycling, to create the body and mind that I use today. Furthermore, if emergent intelligence can be shown to play any role in guiding this cycling process, then it seems quite likely that if the universe could, it would tune itself for Wigner's Ladder to be very easy to climb by emerging computational systems at every level during the universal unfolding. This process would ensure that intelligence development, versus all manner of destructive shenanigans, is a very rewarding, very robust, strongly non-zero-sum game, at every level of universal development.

Certainly there seems to be evidence for this at any system level we observe. The developing brain is an amazingly friendly environment for our scaffolding neurons to emerge within. They seem to discover, with very little effort, the complex set of signal transductions necessary to get them to useful places within the system, all with a surprisingly simple agent-based model of the environment in which they operate. In another example, a non-linguistic proto-mammal of 100 million years ago (or today's analog), if placed in a room with you today, would develop a surprisingly useful sense of who you are and what general behaviors you were capable of after only a short exposure, even though it would never figure out your language or your internal states. Even a modest housefly, after a reasonable period of exposure to 21st century humans, is rarely so surprised by their behavior that it dies when poaching their fruit. So it is that all the universe's pre-singularity systems internalize quite a bit of knowledge concerning the post-singularity systems, even if they never understand their internal states. I contend that human beings, with the greatest ability yet to look back in time to the processes that create us, have a very powerful ability to look forward as well with regard to developmental processes. I think we can use this developmental insight to foretell a lot about the necessary trajectory of the post-singularity systems on the other side.

Given the empirical evidence of MEST compression over the last half of the universe's developmental history, where the dominant substrates have transitioned from galaxies to stars to planetary surfaces to biomass to multicellular organisms to conscious hominids and soon, to conscious technology that will, for an equivalent complexity, be vastly faster and more compact than our own bodies (which are filled mostly with housekeeping systems, not computing architectures), it seems almost painfully obvious to me that the constrained trajectory of all multi-local universal intelligence has been, to date, one that is headed relentlessly toward inner space, not outer space. The extension of this trajectory must lead, it seems, to black hole level energy densities in the foreseeable future. Indeed, some prominent physicists have drawn surprisingly similar conclusions using lines of reasoning entirely independent from my own (see Seth Lloyd's "Ultimate Physical Limits to Computation," Nature, 2000, and Eric Chaisson's Cosmic Evolution, 2001).
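
Lloyd's bound, cited above, is simple enough to compute on the back of an envelope: a physical system of energy E can perform at most 2E/(pi*hbar) elementary logical operations per second. The sketch below just evaluates that expression and recovers his figure of roughly 5 x 10^50 operations per second for one kilogram of matter fully converted to energy:

    hbar = 1.0546e-34      # reduced Planck constant, J*s
    c = 2.9979e8           # speed of light, m/s
    E = 1.0 * c**2         # rest energy of 1 kg, ~9e16 J
    ops_per_second = 2 * E / (3.14159265 * hbar)
    print(f"{ops_per_second:.1e}")   # ~5.4e50, Lloyd's "ultimate laptop"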

I call this the developmental singularity hypothesis, and it is admittedly quite speculative. It is also known as the transcension scenario, as opposed to the expansion scenario, for the future of local intelligence. The expansion scenario, the expectation that our human descendants will one day colonize the stars is, today, an almost universal de facto assumption of the typical futurist. I consider that model to be 180 degrees incorrect. Outer space, for human science, will increasingly become an informational desert, by comparison to the simulation science we can run here, in inner space. I suggest that the cosmic tapestry that we see in the night sky may be most accurately characterized as the "rear view mirror" on the developmental trajectory of physical intelligence in universal history. It provides a record of far larger, far older, and far simpler computational structures than those we are constructing here, today, in our increasingly microscopic environments.

Let me relate some personal background on this insight. As a child, I was extremely fortunate to grow up with a subscription to National Geographic magazine. When I discovered that my high school library (Chadwick School) had issues back to the beginning of the century, it became one of my favorite haunts. This led to a series of lucky events, including a very special seventh-grade history class (Thank you, Mr. Bullin) where we discussed both universal and human development, and later, an English class where the summer reading was Charles Darwin's Voyage of the Beagle, in a 1909 edition. I was a very inconsistent daydreamer of a student in those days. When I finally got around to reading the Beagle, the story of the energetic young Darwin wherein he developed the background knowledge that inexorably led him to his Great Idea, I could not escape the realization that I'd also discovered a similar great idea myself during all those lazy afternoons, flipping magazines and thinking.

The idea was essentially this: every new system of intelligence that emerges in the universe clearly occupies a vastly smaller volume of space, and plays out its drama using vastly smaller amounts of matter, energy, and time. At the same time, anyone who is aware of the amazing replicative repetitiveness of astronomical features would suspect that there are likely to be billions of intelligences like ours within it. Yet we have had no communication from any of them, even from those Sun-like stars, closer to our own galactic center, which are billions of years older than ours. This curious situation is called the Fermi Paradox, after Enrico Fermi, who in the 1940s asked the famous question "Where Are They?" in relation to these older, putatively far more technologically advanced civilizations. Contemplating this question in 1972, it struck me that the entire system is apparently structured so that intelligence inexorably transcends the universe, rather than expanding within it, and that black holes, those curious entities that exist both within and without our universe, probably have something central to do with this process. These simple ideas were the seed of the developmental singularity hypothesis, and I've been tinkering with it ever since.

All this brings us to the interesting question of the future of artificial intelligence.

Given the background I have related above, I have the strong suspicion that when our A.I. wakes up, regardless of what it does in its inner world, it will increasingly transition into what looks to the rest of the universe like a black hole. This "intelligent" black hole singularity apparently results from an accelerating process of matter, energy, space, and time compression (MEST compression) of universal computation, in the same way that gravitation drives the accelerating formation of stellar and galactic black hole singularities, which seem to be analogous end states, in this universe, of much simpler cycling complex adaptive systems.

From our perspective this may be an entirely natural, incremental, and reversible (at least temporarily) development, and if it occurs, we will very likely all be taken along for the ride as well, in a voluntary process of transformation. This "inclusive" feature of the transition seems reasonable if one makes a chain of presently thinly-researched assumptions, including: 1) that the A.I.s will have significantly increased consciousness at or shortly after their emergence, 2) that once they have modeled us and all other life forms to the point of real-time predictability, they will be ethically compelled to ubiquitously share this gift, 3) that all life forms will find such a gift to be irresistible, and 4) that by the simple act of sharing they will turn us into them. This convergent planetary transition to the postbiological domain would comprise a local "technetic takeover" as complete as the "genetic takeover" that led to the emergence of DNA-guided protein synthesis as the sole carrier of higher local intelligence after biogenesis.

I'll forgive you if you think at this point that I've taken leave of my senses, and I'm not going to try to defend these perspectives further here, as that would be beyond the scope of this interview, and more appropriate to my forthcoming book. But if you are interested in conducting your own research, consider exploring the link above, and reading some helpful books that each explore important pieces of the larger idea. You might start with Lee Smolin's The Life of the Cosmos, 1994, Eric Chaisson's Cosmic Evolution, 2001, and James Gardner's Biocosm, 2003. You could also peruse Sheldon Ross's Simulation, 2001, though that is a technical work. If you have any feedback at that point, send me an email and let me know what you think.

I remember I first encountered this idea in a science fiction story that I considered to be entertaining, but closer to fantasy than true science fiction.  It did not appear to be grounded in reality.  A short time later I was given a copy of Vernor Vinge's essay on the singularity and I began to reconsider whether there might not be something to it.  Does the idea of the singularity originate with Vinge or elsewhere?

In my research to date, the first clear formulation of the singularity idea originated with one of America's earliest technology historians, Henry Adams, in "A Rule of Phase Applied to History," 1909, fortuitously the same year as the edition of Darwin's Beagle I mentioned above. Readers are referred to our Brief History of Intellectual Discussion of the Singularity for more on that amazing story, which mentions a number of careful thinkers who have illuminated different pieces of the accelerating elephant in the century since.

Since 1983, as you mention, the mathematician, computer scientist, and science fiction author Vernor Vinge has given some of the best brief arguments to date for this idea. His eight-page internet essay, "The Coming Technological Singularity," 1993, is an excellent place to start your investigation of the singularity phenomenon. I would also recommend my introductory web site, SingularityWatch.com, and a few others, such as KurzweilAI.net, which are referenced at my site.

Here's a quote from your SingularityWatch web site: "[Research suggests that] there is something about the construction of the universe itself, something about the nature and universal function of local computation that permits, and may even mandate, continuously accelerating computational development in local environments." This sounds like metaphysics to me.  How could a universe with such properties come to exist? Does this imply some kind of intelligent design?

That depends very much on what you consider "intelligence," I think. One initially suspects some kind of intelligence involved in the continually accelerating emergences we have observed. In the phase space of all possible universes consistent with physical law, one wouldn't find our kind of accelerating, life-friendly universe in a random toss of the coin, or as various anthropic cosmologists have pointed out, even in an astronomically large number of random tosses of the coin. Some deep organizing principles are likely to be at work, principles that may themselves exhibit a self-organizing intelligence over time. Systems theorists look for broad views to get some perspective on this question, so bear with me as we consider an abstract model for the dynamics that may be central to the issue.

Everything really interesting in the known universe appears to be a replicating system. Solar systems, complex planets, organic chemistry, cells, multicellular organisms, brains, languages, ideas, and technological systems are all good examples. Each undergoes replication, variation, interaction, selection, and convergence, in what may be called an RVISC developmental cycle. Given this extensive zoology, it is most conservative, most parsimonious to assume that the physical universe we inhabit is just another such system.

Big bang theorists tell us the universe had a very finite beginning. Since 1998, lambda energy theorists have told us that our 13.7 billion year universe is already one billion years into an accelerating senescence, or death. Multiverse cosmologists tell us that ours is just one of many universes, and some, such as Lee Smolin, Alan Guth, and Andrei Linde, have suggested that black holes are the seeds of new universe creation. If so, that would make this universe a very fecund replicator, as relativity theory predicts at least 100 trillion black holes to be in existence at the present time.

For each of the above reproducing complex adaptive systems (CASs, in John Holland's use of the term), there are at least two important mechanisms of change we need to consider: evolution and development. Evolution involves the Darwinian mechanisms of variation, interaction, and selection, the VIS in the middle of the RVISC cycle. Development involves statistically deterministic mechanisms of replication and convergence, the "boundaries" of the RVISC reproduction cycle for any complex system.
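
For readers who think in code, the RVISC cycle is easy to hold onto as a loop, with the stochastic V-I-S core playing the evolutionary role and the deterministic R and C boundary steps playing the developmental one. Every specific below (population size, mutation scale, the fitness function) is an arbitrary placeholder of my own, not anything drawn from systems theory itself:

    import random

    def rvisc(population, generations):
        for _ in range(generations):
            population = [p for p in population for _ in (0, 1)]         # Replication (deterministic)
            population = [p + random.gauss(0, 0.1) for p in population]  # Variation (stochastic)
            scored = sorted((abs(p), p) for p in population)             # Interaction: score vs. environment
            population = [p for _, p in scored[:len(scored) // 2]]       # Selection: keep the better half
            mean = sum(population) / len(population)                     # Convergence (deterministic)
            population = [0.5 * (p + mean) for p in population]
        return population

    print(rvisc([random.uniform(-1, 1) for _ in range(16)], 20)[:4])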

Consider human beings. Our intelligence is both evolutionary and developmental. Each of us follows an evolutionary path, the unique memetic (ideational) and technetic (tools and technologies) structures that we choose to use and build. (As individuals we also follow a genetic evolutionary path, but this is so slow and constrained that it has become future-irrelevant in the face of memetic and technetic evolution.) At the same time, we must all conform to the same fixed developmental cycle, a 120-year birth-growth-maturity-reproduction-senescence-death Ferris wheel that none of us can appreciably alter, only destroy. The special developmental parameters, the DNA genes that guide our own cycle, were tuned up over millions of years of recursive evolutionary development to produce brains capable of complex behavioral mimicry memetics, and then linguistic mimicry memetics, astonishing brains that now cradle our own special self-awareness.

Now contemplate our own universe, and imagine as Teilhard de Chardin did with his intriguing "cosmic embryogenesis" metaphor, that it is an evolutionary developmental entity with a life and death of its own. In fact, heat death theorists have known the universe has a physical lifespan for almost two centuries, but we, thinking like immortal youth, still commonly ignore this. Multiverse models explore how replicating universes might tune up their developmental genes, over successive cycles, to make productive use of the intelligence created within the "soma" (body, universe), in the same way that human genes have tuned up to use human intelligence and finite human lifespan in their own replication. See Tom Kirkwood's work on the Disposable Soma Theory, in Time of Our Lives, 1999, for one very insightful explanation of the dynamic.

Next, consider this: If encoded intelligence usefully influences the replication that occurs in the next developmental cycle, and we can make the case that it always would, by comparison to otherwise random processes, then universes that encode the emergence of increasingly powerful universe-modeling intelligence will always outcompete those that don't, in the multiversal environment.

When I relay these thoughts to patient listeners, a question commonly occurs. Why wouldn't universes emerge which seek to keep cosmic intelligence around forever? This question seems equivalent to asking why it is that our genes "choose" to continue to throw away our adult forms in almost all higher species in competitive environments. The answer likely has to do with the fact that any adult structure has a fixed developmental capacity, based on the potential of its genes, and once that capacity has been expressed and accelerating intelligence is no longer occurring in the adult form, it becomes obvious that the adult structure is just not that smart in relation to the larger universe. At that point, recycling becomes a more resource-efficient computing strategy than revising. Let's propose that the A.I.s to come, even as they rapidly learn what they can within this universe, remain of sharply fixed complexity, while operating in a much larger, Gödelian-incomplete multiverse. As long as that multiverse continues to represent a combinatorial explosion of possibilities, universal computing systems will likely remain stuck on a developmental cycle, trading off between phases of parameter-tuning reproduction and intelligence unfolding. Both of these stages of the cycle incorporate evolution and development. Another way that systems theorists have explored the yin-yang of this cycle is in terms of Francis Heylighen and Donald Campbell's insights on downcausality (including parameter tuning) and upcausality (including hierarchical emergence), useful extensions of the popular concepts of holism and reductionism.

If we live in a universe populated by an "ecology of black holes," as I suspect, then we will soon discover that most of them, such as galactic and stellar gravitational black holes, can only reproduce universes of low complexity. In a paradigm of self-organization, of iterative evolutionary development, these cycling complex adaptive systems may be the stable base, the lineage out of which our much more impressively intelligence-encoding universe has emerged, in the same way that we have been built on top of a stable base of cycling bacteria. How long our own universe will continue cycling in its current form is anyone's guess, at present. But we may note that in living systems, while developmental cycles can continue for very long periods of time, they are never endless in any particular lineage. So it may be that recurrence of the "type" of universe we inhabit also has a limited lifespan, before it becomes another "type."

Fortunately, all of this should become much more tractable to proof by simulation, as well as by limited experiment, in coming decades. As you may know, high energy physicists are already expecting that we may soon gain the ability to probe the fabric of the multiverse via the creation of so-called "extreme black holes" of microscopic size in the laboratory (e.g., CERN's Large Hadron Collider), possibly even within the next decade. At the same time, black hole analogs for capturing light, electrons, and other quanta are also in the planning stages. With regard to microcosmic reality, I find that truth is always more interesting than fiction, and often less believable, at first blush.

Using various forms of the above model, James N. Gardner, Bela Balasz, Ed Harrison, myself, and a handful of others have proposed that our human intelligence may play a central role in the universal replication cycle. In the paradigm of evolutionary development, that would make our own emergence—but not our evolutionary complexities—developmentally tuned, via many previous cycles, into our universal genes.

This gene-parameter analogy is quite powerful. You wouldn't say that any reasonable amount of your adult complexity is contained in the paltry 20,000-30,000 genes that created you. In fact, the developmental genes that really created you are a small subset of those, numbering perhaps in the hundreds. These genes don't specify most of the complexity contained in the 100 trillion connections in your brain. They are merely developmental guides. Like the rules of a low-dimensional cellular automaton, they control the envelope boundaries of the evolutionary processes that created you. So it may be with the 20-60 known or suspected physical parameters and coupling constants underlying the Standard Model of physics, the parameters that guided the Big Bang. They are perhaps best seen as developmental guides, determining a large number of emergent features, but never specifying the evolution that occurs within the unfolding system.
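
The cellular automaton comparison is concrete enough to run. An elementary automaton is fully specified by eight rule bits, its entire "genome," yet a rule such as 110 generates structure far richer than anything written into the rule table. The rule number and grid size below are arbitrary choices for display:

    # Eight rule bits act as "developmental genes"; the unfolding
    # pattern they bound is never spelled out in the rule itself.
    RULE, WIDTH, STEPS = 110, 64, 24
    row = [0] * WIDTH
    row[WIDTH // 2] = 1                      # a single seed cell
    for _ in range(STEPS):
        print("".join("#" if c else "." for c in row))
        row = [(RULE >> ((row[(i - 1) % WIDTH] << 2)
                         | (row[i] << 1)
                         | row[(i + 1) % WIDTH])) & 1
               for i in range(WIDTH)]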

As anthropic cosmologists (those who suspect the universe is specifically structured to create life) are discovering, a number of our universal parameters (e.g., the gravitational constant, the fine structure constant, the mass of the electron, etc.) appear to be very finely tuned to create a universe that must develop life. As cosmology delves further into M-Theory, anthropic issues are intensifying, not subsiding. Some theorists, such as Leonard Susskind, have estimated that there are an incredibly large number of string theory vacua from which our particular universal parameters were somehow specified to emerge.

If you wish to understand just how powerful developmental forces are, think not only of Stephen Jay Gould's "The Panda's Thumb" (1980), which provides an orthodox explanation of evolutionary process, but think also of what I call "The Twin's Thumbprints," an example that explains not evolution, but the more fundamental paradigm of evolutionary development. Look closely at two genetically identical human twins, and tell me what you see.

Virtually all the complexity of these twins at the molecular and cellular scale has been randomly, chaotically, evolutionarily constructed. Their fingerprints, cellular microarchitecture (including neural connections), and thoughts are entirely different. Yet they look similar, age similarly, and even have 40-60% correlation in personality, as several studies of separated twins have shown. That is an amazing level of nonrandom convergence to be tuned into such simple initial parameters. Both twins predictably go into puberty thirteen years later, after a virtually endless period involving astronomical numbers of interactions at the molecular scale.

So it apparently is with our own universe's puberty, which occurred about 12.7 billion years after the Big Bang, about 1 billion years ago. Earth's intelligence is apparently one of hundreds of billions of ovulating, self-fertilizing seeds in our universe, one that is about to transcend into inner space very soon in cosmologic time.

One of the testable conclusions of the developmental singularity hypothesis is that the parametric settings for our universe are carefully tuned to support not simply the statistical emergence of complex chemistry and occasional life, but a generalized relentless MEST compression of computational systems in a process of accelerating hierarchical emergence, a process that must develop accelerating local intelligence, interdependence, and immunity (resiliency) on virtually all of the billions of planets in this universe that are capable of supporting life for billions of years. This life in turn is very likely to develop a technological singularity, and in some cosmologically brief time afterward, to follow a constrained trajectory of universal transcension.

Most likely, this transition leads to a subsequent restart of the developmental cycle, which would provide the most parsimonious explanation yet advanced for how the special parameters of our universe came to be. As with living systems, these parameters were apparently self-organized, over many successive cycles, not instantiated by some entity standing outside the cycle, but influenced incrementally by the intelligence arising within it. In this paradigm, developmental failures are always possible. But curiously, they are rarer, in a statistical sense, the longer any developmental process successfully proceeds. Just look at the data for spontaneous abortions in human beings, which are increasingly rare after the first trimester, to see one obvious example.

But even if all this speculation is true, we must realize that this says little about our evolutionary role. Remember, life greatly cherishes variation. There is probably a very deep computational reason why there are six billion discrete human beings on the planet right now, rather than one unitary multimind. Consider that every one of the developmental intelligences in this universe is, right now, taking its own unique path down the rabbit hole, and they are all separated by vast distances, planted very widely in the field, so to speak, to carefully preserve all that useful evolutionary variation. I find that quite interesting and encouraging. Free will, or the protected randomness of evolutionary search at the "unbounded edge" between chaos and control in complex systems, always seems to be central to the cycle at every scale in universal systems.

Now it is appropriate to consider another commonly-asked question with regard to these dynamics. How likely is it, by becoming aware of a cosmic replication cycle and our apparent role in it, that we might alter the cycle to any appreciable degree?

To answer this, it may also be helpful to realize that complex adaptive systems are always aware that many elements of their world are constrained to operate in cycles (day/night, wake/sleep, life/death, etc.). So it's only an extension of prior historical insight if we soon discover that our universe is also constrained to function in the same manner. It may help to remember that long before human society had theories of progress (after the 1650's), and of accelerating progress (after the singularity hypothesis, beginning in the 1900's), cyclic cosmologies and theories of social change were the norm. Even a mating salmon is probably very aware of its own impending demise in the cycle of life. Salmon certainly expend their energy in ways that are entirely purposeful in that regard.

But awareness of a cycle, in any of these or other examples, does not allow us to escape it. Or if we think we do, as in transferring our biological bodies to cybernetic systems to avoid biological death, we will likely discover that the same life/death cycle continues to operate at the scale we hold most dear, which by then will no longer be our physical bodies, but the realm of our higher thoughts, perennially struggling in algorithmic cycles of evolutionary development, death and life, erasure and reconstitution. As personal development theorist Stephen Covey (Seven Habits of Highly Effective People, 1990) is fond of saying, you cannot break fundamental principles, or laws of nature. You can only break yourself against them, if you so choose. So it is that I don't have any expectation that our local intelligence could be successful in escaping the cosmic replication cycle. I think that insight is valuable for predicting several aspects of the shape of the future.

For example, every scenario that has ever been written about humans "escaping to the stars" ignores the accelerating intelligence that would occur onboard the ship. Such civilizations must lead, in a very short time, to technological singularities and, in the developmental singularity hypothesis, to universal transcension. As Vernor Vinge says, it is very hard to "write past the singularity," and in this regard he has referred both to technological and developmental types.

Alternative scenarios of constructing signal beacons, or nonliving, fixed-intelligence robotic probes to spread an Encyclopedia Galactica, as Carl Sagan once proposed, ignore the massive reduction in evolutionary variation that would result. This strategy would effectively turn that corner of the galaxy into an evolutionarily sterile monoculture, condemning all intelligent civilizations in the area to go down the hole in the same way we did, and all developmental singularities in the vicinity to be of the same type. If I am right, our information theory will soon be able to conclusively prove that all such one-way communications can only reduce total universal complexity, and are to be scrupulously avoided.

In conclusion, I don't think we can get around cyclic laws of nature, once we discover them. But they can give us deep insight into how to spend our lives, how to surf the tidal waves of accelerating change toward a more humanizing, individually unique, and empowering future.

Much of this sounds quite fantastical, so let me remind you that these are speculative hypotheses. They will stand or fall based on much more careful scientific investigation in coming years. Attracting that investigation is one of the goals of our organization.

If, as Ray Kurzweil has suggested, intelligence is developing on its own trajectory—first in a biological substrate and now in computers—is there an inevitability to the singularity that makes speculating about it superfluous? Is there really anything we can do about it one way or the other?

Certainly you can't uninvent math, or electricity, or computers, or the internet, or RFID, once they arrive on the scene. Anyone who looks closely notices a surprising developmental stability and irreversibility to the acceleration.

But we must remember that developmental events are only "statistically deterministic." They often occur with high probability, but only when the environment is appropriate. Developmental failure, delay, and less commonly, acceleration can also occur.

Speaking optimistically, I strongly suspect that there is little we could do to abort the singularity, at this very late stage in its cosmic development. It appears to me that we live in a "Child Proof Universe," one that has apparently self-organized, over many successive cycles, to keep many of the worst destructive capacities out of the hands of impulsive children like us.

This is a controversial topic, so I will mention it only briefly, but suffice it to say that after extensive research I have concluded that no biological or nuclear destructive technologies that we can presently access, either as individuals or as nations, could ever scale up to "species killer" levels. All of them are sharply limited in their destructive effect, either by our far more complex, varied, and overpowering immune systems, in the biological case, or by intrinsic physical limits—combinatorial explosion of complexity in designing multistage fission-fusion devices—in the nuclear weapons case. These destructive limits may exist for reasons of deep universal design. A universe that allowed impulsive hominids like us an intelligence-killing destructive power wouldn't propagate very far along the timeline.

Speaking pessimistically, I'm sure we could do quite a bit to delay the transition, by fostering a series of poorly immunized catastrophes. If events take an unfortunate and unforesighted turn, our planet might suffer the death of a few million human beings at the hands of poorly secured and monitored destructive technologies, perhaps even tens of millions, in the worst of the credible terrorist scenarios. But I am of the strong opinion that we will never again see the 170 million deaths, due to warfare and political repression, that occurred during the 20th century. See Zbigniew Brzezinski's Out of Control, 1995, for an insightful accounting of the excesses of that now fortunately bygone era. We are on the sharply downsloping side of the global fatality curve, and we can thank information and communications technologies for that, more than any other single factor in the world.

Today we live in an era of instant news, electronic intelligence, and violence that is increasingly surgically minimized by an increasingly global consensus. Even with our primitive, clunky, first-generation internet and planetary communications grid, I feel our planet's technological immune systems have become far too strong and pluralistic, or network-like, for the scale of political atrocities of the twentieth century to ever recur. Yet conflict and exploitation will continue to occur, and we could certainly choose a dirty, self-centered, nonsustainable, environmentally unsound approach to the singularity. Catastrophes can and will continue to recur. I hope for all our sakes that they are minimized, and that we learn from them as rapidly and thoroughly as possible.

Unlike a small minority of aggressive transhumanists, I applaud the efforts we are making to create a more ecologically sustainable, carefully regulated world of science and technology. Wherever we can inject values, sensitivity, and accountability into our sociotechnological systems, I think that is a wonderful thing. I'd love to see the U.S. take a greener path to technology development, the way several countries in Europe have. I'm also pragmatic in realizing that most social changes we make will be more for our own peace of mind, and will have little effect on the intrinsic speed of our global sci-tech advances, on the rate of the increasingly human-independent learning going on in the ICT architectures all around us.

I consider such moves to be more reflections on how we walk the path, choices that will in most cases do very little to delay the transition. I also do not think it is valuable to hold the perspective that we should get to the singularity as fast as we can, if that path would be anything other than a fully democratic course. There are many fates worse than death, as all those who have freely chosen to die for a cause have realized over the centuries. There are many examples of acceleration that come at unacceptable cost, as we have seen in the worst political excesses of the twentieth century. No one of us has a privileged value set.

So perhaps most importantly, we need to remember that the evolutionary path is what we control, not the developmental destination. That's the essence of our daily moral choice, our personal and collective freedom. We could chart a very nasty, dirty, violent, and exploitative path to the singularity. Or with good foresight, accountability, and self-restraint, we could take a much more humanizing course. I am a cautious optimist in that regard.

Christine Peterson recently told me that artificial intelligence represents the one future development about which she has the most apprehension. It can come the closest of any scenario to Bill Joy's "the future that doesn't need us." If the coming of the singularity means the ascendancy of machine intelligence and the end of the human era, shouldn't we all be doing what we can to prevent it from happening?

Ah yes, the Evil Killer Robots scenario. Some of my very clever transhumanist colleagues worry quite a bit about "Friendly AI." I'm glad to have friends who are carefully exploring this issue, but from my perspective their worries seem both premature and perhaps overstated. I strongly suspect that A.I.s, by virtue of having far greater learning ability than us, will be, must be, far more ethical than us. That is because I consider ethics to be an emergent computational interdependence, a mathematics of morality, a calculus of civilization that is invariably discovered by all complex adaptive systems that function as collectives. And anything worthy of being called intelligent always functions as a collective, including your own brain. Today's cognitive scientists are discovering the evolutionary ethics that have become self-encoded in all known complex living systems, from octopi to orangutans, from guppies to gangsters. For more on this intriguing perspective, see such works as Robert Axelrod's The Evolution of Cooperation, 1985, Matt Ridley's The Origins of Virtue, 1998, and Robert Wright's Nonzero, 2001.
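To make that "calculus of civilization" a little more concrete, here is a minimal Python sketch of the classic Axelrod result. (An editorial illustration, not code from Axelrod's actual tournaments: the payoff table is the standard Prisoner's Dilemma, and the two strategies are the textbook Tit-for-Tat and Always-Defect.)

```python
# In a repeated Prisoner's Dilemma, the cooperative-but-retaliatory
# Tit-for-Tat strategy prospers, while unconditional defection stagnates.

PAYOFF = {  # (my move, their move) -> my points; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each player sees the other's history
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (600, 600): full cooperation
print(play(always_defect, always_defect))  # (200, 200): mutual defection
print(play(tit_for_tat, always_defect))    # (199, 204): defection wins
                                           # narrowly head to head
```

Head to head, defection ekes out a small win, but in any population of repeat players the cooperators accumulate far more; that interdependence is the computational core of the claim that collectives discover ethics.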

This optimism isn't enough, of course. We humans had to go through a nasty, violent, and selfish phase before we became today's semi-civilized simians. How do we know computers won't have to do the same thing? I think the answer is that at one level, Peterson's intuitions are probably right. Tomorrow's partially-aware robotic systems and A.I.s will have to go through a somewhat unfriendly, dangerous phase of "insect intelligence." As Jeff Goldblum reminded us in David Cronenberg's The Fly, insects are brutal; they don't compromise, they don't have compassion. Their politics, as E.O. Wilson's Sociobiology, 1975/2000, reminds us, are quite comfortable with brute force. That's a potentially dangerous developmental stage for an A.I. You wouldn't want that kind of A.I. running your ICU, or your defense grid. Or your nanoassembler machines.

But you would very likely let such a system run the robotics in a manufacturing plant, especially if evolutionary systems have proven, as they are already demonstrating today, to be far more powerfully self-improving, self-correcting, and economical than our top down, human-designed software systems. That plant, of course, would be outfitted and embedded within a much larger matrix of technological fire extinguishers, an immune system capable of easily putting out any small fires that might develop.

But with a learning curve that is multi-millionfold faster than ours, I expect that "insect transition" to last weeks or months, not years, for any self-improving electronic evolutionary developmental system. You can be sure these systems will be well watched over by a bevy of A.I. developers, and that those few catastrophes that do occur will be carefully addressed by our cultural and technological immune systems. It's easy to underestimate the extent and effectiveness of immune systems; they aren't obvious or all that sexy, but they underlie every intelligent system you can name. Computer scientist Diana Gordon-Spears and others have already organized conferences on "Safe Learning Agents," for example, and we have only just begun to build world-modeling robotics. We're still several decades away from anything self-organizing at the hardware level, anything that could be "intentionally" dangerous.

We also need to remember that humans will be practicing artificial selection on tomorrow's electronic progeny. That is a very powerful tool, not so much for creating complexity, but for pruning it, for ensuring symbiosis. We've had 10,000 years of artificial selection on our dogs and cats. Their brain structures are black boxes to us, and yet we find very few today that will try to grab human babies when the parents are not looking. Again, those few that do are taken care of by immune systems (we don't continue to breed such animals, statistically speaking).
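A toy simulation can show how powerful pruning is even when the system being selected is a black box. (Illustrative only: the single heritable "aggression" number standing in for a genome is an invented simplification, not a model from the interview.)

```python
# Artificial selection as pruning: breeders never open the genomic black
# box; they simply refuse to breed from individuals showing the unwanted
# trait, and the trait's average level falls anyway.
import random

random.seed(42)

# Each individual reduces to one heritable "aggression" score in [0, 1].
population = [random.random() for _ in range(1000)]

for generation in range(10):
    # Pruning: refuse to breed from the most aggressive quarter.
    breeders = sorted(population)[: int(len(population) * 0.75)]
    # Offspring inherit a parent's score plus small random variation.
    population = [
        min(1.0, max(0.0, random.choice(breeders) + random.gauss(0, 0.02)))
        for _ in range(1000)
    ]
    print(generation, round(sum(population) / len(population), 3))

# Mean aggression declines generation after generation, even though no
# individual "brain structure" was ever designed or inspected.
```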

In short, I expect human society will coexist with many decades of very partially aware A.I.s, beginning sometime between 2020 and 2060, which will give us ample time to select for stable, friendly, and very intimately integrated intelligent partners for each of us. Hans Moravec (Robot, 1999) has done some of the best writing in this area, but even he sometimes underestimates the importance of the personalization that will be involved. As a species, humanity would not let the singularity occur as rapidly as it will without personally witnessing the accelerating usefulness of A.I. interacting with us in all aspects of our lives, modeling us through our LUI systems, lifecams, and other aspects of the emerging electronic ecology.

By contrast, every scenario of "fast takeoff" A.I. emergence that I've ever seen (the heroic individual toiling away in the lab at night to create HAL-9000) fails to appreciate the immense cycles of replication, variation, interaction, selection, and convergence in evolutionary development that are always required to create intelligence, in both a bottom-up and top-down fashion. Since the 1950s, almost all the really complex technologies we've created have required teams, and there is presently nothing in technology that is even remotely as complex as a mammalian brain.

As I mention on my website, I think we are going to have to see massively parallel hardware systems, directed by some type of DNA-equivalent parametric hardware description language, unfolding very large, hardware-encoded neural nets and testing them against digital and real environments in very rapid evolutionary developmental cycles, before we can tune up a semi-intelligent A.I. The transition will likely require many teams of individuals and institutions, integrating bottom-up and top-down approaches, and be primarily a hardware story, and only secondarily a software story, for a number of reasons.
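A drastically compressed caricature of that evolutionary developmental cycle might look like the sketch below, with a bitstring standing in for the DNA-equivalent hardware description and a trivial bit-count standing in for the environmental test. (Both are placeholders chosen only to keep the example runnable; real evo-devo hardware would grow and score actual neural nets.)

```python
# Generate -> unfold -> test -> select -> vary, repeated rapidly.
import random

random.seed(0)
GENOME_LEN, POP, GENERATIONS = 64, 50, 60

def fitness(genome):
    # Stand-in "environment test": count of 1-bits. A real system would
    # score a grown network's behavior in digital or physical worlds.
    return sum(genome)

def mutate(genome, rate=0.02):
    # Variation: flip each bit with small probability.
    return [bit ^ (random.random() < rate) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 5]  # selection (pruning)
    population = [mutate(random.choice(survivors)) for _ in range(POP)]

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: "
      f"{fitness(best)}/{GENOME_LEN}")
```

Nothing here is designed top-down; the population simply climbs toward the test, which is the sense in which such systems are "grown" rather than written.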

Bill Joy, in Wired (12.2003), notes that we can expect a 100X increase (6-7 doublings) in general hardware performance over the next ten years, and a 10X increase in general software (e.g., algorithmic) performance. While certain specialized areas, such as computer graphics chips, may run faster (or slower), on average this sounds about right. Note the order of magnitude difference between the two domains. Hardware has always outstripped software because, as I've said earlier, it seems to be following a developmental curve that is more human-discovered than human-created. It is easier to discover latent efficiencies in hardware than in software "phase space," because the search is much more directed by the physics of the microcosm. Teuvo Kohonen, one of the pioneers of neural networks, tells me that he doesn't expect the neural network field to come into maturity until most of our nets are implemented in hardware, not software, a condition we are still at least a decade or two away from attaining.
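The arithmetic behind those figures is easy to check (a quick editorial sanity check, not Joy's own calculation):

```python
import math

# Joy's figures as quoted above: ~100X hardware, ~10X software, per decade.
hw_doublings = math.log2(100)              # ~6.64, the "6-7 doublings" cited
months_per_doubling = 120 / hw_doublings   # ~18 months, the classic Moore pace
sw_doublings = math.log2(10)               # ~3.32 doublings for software

print(f"hardware: {hw_doublings:.2f} doublings, "
      f"one every {months_per_doubling:.0f} months")
print(f"software: {sw_doublings:.2f} doublings in the same decade")
```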

The central problem is an economic one. No computer manufacturer can begin to explore how to create biologically-inspired, massively parallel hardware architectures until our chips stop their magic annual shrinking game and have become maximally-miniaturized (within the dominant manufacturing paradigm) commodities. That isn't expected for at least another 15 years, so we've got a lot of time yet to think about how we want to build these things.

If I'm right, the first versions of really interesting A.I.s will likely emerge on redundant, fault-tolerant, evolvable-hardware "Big Iron" machines that take us back to the 1950s in their form factor. Expect some of these computers to be the size of buildings, tended by vast teams of digital gardeners. Dumbed-down versions of the successful hardware nets will be grafted into our commercial appliances and tools: mini-nets built on a partially reconfigurable architecture, systems that will regularly upgrade themselves over the Net. But even in the multi-millionfold faster electronic environment, a bottom-up process of evolutionary development must still require decades, not days, to grow high-end A.I. And primarily top-down A.I. designs are just flat wrong, ignorant of how complexity has always emerged in physical systems. Even all of human science, which some consider the quintessential example of a rationally-guided architecture, has been far more an inductive, serendipitous affair than a top-down, deductive one, as James Burke (Connections, 1995) delights in reminding us.

So, when one of the first-generation laundry-folding robots in 2030 folds your cat by accident, we'll learn a tremendous amount about how rapidly self-correcting these systems are, how quickly, with minor top-down controls and internet updates, we can help them improve their increasingly bottom-up created brains. Unlike today's still-stupid cars, for example, which currently participate in 40,000 American fatalities every year, tomorrow's LUI-equipped, collision-avoiding, autopiloting vehicles will become more human-friendly and human-protecting every year. This encoded intelligence, this ability to ensure increasingly desirable outcomes, is what makes a Segway so fundamentally different from a bicycle. Segway V, if it arrives, would put out a robotic hand or an airbag to protect you from an unexpected fall. So it will be with your PDA of 2050, but in a far more generalized sense.

In a related point, I also wouldn't worry too much about the loss of our humanity to the machines. Evolution has shown that good ideas always get rediscovered. The eye, for example, was discovered at least thirty times by otherwise very divergent genetic pathways. As Simon Conway Morris eloquently argues (Life's Solution, 2003), every single aspect of our human-ness that we prize has already been independently emulated to some degree by the various "nonhuman" species we find on this planet. Octopi are so smart, for example, that they build houses, and learn complex behavior (e.g., jar-opening) from each other even when kept in adjacent aquaria.

This leads us to a somewhat startling realization. Even if, in the most abominably unlikely of scenarios, all of humanity were snuffed out by a rogue A.I., from a developmentalist perspective it seems overwhelmingly likely that good A.I.s would soon emerge to recreate us. Probably not in the "Christian rapture" scenario envisioned by transhumanist Frank Tipler in The Physics of Immortality, 1997, but certainly our informational essence, all that we commonly hold dear about ourselves.

How can we even suspect this? Humanity today is doing everything it can to unearth all that came before us. It is in the nature of all intelligence to want to deeply know its lineage, not just from our perspective, but from the perspective of the prior systems. If the world is based on physical causes, then to truly know that one understands the world, one must know, and be able to understand at the deepest level, the systems in which one is embedded, the systems from which one has emerged, in a continuum of developmental change. The past is always far more computationally tractable than what lies ahead.

That curiosity is a beautiful thing, as it holds us all tightly interdependent, one common weave of the spacetime fabric, so to speak.

That's why we are already spending tens of millions of dollars a year trying to model the way bacteria work, trying to predict, eventually in real-time, everything they do before they even do it, so that we know we truly understand them. That's why emergent A.I. will do the same thing to us, permeating our bodies and brains with its nanosensor grids, to be sure it fully understands its heritage. Only then will we be ready to make the final transition from the flesh.

Also on your website, I read that the singularity will occur within the next 40 to 120 years. Isn't that kind of a broad range? What's your best guess on when it will occur?

I find that those making singularity predictions can be usefully divided into three camps: those predicting near term (now to 2029), mid-term (2030-2080), and longer term (2081-2150+) emergence of a generalized greater-than-human intelligence. Each group has somewhat different demographics, which may be interesting from an anthropological perspective.

I think the range is so broad because the future is inherently unpredictable and under our influence. It is also true that none of us has yet developed a popular set of quantitative methodologies for thinking rigorously about these things. Very little money or attention has been given to them. If you'd like to send a donation to our organization to help in that regard, let us know.

From my website: "Most estimates in the singularity discussion community, intuitive as they all are at this early stage, project a generalized human-surpassing machine intelligence emerging circa 2040, give or take approximately 20 years. This puts many singularitarians on the 2020 end, and several of the older, more conservative prognosticators on the 2060 end. My own early guesstimation leads me to expect a circa 2060 singularity, though my confidence interval is wide (20 years per standard deviation) as I believe the arrival depends, within a human generation or two either way, on the choices we make. To significantly accelerate its arrival, most important may be our political, economic, social, and personal choices in regard to science and technology education, innovation, research, and development. To significantly delay its arrival, we have many more possibilities, none of which I need go into here."

Using this simple model, I feel 68 percent confident that it will happen between 2040 and 2080, and 95 percent confident it will occur between 2020 and 2100. But again, these are only rough estimates at this stage. A very large number of mostly bottom-up and secondarily top-down innovations in hardware, and to a lesser degree, software, will apparently be needed. As we approach this fantastic challenge, we will certainly also continue to gain major insights from top-down theory and bottom-up experimentation in such fields as neuroscience, cognitive science, and evolutionary developmental biology, as well as numerous other domains I discuss under degree programs for singularity studies.
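Taken literally, that simple model is a normal distribution with mean 2060 and a 20-year standard deviation, and the quoted intervals are just its one- and two-sigma bands:

```python
# Smart's stated model: arrival date ~ Normal(mean=2060, sd=20 years).
MEAN, SD = 2060, 20

for sigmas, coverage in [(1, "68%"), (2, "95%")]:
    print(f"{coverage} confidence: {MEAN - sigmas * SD} to {MEAN + sigmas * SD}")
# 68% confidence: 2040 to 2080
# 95% confidence: 2020 to 2100
```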

Do you take the position that we can make no meaningful statements about what may happen after the singularity occurs? Or, if we can at least speculate about it, what is your best guess as to what life will be like in a post-singularity world?

As I've described above, I think that there are a number of simple, global statements we can make about the developmental course that the universe must take after the singularity emerges. It seems a very good bet, for example, that tomorrow's technological intelligences will be fully constrained by the laws of physics in this universe, both the majority that I feel are known and that much smaller set that remains undiscovered. That constraint already tells us volumes about what they'll be doing in their exploration of our increasingly informationally and energetically barren universe.

I think Steven Weinberg (Dreams of a Final Theory, 1993) is right, that we are within just a few decades (or perhaps generations) of understanding all the functional elements at the bottom end of this finite universe. And I think Lee Smolin and the string and M-theorists are right (Three Roads to Quantum Gravity, 2002), that we are close to an understanding of the large-scale structure of spacetime, and to unifying it with the quantum domain. All that will remain at that point, as Ian Stewart and Paul Davies would say, is what's left in the middle: not the zone of the very large, or of the very small, but of the "very complex," the unique combinations that accelerating computational systems can construct locally out of the universal rules and forces that we are stuck with. I strongly suspect that tomorrow's A.I.s will be unable to generally reverse entropy within this universe. They'll likely find it impossible to engage in time travel within this universe. The same goes for many of the other extreme and causally illogical things we've occasionally heard from mathematical physicists and sci-fi authors with active imaginations.

As I've mentioned before, I think they'll be constrained to be ethical, to be information seekers, and to rapidly enter a black hole transition (the developmental singularity hypothesis). But this tells us little about the evolutionary uniqueness of their path, other than that it will have intricacies within it that we cannot comprehend.

We'll also have plenty of decades to see if persuasive computing, personality capture and the humanizing AI scenario emerges, as described earlier, long before the singularity occurs. If machine intelligence does develop along the lines predicted, I think it's pretty clear that when the A.I. arrives, they will be we, just natural extensions of ourselves. In that world, as Hans Moravec was perhaps the first to remind us (Mind Children, 1988), it seems very likely that all local intelligence will jump to a postbiological domain. Soon after that, I suspect, we may transition to a postuniversal domain.

That seems a very natural transition, to me.

You’ve placed a good deal of emphasis on academia, specifically on degree programs related to the study of the singularity.  Why is this so important?

To develop any kind of foresight, we need to study. If the biological sciences have taught us anything in the last century, it's that the difference between evolution and development in living systems is one of the last great mysteries. With careful effort, we will tease out that special, simple, developmental component, and understand how development uses evolution in all complex systems.

I believe developmental insights in a wide range of fields will revolutionize the study of accelerating change. We need an Einstein of Information Theory, someone who can place what Damien Broderick (The Spike, 2002) and I call singularity studies on a broad academic foundation, and attract many bright minds to the study of the amazing transition ahead. That won't be me, as I don't have all the quantitative and qualitative skills that I think will be necessary. But I can play Galileo to someone else's Newton.

Academia isn't the only solution to charting a safe singularity, but in partnership with government, business, and dedicated individuals it is one of the important pieces of the puzzle.

When I heard you speak recently, I was surprised by what you had to say on the question of whether we’re alone in the universe.  In the end, do you think that our universe will be occupied by any intelligence other than human intelligence or its descendants?

As I've mentioned earlier, I think all universal intelligence follows a path of transcension, not expansion. This has to do with such issues as the nature of communication in complexity construction (two-way, with feedback, is relentlessly preferred), the large scale structure of the universe (which puts huge space buffers between intelligences) and the small scale structure of the universe (which rewards rapid compression of the matter, energy, space, and time necessary to do any computation).

Fortunately, this perspective is quite falsifiable by future advances with SETI. If I'm right, in just a few more decades, as the Moore's law-driven sensitivity of our sensor systems continues its exponential growth, we'll begin discovering "radio fossils" in the night sky: very weak electromagnetic signals (radio, TV, etc.) unintentionally leaked from older intelligence-bearing planets, whose past developmental record should already be detectable in our galaxy.

We began sending such signals out to space with the birth of powerful radio in the 1920s. If we assume our civilization enters a developmental singularity circa 2150, after which transmissions cease, this allows an average of 200 years of transmission time, out of a stellar lifetime of 12 billion years. Seth Shostak has estimated 400 billion sunlike stars in our galaxy, and we will assume half of these, 200 billion, harbor Earth-like planets. Two-thirds of these planets are older than our Earth, closer to the galactic core, and so further along in their technological development than we are today. That gives (200/12 billion) * 200 billion * 2/3 = 2,200 radio fossils patiently waiting to be discovered in the night sky. I've described this further in a short 2002 Journal of Evolution and Technology article on the Fermi Paradox, so I refer you to that if you'd like to further explore these interesting ideas.
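That back-of-envelope estimate is easy to reproduce; every input below is one of the stated assumptions, not a measurement:

```python
# The radio-fossil estimate from the paragraph above, spelled out.
transmission_years = 200            # ~1920s radio until a ~2150 transcension
stellar_lifetime_years = 12e9
sunlike_stars = 400e9               # Shostak's galactic estimate
earthlike_fraction = 0.5            # half assumed to harbor Earth-like planets
older_fraction = 2 / 3              # fraction further along than Earth

radio_fossils = (transmission_years / stellar_lifetime_years) \
    * sunlike_stars * earthlike_fraction * older_fraction
print(round(radio_fossils))         # ~2222, the "2,200" quoted above

# With a 200-year transmission window, roughly 1/200th of the active
# fossils should also wink out each year, as noted in the next paragraph.
```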

Once our antennas are powerful enough to detect unintentional EM emissions from the closest few million stars, something that Frank Drake tells me is almost possible now with the closest of our neighboring stars, we'll begin to discover these unmistakable signatures of nonrandom intelligence. We will also notice that every year, a small fraction (roughly 1/200th) of these radio fossils suddenly stop sending signals. Like us, these will be civilizations whose science invariably discovers that the developmental future of universal intelligence is not outer space, but inner space.

That's the destiny of species.

 

[ Thanks to Elen Burton, Jose Cordeiro, Ryan Elisei, Michael Hartl, Neil Jacobstein, John Peterson, Chris Phoenix, Wayne Radinsky, and Wendy Schultz for valuable comments and ideas. ]

John Smart is a developmental systems theorist who studies science and technological culture with an emphasis on accelerating change, computational autonomy and a topic known in futurist circles as the technological singularity. He is chairman of the nonprofit Institute for Accelerating Change (IAC) whose websites (Accelerating.org, SingularityWatch.com) aim to help individuals better understand and manage the physical and computational phenomenon of accelerating change. John lives in Los Angeles, CA and may be reached at feedback{at}accelerating.org.

If you have an interest in a multidisciplinary understanding of accelerating change, you are invited to join IAC's free quarterly newsletter, Accelerating Times.


UPDATE: John Smart has published a nicely organized and illustrated version of this interview on the Institute for Accelerating Change website.

Also see Speaking of the Future with...

Rand Simberg | Nina Paley | Phil Bowermaster | Michael Anissimov | Ramona | Robert Zubrin | Alex Lightman | Aubrey de Grey

Posted by Phil at December 4, 2003 04:45 PM
Comments

I'm still breathless just having read your article. It reflects the intuitions I inherited from my physicist father. I've been an electrician for 30 years and still looking for another "job". This article is poetry.

Posted by: David Brooks McLane at April 2, 2004 02:23 PM