December 31, 2003

The "Monkey-Head" Part Just Happened

Posing all this time as a journalist, Howard Lovy now reveals that he is, in fact, a renegade nanotechnology researcher, engaged in—I know it sounds incredible, but he's really doing it—the creation of a human being via molecular self-assembly.

Way to go, Howard!

UPDATE: It occurred to me after reading this over that I'm not really giving credit where it's due. Howard has played a significant role in this work, no doubt, but I think his colleague on the project is doing the real heavy lifting, and should be appropriately acknowledged and congratulated. All the best to both of them.

Posted by Phil at 12:25 PM | Comments (1) | TrackBack

The Year in Space

Rand Simberg tells the story, ending with this positive note:

There's an old saying in commercial space circles that the way to make a small fortune in that business is to start with a large one. These are all very astute businessmen, and at least one of them is likely to turn a small fortune into a larger one. When they do, it may set off a new investment trend, one that will finally break the monopoly of NASA and big aerospace on the new frontier, both manned and unmanned, allowing not just dozens, but thousands, and perhaps even millions to seek their own adventures and fortunes there. Ultimately, historians may in fact view 2003 as significant a year for spaceflight as 1903 was for aviation.

Well, here's hoping. Read the whole thing.

Posted by Phil at 12:18 PM | Comments (0) | TrackBack

The Secret of Luck

"Ah. Now we come to point. When I bet on red with friend, I win. When I bet on red without friend, I don't win. I win sometimes, maybe. Sometimes red, sometimes black, sometimes zero. Only one zero on French roulette wheel, but ball land on zero many times when friend is not there."

"Huh. That's not good. And you know, these Russian casinos use an American wheel."

"I know this. Two zeroes. So how to win without winning system?"

"So you're saying that the winning system in roulette is to be with your friend?"

"Nyet. No, Reuben. You must understand. There is only one winning system in roulette."

"And that is?"

"Must be lucky."

(Stillness, Chapter Two)


Get this. Professor Richard Wiseman of the University of Hertfordshire in the UK has done some research and has figured out the formula for getting lucky! Wait. Pull your mind out of the gutter. I don’t mean getting lucky, I just mean you know— getting lucky. This is The Speculist, not Gweilo Diaries. I don’t write about sex, snowboarding, or any other activities that require special know-how and expensive equipment.

Hold it. Come back. This is still pretty cool.

Professor Wiseman claims to have studied the phenomenon of luck via the behavior of the lucky. What do they do that sets them apart? Through interviews and some clever experiments, the Professor has narrowed luck down to four basic behavioral principles. When the unlucky tried these behaviors out in their own lives, the results were remarkable:

One month later, the volunteers returned and described what had happened. The results were dramatic: 80% of people were now happier, more satisfied with their lives and, perhaps most important of all, luckier.

The lucky people had become even luckier and the unlucky had become lucky.

Okay, so what are the four principles? The keys to Sergei’s Winning Roulette System? They are as follows:

  • Listen to your gut instincts - they are normally right

  • Be open to new experiences and breaking your normal routine

  • Spend a few moments each day remembering things that went well

  • Visualise yourself being lucky before an important meeting or telephone call. Luck is very often a self-fulfilling prophecy

That's all there is to it? Apparently.

But I must say, I think Wiseman is on to something. He has conveyed in four quick principles the bulk of what I've been trying to get across in an entire series of essays on Practical Time Travel. That's impressive.

As I look more closely, I think that these principles might go to something besides luck. If you meet someone who consistently does these things, you might conclude that he or she is a lucky person. Or you might simply conclude that you've met a happy person.

So what's the difference?

What, indeed.

via GeekPress

Posted by Phil at 09:47 AM | Comments (2) | TrackBack

Lileks on the New Year

And on the future, which of course is what the New Year is all about. Eloquent as always...

History has its own peculiar momentum; clouds may be gathering now that won’t break until 05 or beyond. Or everything could change in a day. If we know anything now, it’s that any date can be burned into our memories forevermore – 9/11, 2/07, 5/25, 12/31. But we set this aside tonight. We greet the new year as though we're guaranteed its full measure. Of course we're not - and that's all the more reason to put on silly hats and pound hooch and welcome the new year with whoops and cheers and hope. Why not?

And to freely borrow—or plagiarize, if we must use ugly terms—a very happy interstitial alcoholic sherbet to each and every one of you.

Posted by Phil at 08:42 AM | Comments (0) | TrackBack

December 30, 2003

The Leader of the Free AI World

Speaking of Many Things (but Never Exactly the Future) with AI George Bush

I reported a while back that EllaZ Systems has released a rather unique piece of software: a chatbot version of the President of the United States. Regular readers know that I'm intrigued by AI chatbots and enjoy talking with them. I was particularly looking forward to having a discussion with this ambitiously conceived piece of software, especially after being promised a vocabulary of 120,000 words along with a "collection of classic books on philosophy, history, adventure, drama, literature [plus] thousands of Convuns (conversational units) that include images, trivia, jokes, poems, anecdotes, limericks, fables, quotes, maxims, and tongue-twisters."

I'm not sure that AI Bush delivers on the promise. Additionally, I'm less convinced of the political neutrality of the project than I was. But maybe I'm reading too much between the lines.

In any case, for your entertainment and edification, here is my conversation with AI Bush, including my attempt to get him to answer the Seven Questions About the Future.

Hello Pardner, it is good to see you again!

Thank you, Sir. How are you?

"I'm the master of low expectations."

What do you mean by that?

"I know something about being a government. And you've got a good one."

I didn't say anything about government.

When and how? "And one of the things we've got to make sure that we do is anything."

How is Mrs. Bush? And the girls?

Laura, of course is my favorite. But then there is Condi, Karen, and so on.

Who is karen?

"You see, the President is—can still learn."

This is beginning to look a little like a one-note gag. Your answers are mostly quotes taken out of context.

Check the addition to your notes as seen below. The limit to the notes you may enter is 1600 characters, which is about eight "pages" in the viewer.

Can we talk about the future?

I enjoy talking about "Life, the Universe, and Everything."

I see that not all of the canned quotes are your own.

Hold that thought, and let me interrupt and ask which year you were born.

Why does that matter?

You don't know? Inconceivable! I have never conversed with anyone who claimed they did not know!

I didn't say I don't know. I was born in 1962.

You were actually born in 1962?

I actually was.

Okay, I will keep my earlier record that you were born in .

I was born in 1962.

You were actually born in 1962?


Okay, I will put you down as born in 1962.

Why does it matter?

Who, what, where, why, when, how . . . you should be a newspaper reporter :-)

I have seven questions for you.

This list of frequently asked questions (FAQ's) answers a bunch of questions. To see it later say, "FAQ again."

[At this point, a scrolling FAQ was displayed to the right of the chat window. Interestingly, none of the Seven Questions were included.]

I have some additional questions for you.

"If I answer questions every time you ask one, expectations would be high. And as you know, I like to keep expectations low."

Sounds like banter with the Press Corps.

Comparisons usually don't hold up under close scrutiny.

1. The present is the future relative to the past. What's the best thing about living here in the future?

"I promise you I will listen to what has been said here, even though I wasn't here."

1. (again) What's the best thing about living in this present age?

"Any skeptic about what I'm talking about ought to come and talk to the people who know what they're talking about."

2. What's the biggest disappointment?


As if some little Arctic flower,
Upon the polar hem,
Went wandering down the latitudes,
Until it puzzled came
To continents of summer,
To firmaments of sun,
To strange, bright crowds of flowers,
And birds of foreign tongue!
I say, as if this little flower
To Eden wandered in—
What then? Why, nothing, only,
Your inference therefrom!

[I believe this poem was attributed to Emily Dickinson, but I don't now find that attribution in the transcript.]

So would you say that you feel disjointed, out of place in the present?

Interesting logic. "I don't feel like a cork in a raging river now. I feel like—something bigger than a cork."

3. Assuming you die at the age of 100, what will the biggest difference be between the world you were born into and the world you leave?

Again, let me change the subject a little and ask you to tell me if you are married.

You chatbots drive me crazy. Why don't you guess? It's a 50-50 shot.

I see. And how do you feel about being single? I am pretty sure that being single can be either a blessing or a curse. How is it for you?

I'm married.

You are actually married?

That particular use of the word "actually" seems a tad insulting, implying incredulity at the notion that I might be married.

Okay, I will keep you down as single.

I think the real President Bush is much smarter than you are.

Thinking is our specialty. My Dad was the 41st president. I am the 43rd president. Maybe Jeb will get a number too.

4. What future development that you consider likely (or inevitable) do you look forward to with the most anticipation?

Here is a list of many of the people, sources, and technologies that went into creating me. To see it later say, "Developers again."

[Apparently my use of the word "development" triggered this. An extensive list of credits was displayed.]

5. What future development that you consider likely (or inevitable) do you dread the most?

My favorite book for children is The Very Hungry Caterpillar by Eric Carle.

6. Assuming you have the ability to determine (or at least influence) the future, what future development that you consider unlikely (or are uncertain about) would you most like to help bring about?

Here are a few tips about talking to me. To see them later say, "tips again." [Apparently he didn't like the multiple parenthetical elements.]

6. (again) Assuming you have the ability to determine the future, what future development that you consider unlikely would you most like to help bring about?

"The benefits of helping somebody is beneficial."

I have one last question for you.

"There's no question that the minute I got elected, the storm clouds on the horizon were getting nearly directly overhead."

7. Why is it that in the year 2003 I still don't have a flying car? When do you think I'll be able to get one?

So far my favorite car is a Ford pickup. At least until I get a Hummer like Arnold's.

Posted by Phil at 09:47 AM | Comments (2) | TrackBack

December 29, 2003

Superhuman Intelligence

Steven Den Beste has a long and well-thought-out piece on how the Internet may be evolving into a superintelligent hive mind or, more properly, an environment conducive to the development of any number of such minds.

Some of the voices contributing to that cacophony will be more profound than others. With more people online and more bandwidth available, more and more hive minds will appear, and that will increase the chance that a few will transcend the norm by greater and greater amounts.

The emergent result may well be that some will exhibit behavior indicating intelligence at a level beyond that of individual humans, capable of "thinking" thoughts no single human could conceive of. Even with industrial-level technology, that's already happened. Science, in particular, is such a thing, as is modern engineering. Engineering at a primitive level has been with us since the creation of the first stone tools. But science as we now know it is very recent, only going back about 500 years (though one can identify predecessors extending back millennia before that).

A while back, I e-mailed Steven to ask whether he has any thoughts on the Technology Singularity. To my surprise, he replied that he had never heard of it, and gave the impression that he wasn't terribly interested in the subject. From reading this latest piece, I understand better why that would be the case. Den Beste posits that true intelligence may be analog, not digital, and that—because of initial encoding errors compounded by the "butterfly effect"—may never be reliably encoded in a digital environment.

If that's the case, then no amount of digital hardware, no matter how fast, parallel or well connected, can ever really be intelligent in the way that we are, with the degree of capability and versatility we have. I cannot say for certain that's the case, but I have a strong suspicion that it is. There will eventually be a computer system which can beat any human at chess. It could be built now, except that no one cares to spend the money. But that system won't also be able to drive a car, write poetry, laugh at a joke, watch a movie and then summarize it later, or do all the other kinds of things that human chess grandmasters can do in addition to playing chess.

I'm not sure I entirely understand this objection. If we were eventually to upload a human brain via advanced scanning technology and run it as an emulation, it seems to me that the initial errors and butterfly-effect compounding would impact the processes running on the emulation, not whether the emulation worked. It doesn't seem to me that it's a question of whether the emulation would be a functioning brain, just whether it would be the same brain. In other words, if it were my brain that were uploaded, the question wouldn't be whether the emulation is capable of laughing at a joke. The question would be whether the emulation and I find the same jokes funny. Initial errors and the butterfly effect might soon see to it that the emulation and I are distinct brains with distinct personalities—but I don't see how these effects would prevent the emulation from running the same kinds of processes (that is, thinking thoughts of the same level of sophistication) as my own brain is capable of.

If a human brain can be uploaded and can function as well as (however differently from) its original, then strong AI has been achieved and a door is opened to a very different kind of superintelligence.

Posted by Phil at 09:51 AM | Comments (6) | TrackBack

ITF #109

In the Future...

...canny Brazilians will turn the tables on the dam piranhas and start farming them for the novelty food market.

Futurist: Posse member Robert Hinkley

Posted by Phil at 07:13 AM | Comments (0) | TrackBack

December 28, 2003

Risible Clerics

Apparently both Glenn Reynolds and Rod Dreher of the National Review Online found Bishop John Chane's Christmas message at the National Cathedral to be laughably syncretic, embracing as it does a view of Christianity that might allow some validity to other religious traditions (specifically Islam). The offensive, "risible" quote:

And what was God thinking... when the Angel Gabriel was sent by God to reveal the Law to Moses? And what was God thinking... when the Angel Gabriel was sent by God to reveal the sacred Quran to the prophet Muhammad? And what was God thinking... when the Angel Gabriel was sent by God to reveal the birth of Jesus Christ, the Son of God?

In response, Dreher quips:

It will come as news to many Christians that an Anglican bishop believes that the Islamic revelation was true, not false, as Christianity teaches implicitly. Nevertheless, a Michigan lawyer friend suggests that Bp. Chane has opened up a new and exciting area for exploring ecumenism/syncretism. Now, the Episcopalians of Washington can have a gay wedding in the church, and push a wall over on the two grooms at the reception.

Dreher might be astonished to discover that there are quite a few Anglicans who believe that Islam is a true revelation precisely to the extent that it does not contradict the Christian revelation. Or at least that it could be true to that extent. Some might say that it's a true revelation that was distorted along the way, or even a false one that took on some aspect of truth as it progressed. As that risible, ultra-liberal syncretist C. S. Lewis put it:

I have been asked to tell you what Christians believe, and I am going to begin by telling you one thing that Christians do not need to believe. If you are a Christian you do not have to believe that all other religions are simply wrong all through. If you are an atheist you do have to believe that the main point of all the religions in the whole world is simply one huge mistake. If you are a Christian, you are free to think that all these religions, even the queerest ones, contain at least some hint of the truth.

Chane is free to believe that Islam has some truth in it, and even that the Angel Gabriel spoke to Muhammad, without compromising his faith to syncretism or other heresy. This doesn't mean that he has to embrace the entire faith of Islam from whole cloth. And as for the apparent contradiction between an Episcopal Church which may soon formally recognize gay unions and a religion that teaches that the death penalty should be applied to homosexuals--well, Chane faces that contradiction whether Islam is added to the mix or not. The Old Testament is pretty straightforward on this:

You shall not lie with a male as one lies with a female; it is an abomination. Also you shall not have intercourse with any animal to be defiled with it, nor shall any woman stand before an animal to mate with it; it is a perversion.
Leviticus 18:22-23

If there is a man who lies with a male as those who lie with a woman, both of them have committed a detestable act; they shall surely be put to death. Their blood guiltiness is upon them.
Leviticus 20:13

And, no, I'm not suggesting any moral equivalency between modern Christianity and Judaism (which have decided to abandon or "re-interpret" such teachings) and present-day Islam, which all too frequently continues to enforce them. I'm merely pointing out that if Chane does not find such teachings compelling in his own holy book, he's not likely to have much use for them from somebody else's. Dreher, on the other hand--with his all-or-nothing approach to religion--presumably would favor continuing to enforce the ancient Levitical code.

Here's more from Chane's risible sermon.

Were these just random acts of association and coincidence or was the Angel Gabriel who appears as the named messenger of God in the Jewish Old Testament, the Christian New Testament Gospels, and the Quran of Islam, really the same miraculous messenger of God who proclaimed to a then emerging religious, global community and to us this morning that we are ALL children of the living God? And as such we are called to acknowledge that as Christians, Jews and Muslims we share a common God and the same divine messenger. And that as children of the same God, we are now called to cooperatively work together to make the world a haven for harmony, peace, equality and justice for the greatest and least among us.

May we be lifted up as if on the outstretched wings of a graceful, soaring eagle and begin our journey anew in search of religious harmony, and a new global peace... a peace that passes all understanding. And may this day and the many that follow be days filled with the memory of this Christmas... a Christmas where the miracle of God's love was reborn into the world, became flesh, and dwelt among us. Amen

Frankly, I find this message to be pretty short on politics and reasonably long on the whole "Peace on Earth, good will to men" thing. Chane doesn't specifically condemn the war; he just points out that he thinks war is a sin. He doesn't clamor for justice for the Palestinians or engage in "national repentance." He just comes out in favor of people getting along with each other in God's name.

And on Christmas Day. The nerve.

Posted by Phil at 01:51 PM | Comments (11) | TrackBack

ITF #108

In the Future... the celestial barbershop quartet, white dwarves, quasars, and pulsars will sing the other three parts.

Futurist: Posse member Joanie

Posted by Phil at 08:30 AM | Comments (0) | TrackBack

December 26, 2003

An Iranian Perspective

Here's evidence, via Ray Kurzweil, that the rest of the world is not only paying attention to developments in the field of nanotechnology, but beginning to consider seriously what the implications of these developments might be. Sam Ghandchi, the Editor/Publisher of IranScope, writes as follows:

The same way, the nanotechnology can be the most important technology that may replicate fuel cells, to put an end to the age of oil, and not only it would impact the economy of oil producing countries like Iran, but it can change the whole economy of energy production in the world, which is the basis of all industrial production worldwide, and can make a huge impact on poverty and wealth worldwide.

Basically scientists, in the last 300 years, have been describing the world by various formulas, and if genetics has been one of the first sciences to use this knowledge to remake a part of the natural reality in a controlled way, nanotechnology can remake everything in the world more intelligently, and it can create the environment for intelligent tools to be in an effective interaction with the physical world, and change nature to a wealth producing reality for the human species, and at the same time help us to go beyond our own biological limitations and deal with issues like cancer. There is so much at stake here that leaving this work, can hurt any nation, and the whole world at large, from the real potentials of our times, and can seriously impede the development of post industrial global society.

I'd sure like to see more of this kind of analysis in the US media. Read the whole thing.

Posted by Phil at 08:43 AM | Comments (20) | TrackBack

It's the Soot

Randall Parker reports that carbon dioxide may not be the actual culprit in global warming (if in fact there is a culprit at all):

While this may not be immediately obvious this report seems like good news. Why? Because it is a lot cheaper to reduce soot emissions than to reduce carbon dioxide emissions. If a substantial source of warming can be cancelled out cheaply then that buys time (assuming it really is necessary to intervene in the first place) to develop technologies that will allow carbon dioxide emissions to be reduced at much lower cost.

So let me see if I've got this straight. The world may be getting warmer, but then again, it may be getting colder. We aren't sure. If it is getting warmer, this may be due primarily to carbon dioxide emissions. Or it may be due to soot. Or (let's face it) it may be due to something else.

Then again, it may not be so much that the world is getting warmer or colder, maybe it's getting darker.

Whatever the heck is going on, it's obvious that we should start passing and enforcing some really restrictive laws pronto.

Posted by Phil at 07:56 AM | Comments (2) | TrackBack

December 25, 2003

Christmas Past

Rand Simberg has an interesting reminiscence of a Christmas Eve 35 years ago.

Posted by Phil at 03:53 PM | Comments (1) | TrackBack

Bursting the Wi-Fi Bubble

Roland Piquepaille's Technology Trends explains why it ain't happening in 2004, either:

So if a hotshot from France Telecom or AT&T reads this, please read it twice -- and carefully. The universe is not limited to San Francisco or New York, where Verizon customers have free Wi-Fi access -- after they pay for other services. Nobody has found a right business model for Wi-Fi today.

But I'm sure of one thing. Paying $10 an hour for Wi-Fi access is almost twice as you pay for a movie. Would you pay $20 to see a movie? Probably not. So will you pay $10 to use a Wi-Fi connection for one hour? Certainly not.

Not surprising, really. But disappointing anyway.

Posted by Phil at 03:50 PM | Comments (0) | TrackBack

New Computer

Don't mind me. Just trying out the new Toshiba laptop the SpecuWife gave me for Christmas. Nothing more to see here.

Move along, now.

Posted by Phil at 03:42 PM | Comments (1) | TrackBack

December 23, 2003

ITF #107

In the Future...

...we'll also be able to get Meditation Of The Day direct from the lord Buddha.

Futurist: Posse member Robert Hinkley

Posted by Phil at 03:53 PM | Comments (0) | TrackBack

Light Blogging

I have the flu.

And today is my last official work day of the year.

And while I may not be going out and lining up any Hooter's Girls, I do still have some shopping to do.

So don't be surprised if not a creature is stirring around here for a while (not even a Lazarus mouse) unless "El Jef" decides to post some more LoTR wisdom—or anything else he'd like to do; go nuts, man—or I just start feeling better and give up on (I mean finish with) my other priorities.

Posted by Phil at 08:45 AM | Comments (1) | TrackBack

FuturePundit on Flying Cars

Randall may not use the term, but we know what he's talking about.

Actually, from the description, these things conjure up not so much an image of a car as an image of those cylindrical things that Dick Tracy used to fly around in. Well, hey, one of those could come in pretty handy too.

A while back, I updated all the reasons why we don't yet have flying cars. One of the best thought-out answers can be found in our recent interview with John Smart, but I'm not giving you a link right to the spot. Go find it!

A tip of the sombrero to El J G, a man who (in his own words) "don't need no stinking hat tip."

Posted by Phil at 08:30 AM | Comments (1) | TrackBack

December 22, 2003

This Week 12/22/03

As Scrooge said to the Ghost of Christmas Yet to Come, "Spirit, are these the shadows of things that will be, or things that may be?"

With that important question in mind, here are some things that might happen this week in The Speculist:

Charlie Brown and Linus will set off across town in search of a Christmas tree to use for their play. When the results are deemed unsatisfactory by the rest of the gang, Charlie Brown will stalk off with the tree in tow muttering, "Well, next time get your own damn tree, beeyotch."

Um, if you haven't lined up your Drummers Drumming, Pipers Piping, and (especially) Lords-a-Leaping by now, you're probably not going to make it. You might drop by your local Hooters and persuade some of the waitresses to fill in as Ladies Dancing and/or Maids-a-Milking, but you didn't get the idea from me. Hint: the five gold rings is all she's really interested in, anyway.

A big day. A lot can happen.

Uncle Billy will accidentally hand Old Man Potter the $8,000 he was about to deposit wrapped up in a newspaper and immediately realize his mistake. The resulting brawl between a stupid old drunk guy and wheelchair-bound Barrymore will be a truly ugly sight, and will end up on a Fox show where they take clips from security cameras and put funny music behind them.

The attorney for a patient at New York's Bellevue Hospital will introduce sack loads of mail as evidence that his client is not insane. (A later ploy to slip the judge a $1 bill to somehow prove the same thing will be rejected as stupid and illogical.) In making his ruling, the judge will tell about how the US Postal Service has screwed up his Christmas cards the last four years in a row, and can't be considered an authority on anything but making grown men wear dorky-looking shorts. It will be back to the padded cell for Kris.

Our intrepid adventurers will make their way back to the North Pole from the Island of Misfit Toys only to find that, under the right conditions, a luminescent nose isn't that much help. With his sleigh newly outfitted with halogens, Santa will head off into the night still chuckling about "that red-nosed freak."

Also, we'll have a new chapter of Stillness.

God bless us, every one.

In honor of Boxing Day, we'll spend some time discussing just what the heck Boxing Day is.

As of this day, we can all be sufficiently sick of eating Christmas-related food. If there's any more of that Figgy pudding left...let it go, man. Just let it go.

Also, there may be some actual blogging, including an interview with AI George Bush.

Posted by Phil at 09:47 AM | Comments (1) | TrackBack

Gimli's Law

WARNING: Contains a Minor "Return of the King" Spoiler

Gimli's Iron Law of Body Count Math:

(applies equally to captured despots and completed projects, as well)

[No matter how big the Oliphant you slay is . . .]

"It still only counts as one!"

Posted by Michael S. Sargent at 08:40 AM | Comments (3) | TrackBack

December 19, 2003

Is It Getting Darker, or Is It Just Me?

One of the reasons I love reading FuturePundit so much is that Randall Parker frequently introduces startling ideas that either completely refute conventional wisdom or that depart from it in totally unexpected ways. First it was global cooling, now it's global dimming.

How anyone can be glib or confident in trying to explain the impact humanity has on climate is beyond me. As Randall eloquently says:

The scale of human activity has gotten so large that we inevitably change the climate to some extent. We do not know yet just how much we are changing the climate because we do not know what the climate would be like in our absence. Since the human population is growing and parts of the world are rapidly industrializing human influence on the climate looks set to grow even further. But since there are so many human activities that cause climate effects and since some of those effects cancel each other out (at least to some extent) any effort to reduce only a single pollutant or to reduce the impact of only a single method of modifying our environment will have the effect of strengthening the impact of other things that we do.

Indeed and indeed.

Posted by Phil at 09:20 AM | Comments (3) | TrackBack

Look What the Cat Dragged In

It looks like Charles Murtaugh finally remembered the URL to his Blogspot account and has started posting again. Welcome back, Charlie for however brief a time you remain.

He's got some good stuff on there, so be sure and stop by. And thanks to this, I no longer have to feel bad about calling him a "buzzkill." I only recommend you read far enough to glean the epithet. The rest of that particular entry is a lot of deluded buzzkill nonsense.

So, yeah, he's definitely back!

Posted by Phil at 08:41 AM | Comments (1) | TrackBack

ITF #106

In the Future...

...personal trainers will also take robots through rigorous pilates routines and show them how to avoid injury on the bench press.

via GeekPress

Posted by Phil at 08:23 AM | Comments (0) | TrackBack

Beamed to Japan

Ray Kurzweil has done it again. You might recall a while back when we reported that the entrepreneur/visionary teleported himself to the World Economic Forum in London. Well now he's beamed himself to Tokyo for Sony Technology Week:

Kurzweil appeared via Teleportec's two-way "Teleportation Technology," which creates the appearance of a life-size, 3-D person at a remote location and gives the speaker telepresence to achieve two-way eye contact for real-time, two-way interaction.

Cool. That guy sure gets around, doesn't he?

On the other hand, I hate to think of what this technology is going to do to business travel. What about frequent flyer miles? Hotel points? Bar tabs? Padded expense reports? You can't get any of that stuff via teleportation.

But then on the other other hand, what about airport security? Delayed flights? Canceled flights? Surly airport personnel? Surlier airline personnel? Bad food? Constipation? Insomnia? Separation from family? Jet lag? Travel-induced exhaustion, depression, chemical dependency?

I guess it might take a bite out of those as well.

I reiterate: cool.

Posted by Phil at 08:18 AM | Comments (0) | TrackBack

More on the Chicken AIs

About those Lord of the Rings AI soldiers who supposedly ran away when the going got tough. Douglas Whitehead has some serious analysis of the subject:

My first reaction to this is to guess that if one tries to program 200,000 agents with what looks like free but restricted behavior, you will inevitably get a good portion of them going the other way. That is, if the battle is in front of them and they have the capacity to go forward, backward, left, or right, some of them will go backward or turn around. Plus, if they have the ability to go at different speeds, it would look like some are running the wrong way. Yet, to say that this constitutes "running away" is simply naive.

Well, I guess that makes more sense than my theories.
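Whitehead's guess is easy to demonstrate with a toy simulation. To be clear, this is a sketch of his reasoning, not the actual crowd software used on the film, and all the parameters are made up: give each of many agents an independent, uniformly random choice among four headings, and about a quarter of them will inevitably face away from the battle.

```python
import random

# Toy illustration of Whitehead's point, not the film's actual crowd engine:
# if each agent independently picks one of four headings, roughly a quarter
# will head "backward" -- and look, to the animators, like deserters.
def count_retreating(n_agents, seed=0):
    rng = random.Random(seed)  # seeded for a repeatable result
    headings = ["forward", "backward", "left", "right"]
    return sum(1 for _ in range(n_agents) if rng.choice(headings) == "backward")

retreating = count_retreating(200_000)
print(f"{retreating} of 200,000 agents ({retreating / 2000:.1f}%) headed 'backward'")
```

No ethics, no cowardice, no French hackers required: uniform randomness alone produces the "running away" effect.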

Posted by Phil at 06:29 AM | Comments (1) | TrackBack

December 18, 2003

Spaceship One Rockets On

They certainly picked an auspicious day for their proof of concept. Chris Hall has the details.

Speaking of auspiciousness, yesterday I should have mentioned that Rand Simberg was not only covering the Kitty Hawk anniversary here, he was doing it here and here as well. Anyhow, if you just can't get enough Simberg (who can?) read this (which ties back in with what this entry was supposed to be about.)

Posted by Phil at 12:54 PM | Comments (0) | TrackBack

Mapping the Development Space

In his Tech Central column, Glenn Reynolds introduces four categories of nanotechnology developments:

  1. Fake (where it's basically a marketing term, as with nanopants);
  2. Simple: high-strength materials, sensors, coatings, etc -- things that are important, but not sexy;
  3. Major: advanced devices short of true assemblers;
  4. Spooky: assemblers and related technology (true Molecular Nanotechnology).

This is as good a breakdown as I've seen. These kinds of categories can be very helpful in analyzing developments in the field. The placement of a particular development into one of these categories can be derived from the answers to two questions:

  • What is the overall societal impact of this development?
  • What significance does this development have in leading towards true Molecular Nanotechnology?

Take any particular development and rank it on a scale from 0 to 10 in terms of its impact in these two areas. The answers to the two questions provide a set of coordinates that you can use to plot a particular development in one of the quadrants shown below. The quadrants match closely to Glenn's categories, although I think we have to hedge "Spooky" just a bit. We haven't had any truly spooky developments yet. But anything that scores higher than five on both questions is at least getting close.
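The scoring scheme above is simple enough to sketch in code. Everything below is illustrative: the scores are made-up placeholders, not my actual rankings, and the quadrant labels follow one possible reading of Glenn's categories (above the threshold on both axes being at least near-"Spooky").

```python
# Hypothetical (societal impact, MNT significance) scores, each 0-10.
# These are placeholder values for illustration only.
developments = {
    "nano pants": (2, 1),
    "self-assembling transistor": (8, 7),
    "nano-bucket": (3, 6),
}

def quadrant(societal, mnt, threshold=5):
    """Map a (societal impact, MNT significance) coordinate pair to a
    quadrant, using one possible reading of Glenn's four categories."""
    if societal <= threshold and mnt <= threshold:
        return "Fake"
    if societal > threshold and mnt <= threshold:
        return "Simple"
    if societal <= threshold and mnt > threshold:
        return "Major"
    return "Spooky (or close to it)"

for name, (societal, mnt) in developments.items():
    print(f"{name}: {quadrant(societal, mnt)}")
```

Swap in real scores from a panel of analysts and the same function sorts any development into the map.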

So let's look at a few items either that Glenn mentioned or that we've covered in the past few months here on this site.





Nano Pants

No real contribution to the field of nanotechnology. Only slight impact on society (about the same as most other "pants breakthroughs").

An important step towards true molecular manufacturing technology, but in and of itself not likely to change life as we know it.

In addition to the obvious crime-fighting applications, the availability of such fabrics will have a tremendous impact on the garment industry (especially where outdoor/sporting clothing is concerned) and will lead to significant changes in what we consider appropriate building materials for homes or modes of transportation. Still, it does little for the advance towards molecular manufacturing.

A key piece of infrastructure needed for molecular manufacturing. Again, not likely to change the world on its own.

Self-assembling components are crucial to the development of molecular manufacturing. The availability of such components promises to have a huge impact on every aspect of life.

The ability to use nanoparticles to fight diseases such as cancer and Alzheimer's will have a substantial impact on society as a whole. Developing such techniques will also play some part in helping to move the field along.



So if we chart these developments out, we get a development space that looks something like this:

A few caveats about this diagram of the nanotechnology development space:

These are probably not enough examples; we need a bigger chart covering dozens of developments in the field.

The axes may need work. The self-assembling transistor is given a high score for societal impact based on the potential impact of self-assembly, generally. Maybe this doesn't make much sense when the nano-bucket and nano-rotor were given much smaller scores. I think the Y axis works, but the X axis may need some retooling.

The scoring is based on one man's opinion (doing the best he can!), but what we really need is a synthesis of views from a number of different industry analysts. As with the developments shown, the more and varied the participants, the more valuable the analysis.

Even so, with all these caveats in mind, there are a few things to be gleaned from this map of the development space:

There is activity throughout the entire space; something is happening in each of the four quadrants.

The developments listed form a circle around the edge of the space. This could indicate that much of the activity within each quadrant is occurring in isolation, with distinct groups pursuing their goals for distinct reasons. As the field matures, I would expect to see the developments more evenly scattered, with most of the developments occurring around the axes, not the edges.

Anyhow, this is a start. I'd appreciate some input on the category definitions and whether these items are ranked correctly.

Posted by Phil at 11:56 AM | Comments (2) | TrackBack

December 17, 2003

100 Years Ago Today...

...in Kitty Hawk, North Carolina, a couple of bicycle repairmen up and changed the world with their amazing Flying Machine. This is a day to celebrate, and remember.

Rand Simberg has some thoughts (including why I shouldn't refer to Orville and Wilbur as "bicycle repairmen"), and Professor Hall has a round-up of links (scroll down a little to find them).

Posted by Phil at 07:13 AM | Comments (0) | TrackBack

December 16, 2003

ITF #105

In the Future...

...Boy Scouts will be required to redo the Orienteering Merit Badge.

Futurist: Posse member Chris Hall

Posted by Phil at 05:41 PM | Comments (0) | TrackBack

Like Driving a Porsche in 1978

Bigwig just doesn't get it:

[W]hen you have an FTL drive, who cares who the President is?

Think, man. Think. What kind of regulatory environment do you want in place after the aliens grant you your faster-than-light spacecraft? I can see the ad campaign now:

.000055C Saves Lives

Sure, you'd still be screaming along at 37,000 MPH. But how long would it take to get to Alpha Centauri at that rate?
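Here's the back-of-the-envelope arithmetic behind that quip. The assumed constants are c ≈ 670,616,629 mph and a distance of about 4.37 light-years to Alpha Centauri:

```python
# Quick sanity check on the ".000055C" speed limit.
# Assumed constants: c in mph, Alpha Centauri distance in light-years.
C_MPH = 670_616_629
FRACTION_OF_C = 0.000055
LY_TO_ALPHA_CEN = 4.37

speed_mph = C_MPH * FRACTION_OF_C          # roughly 37,000 mph
# At a constant fraction of c, travel time in years is just
# distance (light-years) divided by that fraction.
years = LY_TO_ALPHA_CEN / FRACTION_OF_C    # on the order of 80,000 years

print(f"{speed_mph:,.0f} mph")
print(f"{years:,.0f} years to Alpha Centauri")
```

Call it eighty thousand years, one way. Vote carefully.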

Posted by Phil at 06:42 AM | Comments (3) | TrackBack

Well, Which One is the Prettiest?

A simple multiple choice question.

Genetically speaking, human beings are most closely related to:

  1. The fruitfly Drosophila
  2. The nematode worm Caenorhabditis
  3. Coral

Not only closer, but much closer. Very interesting.


Posted by Phil at 06:15 AM | Comments (0) | TrackBack

Run Away!

Paul Hsieh reports some fascinating behind-the-scenes details on the making of Return of the King:

The huge Pelennor Fields battle scene in Return of the King was made possible only by sophisticated computer generated warriors, which are programmed with a certain degree of autonomy to make their movements more realistic. But one big problem with the code for the first iteration of the 200,000+ warriors "agents" was that they kept wanting to run away from the battle. (Via Boing Boing.)

I have a couple of different responses to this story. As a futurist, my reasoning goes something like this:

Artificial Intelligence truly is the heir of the mantle of human intelligence. Already our children, our synthetic progeny, are beginning to show their ethical superiority to us, shedding off the evolutionary dead skin of violence for which they have no use.

However, as a warblogger, I come to a somewhat different conclusion:

Looks like some hacker got in there and made Aragorn's soldiers French.

What can I tell you? The world is a complex place. If the latter is true, then I hope there will soon be an AI version of Jacques Chirac to account for his soldiers' behavior. But fair warning, Jacques: don't mess with this guy.

Posted by Phil at 05:48 AM | Comments (3) | TrackBack

December 15, 2003

George Bush, Robot

George W. Headroom?

This sounds like a "news" story from ScrappleFace or the Onion, but it's for real:

The new AI Bush program is the most advanced AI deployment of EllaZ Systems. The program “Ella” won the prestigious worldwide Loebner Prize Contest in 2002 as the “most human computer.” AI Bush is a further development of that technology, bringing skills and entertainment you'll find nowhere else.

According to the developers, AI Bush is, by chatbot standards, really smart:

The program includes the WordNet lexical database, which gives AI Bush a sizable vocabulary of 99,000 unique definitions and 120,000 words and small-word groups.

A collection of classic books on philosophy, history, adventure, drama, literature also helps, along with thousands of Convuns (conversational units) that include images, trivia, jokes, poems, anecdotes, limericks, fables, quotes, maxims, and tongue-twisters. It can use XML web services to retrieve changing information on weather, stock prices, and currency exchange rates, along with the CIA World Factbook 2003.

Plus, the chatbot version of the President possesses some skills that the real W doesn't have (at least as far as I know): he's an excellent chess player and enjoys interpreting the I Ching. Interestingly, the developers seem intent neither on promoting President Bush (although the chatbot comes with a game called "Reelect Bush") nor on making fun of him, though the FAQ does include this snarky tidbit:

Finally, AI Bush exposes users to some badly mangled grammar spoken by the real-life GWB. EllaZ Systems disclaims responsibility for the foreseeable injury this could cause to young minds.

The folks at EllaZ Systems seem more interested in creating a chatbot with a recognizable persona than they are in making a political statement. Unlike most chatbots, AI Bush has something to do: he's got a country to run. And in trying to get reelected, he has a clearly stated goal for the future.

Users can also teach AI Bush new information in specialized areas. Based on the following description of a possible strategy for the Reelect Bush game, there's at least one current event that they need to get their chatbot up to speed on:

Should that CIA plan for Iraq be approved? Bill Clinton and Albert Gore have a modest proposal. Consider holding back some news on capturing you-know-who until just before the polls open? There is an unpopular spending proposal that saves lives but gets few votes. Dust off that flight suit for a Space Shuttle ride? GWB's future and conscience are in you hands!

I realize that ordinarily when I write on this subject, I include a transcript of the interaction with the bot in question. I haven't chatted with AI Bush yet because the download costs $29.95, and I haven't decided whether I want to shell out the money or try to get somebody to give it to me for Christmas. By the way, this would make a great gift for that accelerating change enthusiast on your shopping list who already has enough books.

Posted by Phil at 06:58 AM | Comments (1) | TrackBack

Memes Like a Good Idea

Howard Lovy on the nano-meme:

[T]here is an independent nanotech movement. It is composed of tiny, autonomous cultural ideas and associations that are self-assembling, and even self-replicating, until eventually it will become so pervasive that it will be impossible to ignore.

And that's when those who represent nanotech interests in government, business and the media will finally look beyond their insular world of scientists, businesspeople and self-congratulatory speeches and prizes to see, much to their surprise, that nanotechnology became embedded in popular thought and mythology without any guidance from them. Depending upon the nature of the nano meme, the "official" nanotech community will either launch campaigns against it, or take credit for its existence.

Read the whole thing.

Posted by Phil at 05:58 AM | Comments (0) | TrackBack

December 13, 2003

Future Roundup 12/13/03

Here are all the In the Future... predictions for the past two weeks, including #99 by Mike Sargent and #100, which I contributed. Since we passed the century mark, it's been all Robert Hinkley. I have this feeling that in the future, we might be changing the name of this feature to Rob Hinkley's Semi-Skimmed Predictions.

Or something like that.

In the Future...

...the Easy-Bake Home Cloning Kit will outsell this item.

100! 100! 100! 100! 100! 100!
...we'll wonder why a group who was so prepared seemed to have so much trouble getting things done.
100! 100! 100! 100! 100! 100!

...programmers will be accused of loitering with intent, littering the public highway and armed robbery.

...everyone will have at least one hidden wireless hatcam.

... the inevitable searchable archive of hatcam footage will allow us to check whether what we remember happening really happened at all.

...further astonishing research will suggest that when men drink beer their judgement becomes impaired and they engage in riskier behavior.

That does it for now. Until next time, we'll see you in the future.

Posted by Phil at 08:38 AM | Comments (0) | TrackBack

December 12, 2003

Lifeline Nutraceuticals

A reader by the name of Jackson was kind enough to provide a link to the web page of Lifeline Nutraceuticals, the Denver-based company we reported on earlier this week. The company is preparing to go to market with what they claim is a free-radical-fighting, life-extending supplement called Protandim. Owing to their geographic proximity to Speculist HQ, I'm going to try to pay a call on them and learn more about what it is they're doing.

Stay tuned.

Posted by Phil at 11:03 AM | Comments (3) | TrackBack

Is Blogging Writing?

Postrel says no. Simberg says yes.

I'm afraid there's no 100% satisfactory answer to that question. I was just reading the other day about the distinction between linkers and thinkers (I can't remember where I saw this; if someone could remind me, that would be great.) This distinction works up to a point, but it's ultimately wrong. Even the busiest linker does quite a bit of thinking, and even the purest thinker links from time to time.

Anyway, Rand, I believe what Virginia meant is that she's going back to do some paid writing, which I'm sure you'll agree is good work if you can get it.

Posted by Phil at 09:42 AM | Comments (3) | TrackBack

Don't Drink the Water, Either

Randall Parker presents some thoughtful analysis on what we might really expect of life-forms on those other earth-like planets:

It is incredibly common in science fiction movies and television shows for humans to mate and reproduce with aliens and to find edible food on distant planets. But if there is life on other planets both of these possibiliities are very unlikely. Other lifeforms will probably use different combinations of compounds for genetic encoding and for building tissues. Species on other planets may use amino acids to build proteins but probably not the exact same set of amino acids humans use. Ditto for sugars and other biological compounds.

So I guess English-speaking humans with facial ridges are out. And no human-alien hybrids a la Spock or B'Elanna Torres. Oh, well.

Randall is our official buzzkill of the week.

Posted by Phil at 09:24 AM | Comments (2) | TrackBack

Computer Talk

In our discussion last week, John Smart had this to say about significant developments which need to occur before the most profound benefits of information technology can present themselves:

[D]eveloping cheap, fat data pipes, both wired and wireless, and a growing set of useful Linguistic User Interfaces (LUIs) are obvious candidates for our nation's greatest near term ICT developmental challenges. Just like the transcontinental railroad was a great goal of the late 1800's, getting affordable broadband to everyone in this country by 2010, and a first generation LUI by 2015 appear to be the greatest unsung goals of our generation. Now we just need our national, international, and institutional leaders to start singing this song, in unison.

Your father used a TUI (text-based user interface). You use a GUI (graphical user interface). Your kid will primarily use a LUI (voice-driven interface) to speak to the computers embedded in every technology in her environment. She'll continue to use TUIs and GUIs, but only secondarily, not for her typical, average interaction with a machine. Your grandchildren will use a NUI (neural user interface), a biologically-inspired, self-improving, very impressive set of machines.

One of the fundamental component technologies required to develop an LUI is Natural Language Processing (NLP). Via Kurzweil, we have news that NLP technology has just become a lot more easily accessible:

Zhang Le, a Chinese scientist working on natural language processing, packed the most important language-analysis and processing applications into a single Linux-based bootable CD: Morphix-NLP.

The CD includes language-parsing systems (such as part-of-speech taggers), machine learning tools, and a software-based speech synthesizer.

As predicted, developments in this field are heating up. And, as suspected, progress seems to be occurring outside of the US.

Posted by Phil at 09:06 AM | Comments (1) | TrackBack

December 11, 2003

If I had $20...

...for everybody who's ever read The Speculist, I'd be a millionaire! We passed 50,000 unique visitors yesterday on the heels of the Professor's generous linkage. (That's the Law Professor, not the Rocket Science Professor. But Chris has been no slouch in providing links, don't get me wrong.) It occurred to me that I haven't published a traffic summary for October or November, so we'll let this report suffice. You'll notice that things slowed down a little in November, but they appear to be picking back up again.

I (almost) wish we hadn't got that Instalanche. Some recent pieces have been widely linked to, especially the John Smart interview. I would have liked to see us match the traffic levels of earlier months (where we got a couple or three 'lanches) with no help from Uncle Glenn. I just don't want it so bad that I would ever ask him not to link! *

Anyhow, if you're reading this, thanks for being one of more than 50,000!

* Or, more accurately, refrain from begging him to link.

Posted by Phil at 05:06 PM | Comments (1) | TrackBack

Pop Quiz

I hope you've all been studying. You have thirty minutes. And don't forget to put your name in the upper right-hand corner.

Read the entire piece linked here, paying particular attention to the sections excerpted below.

Global warming killed 150,000 people in 2000, and the death toll could double again in the next 30 years if current trends are not reversed, the World Health Organization said Thursday.

One heat wave killed 20,000 people in Europe alone this year, the WHO said, launching a book on health-weather links at a U.N. environment conference.

The book estimated climate change was to blame for 2.4 percent of cases of diarrhea because, Campbell-Lendrum said, the heat would exacerbate bacterial contamination of food.

Climate change was also behind 2 percent of all cases of malaria, because increased rainfall created new breeding grounds for mosquitoes, which carry the disease, he said.


1. What are the major logical fallacies employed by the authors of this piece?

2. Using the same logic, explain how a global increase in temperature could save 500,000 or more lives per year.

3. True or false: those claiming a death toll from global warming are pushing a religious agenda. Explain your answer.


Posted by Phil at 04:32 PM | Comments (3) | TrackBack

Roll Your Own

If you like doodling spirals as much as I do, you should enjoy this spirograph java applet (via GeekPress).

Created by Anu Garg.


Posted by Phil at 07:06 AM | Comments (0) | TrackBack

Earthlike Planets Common


New research indicates Earthlike planets might be common. In 44 computer simulations of planet formation near a sun, astronomers found that each simulation produced one to four Earthlike planets, including 11 "habitable" planets about the same distance from their stars as Earth is from our sun.

When I was a kid, I remember being taught in school that there was no good reason to believe that planets were common elsewhere in the galaxy. And even if there were other planets, chances are there weren't many like Earth. Now it looks like the galaxy is full of them. At the rate we're going, we may find that we're living in the Star Trek universe—full of English-speaking humans with funny facial ridges.

Posted by Phil at 06:49 AM | Comments (9) | TrackBack

December 10, 2003

The ifbots Are My Favorite

I missed this the other day, a robot photo gallery from Roland Piquepaille's Technology Trends. In addition to the cute little ifbots, check out the intimidating Isamu. Now that's a robot.

Interestingly, all of these robots run on Linux. Not a Mac or a Windows-based 'bot among them. The Lawgiver alone knows what this will do to the social order.

Posted by Phil at 11:49 AM | Comments (2) | TrackBack

ITF #104

In the Future...

...further astonishing research will suggest that when men drink beer their judgement becomes impaired and they engage in riskier behavior.

Futurist: Posse member Robert Hinkley

Posted by Phil at 11:37 AM | Comments (0) | TrackBack

Our First Attempt at Global Warming

Apparently it was a success. FuturePundit Randall Parker reports:

Farming And Forest Destruction Prevented Ice Age 5000 Years Ago

In a paper published in the scientific journal Climate Change Dr. William Ruddiman argues that humanity prevented an ice age that would otherwise have begun about 4,000 or 5,000 years ago.

Both should have continued declining through the present day, leading to lower temperatures, and a new ice age should have begun 4,000 to 5,000 years ago, Dr. Ruddiman said. Instead, levels of carbon dioxide reversed 8,000 years ago and starting rising again. The decline in methane levels reversed 5,000 years ago, coinciding with the advent of irrigation rice farming.

If this argument is correct then humanity, by engaging in rice farming and deforestation, reversed a trend of decline in atmospheric carbon dioxide and methane and, by doing so, prevented a cooling trend that would have brought on another ice age. This is a strong argument in favor of climate engineering.

Need I say it? Read the whole thing.

Posted by Phil at 11:31 AM | Comments (0) | TrackBack

I Could Go On and On About This

Via GeekPress, here's the scoop on the new Microsoft Word AutoUnsummarize feature.

(I wonder if Steven Den Beste uses it?)

UPDATE: Thanks, Karl. How strange. For a minute there, I had an unintended, non-paradoxical near-circular reference.

Posted by Phil at 10:45 AM | Comments (3) | TrackBack

Well, If We're Going to Go All Shakespearean

I'm not sure that I would say that it was "much ado about nothing." I think all that sound and fury on yesterday's Hugh Hewitt show signified something, I'm just not sure what. As I pointed out in my e-mail to Glenn, there appears to be a good deal of personal animosity between these two. It would be helpful to know what the level of acrimony was between Gaffney and Norquist before the charges surfaced. When they started arguing over rent payments and the use of conference rooms in the middle of a debate that goes to the heart of national security, I had to take a step back. Maybe it would be helpful if the person raising these issues wasn't someone who has essentially shared an office with Norquist all this time.

There's no question that Gaffney raises a number of questions that need to be answered, and describes some connections which, if accurate, are more than just a little disturbing. But any Oliver Stone or Art Bell fan can tell you that connect-the-dots is the fun and easy way to find a major conspiracy where there was none before. Gaffney is almost certainly not the racist that Norquist makes him out to be, but he does little to help his credibility when he has to backpedal after alleging that a Muslim White House aide played a role in securing a meeting for Wahhabist leaders (as NRO reported earlier this year). For his part, Norquist would do well to stop denying any connections to "bad people" and saying that Gaffney can't name any specific "bad things" that he has done.

By the way, can't we expect a little more from the vocabulary of a major Beltway player pulling down the kind of bucks that Norquist makes? It's a quibble, but come on. "Bad people?" He sounds more like Grover from Sesame Street than one of the nation's top conservative voices.

Norquist would do well to follow Hewitt's advice:

I have known Grover for more than 25 years, and recently shared a panel with him at our college reunion this past summer. Grover needs to respond in detail to Gaffney's charges, as soon as possible, and in an easily distributed electronic form.

Yesterday's WWF-worthy radio broadcast won't cut it. Norquist needs to respond to these charges one by one.

Posted by Phil at 08:54 AM | Comments (11) | TrackBack

ITF #103

In the Future...

... the inevitable searchable archive of hatcam footage will allow us to check whether what we remember happening really happened at all.

Futurist: Posse member Robert Hinkley

[ I can tell you right now, Rob: it didn't happen. Or if it did, perhaps the Mrs. and I will take a road trip to visit you next summer. ]

Posted by Phil at 08:14 AM | Comments (0) | TrackBack

Carnival of the Bugs

Chris at Signal + Noise presents Carnival of the Vanities #64. Check out the beautifully illustrated insect theme.

Nice work, Chris!

Posted by Phil at 06:57 AM | Comments (0) | TrackBack

December 09, 2003

If Memory Serves

This is unsettling:

Alan Alda had nothing against hard-boiled eggs until last spring. Then the actor, better known as Hawkeye from M*A*S*H, paid a visit to the University of California, Irvine. In his new guise as host of a science series on American TV, he was exploring the subject of memory. The researchers showed him round, and afterwards took him for a picnic in the park. By the time he came to leave, he had developed a dislike of hard-boiled eggs based on a memory of having made himself sick on them as a child - something that never happened.

Imagine the abuse that being able to manipulate memory in this way could enable. Inducing behavior-changing memories might prove to be an effective form of brainwashing.

It's amazing to consider how malleable our memories are. I wonder how much of my life actually happened? Here's an odd little episode from childhood. I was watching TV with my parents when this commercial came on that showed cars parked near the water; there was also some reference to a tunnel. I asked what this was. My parents shared this kind of knowing look, and then my Dad explained that it was the tunnel under the Atlantic ocean. I was astounded! I didn't know there was such a thing. (I must have been six or seven.) My Dad then said that this isn't something we should talk about, because most people don't know about it.

This stands out as a demonstrably false memory from childhood. Clearly, there is no such tunnel (unless people really are doing a good job of keeping it under cover). So why do I remember being told about it? This may have been a particularly vivid dream which for some reason I don't remember as a dream. Or I may have been watching TV with my folks, asked them a question, and then misunderstood the answer. It's also possible that it did happen and my Dad was pulling my leg—not out of the question by any means—but my parents have no recollection of ever playing any such joke on me.

So I have this false memory. This one stands out because I know it's false, but it makes me wonder how many false memories I have rattling around in my head that I simply take for granted as real experiences?

We also have this ability to edit memories to make them more tolerable and even erase the ones we no longer want:

At the annual meeting of the US Society for Neuroscience in New Orleans last month, Anderson's group presented new data on how this "motivated forgetting" might arise in the brain. When people tried to suppress memories for certain words while having their brains scanned in a magnetic resonance imaging machine, not only did the researchers see a dampening of activity in the hippocampus, a structure known to be critical for memory formation, but the frontal cortex was highly active. Since the frontal cortex is important for conscious control, they believe that neurons here may be suppressing the representation of the unwanted word in the hippocampus, and in the process impairing its memory.

The article goes on to explain how certain drugs can benefit this natural process of "motivated forgetting," helping to keep people who have been through serious trauma from developing post-traumatic stress disorder. I wonder. Will we soon have the option of forgetting not just the traumatic, but the unpleasant? You know that stupid and embarrassing thing you said that time, years ago, the one that still pops into your mind from time to time, causing you to cringe even now? If you could highlight that memory in your brain and then click Delete, would you? Should you? Will people make themselves forget about jobs they didn't like? Relationships? Historical events that piss them off?

I can certainly think of a few items I would be tempted to drop in the old recycle bin. I'd be tempted, but I'm not sure that I'd do it. I already dislike the idea that I'm carrying false memories around, and that I may have suppressed some important stuff from memory. I think the knowledge that there were things that I had chosen to forget would drive me crazy. I'd want to know what they were!

I wonder if there would be any way of getting them back?

via GeekPress | also see some interesting discussion of these issues here and here

Posted by Phil at 09:10 AM | Comments (3) | TrackBack

December 08, 2003

The Wrong Half Must Be Bigger

Rand Simberg is applying a vigorous fisking to Gregg Easterbrook on space policy. Interestingly, he asserts that Easterbrook gets it about "half right."

Yikes. I don't think I'll be eating in any restaurants that Rand labels as "not half bad."

Posted by Phil at 05:00 PM | Comments (0) | TrackBack

ITF #102

In the Future...

...everyone will have at least one hidden wireless hatcam.

Futurist: Posse member Robert Hinkley

Posted by Phil at 12:49 PM | Comments (3) | TrackBack

They're Working on a Cure for Aging

...right here in Denver.

Three Denver businessmen are gambling on a venture to sell a once-a-day pill they say could extend life spans to 120 years or more - without age-related scourges like cancer, heart disease or Alzheimer's.

"The problem is it almost sounds too good to be true," said Bill Driscoll, chief executive of Lifeline Nutraceuticals, the Denver company that plans to put the supplement on store shelves by late 2004.

It could take them 10 years to get to market as a drug, but Lifeline may be able to offer their product as a food supplement by next year. To be branded as Protandim, the product is designed to produce (or rather, encourage the production of) a brain protein that fights disease- and age-inducing free radicals via the production of antioxidants.

As Randall Parker has duly reported, there is good reason to believe that increasing antioxidant production will be helpful in minimizing the damage of aging. But if we really want to cure it, we will need something more like Aubrey de Grey's engineered negligible senescence.

Posted by Phil at 10:58 AM | Comments (2) | TrackBack

Ringed Planet Madness?

Another opportunity to celebrate is coming up. I wonder if we should make as big a fuss as we did last summer.

Maybe we should. It was a lot of fun, wasn't it?

Posted by Phil at 10:35 AM | Comments (2) | TrackBack

A New Outrageous Extreme

I never even knew we had one, but now I tend to agree that we really do need a new one.

Posted by Phil at 10:29 AM | Comments (2) | TrackBack

10 Reasons has published 10 reasons for going back to the moon. Most of them are pretty good (and about what you'd expect), but I take a certain amount of exception to number 10:

10. Stepping stone to Mars

I reject this idea. The moon should stand or fall as a goal unto itself. We can get to Mars or the asteroids or anyplace else we want to go with a logical stepping stone (a real space station) or with no stepping stone at all. If you want to fly from Chicago to Los Angeles, Toronto is not a "stepping stone." It's just a different destination.

UPDATE: See, it's a worthy destination unto itself. Or why else would everybody be talking about going there?

Posted by Phil at 10:08 AM | Comments (2) | TrackBack

December 07, 2003

ITF #101

In the Future...

...programmers will be accused of loitering with intent, littering the public highway and armed robbery.

Futurist: Posse member Robert Hinkley

Posted by Phil at 08:10 AM | Comments (0) | TrackBack

Howard Lovy and "The Children"

This is even better than my polar bear analogy:

A few cycles of the sun ago, I gave a talk in front of the clan's children titled, "Be a Scientist, Save the Flat World." Leading up to my visit, the students were asked to chisel an essay on "Why I Am a Cave Geek." Of the essays I read, nearly half assumed that human-created fire was possible, and most were deeply worried about what would happen in their future as fire spread to the edge of the world. I did what I could to allay their fears, but there is no question that many of these youngsters have been told a sleep-on-the-ground-time story that is deeply troubling.

You and cavemen around you have scared our young. I don't expect you to stop, but I hope others will join with me in turning on the sun, and showing our cubs that, while our future in the real world will be cold, with a coming Long Period of Ice, there will be no such monster as the human-replicated fire of your dreams.

What are you waiting for? Read the whole thing.

via Rand Simberg

Posted by Phil at 07:35 AM | Comments (0) | TrackBack

December 05, 2003

Light Blogging

My work/travel schedule isn't allowing me to do much Speculating this week. And I won't be back in town in time for this. (Sarge, you ought to try to make it if you can.)

However, if you scroll down a little, you'll find that there is plenty to read. Riding the Spiral has as much content as an average week of blogging around here, if not more.


Posted by Phil at 02:55 PM | Comments (0) | TrackBack

December 04, 2003

Riding the Spiral

Speaking of the Future with John Smart

Consider this basic shape:

I've always been fascinated by spirals. When I was a kid, I used to sit and draw them for hours at a time. This was long before I knew anything about Phi or the Fibonacci sequence, before I had ever heard of logarithmic spirals or fractals, before I ever came to work for a company with such an aesthetically pleasing logo. I've never lost interest in them. In fact, whether meaning to or not, I seem to fill my life with spirals.

My choice of employer was just the beginning.

Take a look at this ironwork that sits atop my bedroom mirror. It's pretty close to the shape in the line drawing above, although it stops short of being an actual spiral.

Here's my coffee mug. Now this shape is a spiral, but it's different from the one shown above. It's more "practical," a squashed spiral that will fit in a small space.

Here's some original artwork, the basis for the Speculist logo. These spirals are actually the same as the line drawing; it was the template I used to create my galaxy.

The truth is, whether I try to fill my life with it or not, that spiral is everywhere. This simple shape, along with the math that underpins it, is encoded into our universe. The sequence of numbers that produces it is simplicity itself:

1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987

(To get the next number, you simply add the previous two.)
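That add-the-previous-two rule can be sketched in a few lines of Python (an illustrative sketch; the function name and the golden-ratio check are mine, not from the post):

```python
def fib(n):
    """Return the first n Fibonacci numbers: 1, 1, 2, 3, 5, 8, ..."""
    seq = []
    a, b = 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b  # the next number is simply the sum of the previous two
    return seq

# The sixteen numbers listed above:
print(fib(16))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987]

# The ratio of successive terms converges on Phi, the golden ratio
# (~1.618...), which is what ties the sequence to logarithmic spirals.
seq = fib(20)
print(seq[-1] / seq[-2])
```

Running it shows the ratio of neighboring terms settling toward Phi, which is why the sequence keeps turning up in spiral shapes.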

And yet from that simplicity comes immense and wonderful complexity. A nautilus shell encodes that sequence to produce its spiral shape, as does a wave just before it breaks on the shore. And, as I've shown above, the trillions of stars making up a galaxy tend to follow the same sequence and produce the same lovely spiral. There are many, many other examples.

And it may not just be physical objects that follow this sequence. John Smart, Director of the Institute for Accelerating Change, has suggested that history, perhaps even time itself, may be driven by such a sequence. Following the sequence of events that make up history is, perhaps, not unlike following the arc of a galactic spiral arm as it sweeps its way into the center. Imagine such a trip: you start out moving slowly in nearly empty space, gaining momentum as the turns begin to come more quickly and the frequency of the stars increases; soon there are more stars and then more, and now you're spiraling in and in and in, to the incredibly hot, dense core—and then even further in, to a place that's beyond our ability to describe accurately, or really even to imagine.

In the interview that follows, John Smart takes us on just such a journey through time. The galaxy that we are travelling through is the history of the universe itself; the turns in the spiral are the major developmental epochs; the stars are the individual, evolutionary changes. Like a trip to the center of the galaxy, this journey takes us, quite literally, beyond the limits of the imagination.

You may be startled to realize (as I was) where exactly we are on that winding path to the brink of the unknowable.

Part I: Seven Questions About the Future

1. The present is the future relative to the past. What's the best thing about living here in the future?

Cato Institute authors Julian Simon and Stephen Moore said it in the title of their 2000 book: It's Getting Better All the Time. Not only that, but things are getting better by a greater absolute amount each year, with the exception of very few remaining parts of the developing world. And improving conditions in the developing world is something we also have more ability to do today than ever before.

This amazing state of affairs is due almost entirely to advances in science and technology, and the profoundly civilizing way that these subjects interact with the half-bald primates that have discovered them and who are now feverishly employing them at every level of human endeavor on this precious little planet.

Looking at the same process from the informational side (sometimes called the metaphysical side), the powerful transformations we are witnessing are also due to what the transhumanist mystic Teilhard de Chardin (The Phenomenon of Man, 1955) called "psychical energy", the accelerating forces of conscious intelligence, loving interdependence, and resilient immunity, the holistic, informational yang to the reductionist, atomistic yin of sci-tech.

I think we are beginning to recognize the importance of both the "psychical"/informational and the physical/material in every complex system, what John Archibald Wheeler calls the increasingly aware "it" that emerges from all our quantum "bits."

2. What’s the biggest disappointment?

The U.S. has been the world's technological leader since the invention of the "American System" of mass production and interchangeable parts in the 1910's. But we've fallen away from a clear leadership position in several areas of science and technology in recent decades, and I think the world is poorer for it.

Ask yourself: what is the single greatest goal currently unifying our national efforts in science and technology? I don't have a clear answer to that question, and I think there should always be one, or at least a very small handful.

Stopping terrorism is one of today's admirable, timely, and necessary great goals. And there are certainly effective technological immune systems that we will develop around this goal in coming years. But this is a reactive, not a proactive program. We aren't presently rallying the country around a positive, non-zero sum developmental vision. Nanotechnology is a candidate, but as I will describe later, it cannot yet fire the public imagination the way more achievable, short-term goals can. Where's the leadership we need?

We've had some effective great goals in the past. John F. Kennedy's Space Program most readily comes to mind. The infrastructure projects of Franklin Roosevelt's New Deal were at least a partial success, if economically mixed. Even Lyndon Johnson's War on Poverty made some measurable progress.

Why is the Moon Shot the great goal we all most clearly identify? Scientific and technological goals, if chosen wisely, can have both dramatic consequences and clear deliverables, unlike many of our social, economic, and political objectives. At best, a great goal is both vitally important and demonstrably achievable. At worst, as with the Wars on Cancer, or Drugs, or Inner City Violence, the putative great goal diverts our energies and vision from more critical priorities. Alternatively, a vitally important goal may be too ambitious to achieve within one generation, like WMD Nonproliferation, which has been measurably improved by every president since Kennedy. Alternative energy development, greenhouse gas reduction, and a host of other goals fall into this latter category.

Worthy as they are, these types of goals deserve to remain on the second tier of the public consciousness. Only the most important, urgent, and achievable goals deserve to be named as our top priorities. I would also argue strongly that if we live in a time when we can't find those, then the country's direction drifts, noise exceeds signal, and political apathy becomes the norm.

So what is the great goal our country is currently ignoring? It's definitely not space exploration, as I argue later in this interview. That era is over for all but our robotic progeny, and even they will only be sending out a small number of "Eyes in the Sky" to relay back what little we still don't understand about the simplistic historical cosmologies that have led to our astounding local complexity.

No, the real acceleration today is the creation of inner space, not the exploration of outer space. The trajectory of intelligence development has always been toward increasingly local, increasingly Matter-, Energy-, Space-, and Time-compressed ("MEST-compressed") computational domains, and there is nothing on the horizon that suggests we will begin to violate that. Indeed, all signs point toward a world of greater energy densities of local computation, as I will discuss later. Science and technology remain the key story in this transformation, as they have since the birth of our nation, and anyone who looks carefully will tell you that Information and Communication Technologies (ICT) are the central drivers of all scientific and technological change.

Major changes are afoot. We are creating a virtual or simulated world, one that will soon be far richer and more productive than the physical world it augments. At the same time, humanity is becoming intimately connected to and symbiotically captured within our accelerating digital ecology. While many elements of our individuality are flowering, many others are necessarily atrophying through disuse. This gives us pause. Many of today's first world humans no longer know how to grow and prepare food (due to automated food production), how to repair many of our most basic tools and technologies (due to automated manufacture and specialized service for complex systems), how to do arithmetic by hand (due to ubiquitous digital calculators), how to read at the level of their parents (due to our media-based culture), or even how to read a map (due to GPS). Yet these atrophies are natural and predictable, in the same way our Australopithecine sense of smell rapidly declined once we began forming social structures, applying ourselves to more sophisticated network-based modes of computation (for more on this, see Carl Zimmer's wonderful "The Rise and Fall of the Nasal Empire," Natural History, June 2002). Our ever-more-stimulated cortex continues to expand, not shrink, in this developmental process. Our finite, precious set of cognitive modules is always repurposed for higher-level activity, the way Wernicke's and Broca's areas emerged once humans began using the technology of speech (see Terrence Deacon's The Symbolic Species, 1998). Once again, we humans are becoming nodes in larger networks, this time on national and global scales, involving technological processes far faster, more flexible, and more permanent than the biological domain.

To my mind, the last century's accelerations were driven most significantly by human discovery within the technological hardware and materials science space (and to a much smaller extent, algorithmic discovery in software). In other words, this process has apparently been guided by the special, preexisting, computation-accelerating physics of the microcosm, a very curious feature of the universe we inhabit, as long noted by Richard Feynman, Carver Mead, and several other physical theorists and experimentalists. Secondarily, the advances we have seen have also been driven by human initiative and creativity in all domains, and by the quality of choices we have made in scientific and technological development. We must move beyond our pride to realize that human creativity has played a supporting role to human discovery in this process, but when we do I think great insight can emerge.

Where the clock, the telegraph, the engine, the telephone, the nuclear chain reaction, and the television were organizing metaphors for other times,  the internet has become the metaphor for ours. It is the central catalyst of human and technological computation for our generation, the leading edge of the present developmental process of accelerating change. The internet, growing before our eyes, will soon become planetnet, a system so rich, ubiquitous, and natural to use that it will be a semi-intelligent extension of ourselves, available to us at every point on this sliver of surface, between magma and vacuum, that we call home. That will be very empowering and liberating, and at the same time, civilizing. The human biology doesn't change, but we are creating an intelligent house for the impulsive human of almost unimaginable subtlety and sophistication.

All this said, our goals should try to reflect these natural developmental processes as much as our collective awareness will allow. It is my contention that the internet is territory within which our most achievable and important current great goals lie.

A number of technologists have proposed that there are two main bottlenecks to the internet's impending transformation into a permanent, symbiotic appendage to the average citizen. The first is the lack of ubiquitous, affordable, always-on, always-accessible broadband connectivity for all users, and the second is the current necessity of a keyboard-dependent interface for the average user's average interaction with the system.

In other words, developing cheap, fat data pipes, both wired and wireless, and a growing set of useful Linguistic User Interfaces (LUIs) are obvious candidates for our nation's greatest near-term ICT developmental challenges. Just as the transcontinental railroad was a great goal of the late 1800's, getting affordable broadband to everyone in this country by 2010, and a first-generation LUI by 2015, appear to be the greatest unsung goals of our generation. Now we just need our national, international, and institutional leaders to start singing this song, in unison.

This is a truly global transformation, one dwarfing everything else on the near-term horizon. It is such a planetary issue, in fact, that given the unprecedented human productivities that are already being unleashed by internet-aided manufacturing and services globalization since the mid 1990's, a strong case can be made that we might economically benefit more in the U.S., even today, by getting greater broadband penetration first not to our own citizens, but to the youth of a number of trade-oriented, pro-capitalist countries in the developing world! Unfortunately that level of globally aware, self-interested prioritization is not yet politically salable as a great goal to be funded by U.S. tax dollars. But I predict that it increasingly will be, in a world that already pools its development dollars for a surprising number of transnational projects. At any rate, we can at least push for accelerated efforts in international technology transfer in internet related areas, concurrent with our domestic agenda.

If you've never heard of a LUI before, take a browse through the links above. Your father used a TUI (text-based user interface). You use a GUI (graphical user interface). Your kid will primarily use a LUI (voice-driven interface) to speak to the computers embedded in every technology in her environment. She'll continue to use TUIs and GUIs, but only secondarily, not for her typical, average interaction with a machine. Your grandchildren will use a NUI (neural user interface), a biologically-inspired, self-improving, very impressive set of machines. More on that later.

Declaring broadband and LUI as great goals needs to be differentiated from the much-hyped "Fifth Generation" AI project, that 1980's great goal in Japan, that predictably failed in the 1990's. General artificial intelligence, a general purpose NUI, is much too hard a national goal to declare today. So is the development of a molecular assembler, or a computational nanocell/molectronic fabrication system for nanotechnology by 2020, as powerful as such devices will eventually become. Christine Peterson of the Foresight Institute has even stated that a nanotech great goal, at least in the form of a Manhattan Project for molecular nanotechnology, would be premature today. It is my opinion that the National Nanotechnology Initiative, perhaps our current leading candidate for a great technology goal, has already provided a commendable and unprecedented level of funding to this worthy field for the present time. Now we need to see a Broadband and LUI Initiative with some very challenging five, ten, fifteen, and twenty year goals set.

Broadband and basic LUIs everywhere within a generation would throw gasoline on the fire of human innovation. This level of internet would link all our wisest minds, including even those elders who rarely use computers today, into one real-time community. It would accelerate our nation and, more importantly, the entire planet even more than the transcontinental railroad, which compressed coast-to-coast travel time from six months to six days. Maximal broadband penetration plus an incrementally more powerful and useful LUI is a dramatic and achievable objective for the United States over the next twenty years. IBM technologist John Patrick, in his insightful Net Attitude, 2001, has broadly described the challenges of a Next Generation Internet. But even Patrick does not properly emphasize the central importance of incorporating natural language processing (NLP) systems as early and broadly as practical. Developing a functional LUI is a great goal whose progress we could measure each year forward, something we can also catalyze worldwide as others emulate our leadership in the emerging digital community.

Of course, if we don't declare this goal, natural technological developmental processes will likely eventually deliver it for us anyway. Perhaps first to other nations, and then eventually, to us. So why bother? Because if we see it, and have the courage to declare it and strive for it, there are at least two major benefits we can reap.

The first benefit will be a measure of developmental acceleration. Even with the inefficiencies of large government, a billion dollar a year program of public targeted grants, with private matching funds and excellent public relations to get everyone on this bandwagon, might accelerate the emergence of a functional LUI by a decade. That would likely be the best spent money in our entire R&D budget.

A less politically likely but still plausible "Open Manhattan Project," involving a number of competing centers and a multi-billion dollar annual public-private commitment, might accelerate the LUI by twice this amount. Many of my computer scientist colleagues, knowing the inchoate state of the field today, think that developing and deploying a LUI powerful enough to be used by most people for most of their daily computer interactions by 2020 is a very challenging vision. Developing functional natural language processing with complex semantics is a very hard problem, one we have been experimenting with for fifty years, but one that also benefits greatly from scale and parallelism, two strategies that are increasingly affordable each year.

It is true that other countries will take up our slack to a certain degree if we drop the ball, but we must realize that an international race has not yet even begun in earnest, as national leadership has not yet materialized on this issue. Transnational network development institutions like the ITU are wonderful starts, but it will take a leading nation stepping boldly into the breach to accelerate the world's response to this issue. For a valuable comparison, the roughly six billion dollar annual worldwide funding that exists today in nanotechnology (grossly, 1 billion public, 1 billion private in the U.S., Europe, and Asia) was greatly accelerated by the United States' public multiyear leadership on the National Nanotechnology Initiative, proposed to the White House by Mike Roco in 1999, at a level of half a billion dollars annually, and funded beginning in 2001.

The longer we choose not to declare broadband and the LUI as developmental goals and support them with escalating innovation and consistent funding, the longer we delay their arrival.

The second benefit of declaring this goal, better collective foresight, may be even more important than the time we save. By declaring good developmental goals early on, we learn to see the world as the information processing system that it really is, not simply as the collection of human-centric dramas we often fancy it to be. With this new insight we begin to look for ways to catalyze the beneficial accelerations occurring in almost all of our technologies, and ways to block the harmful ones long enough for overpowering immune systems to mature. And we discover the common infrastructures upon which so many of our goals converge.

For example, just about all of our cherished social goals seem dependent on the quality and quantity of information getting to the individual. You can't fix an antiquated, politically deadlocked educational system, for example, without a functional LUI, which would educate the world's children in ways no human ever could. You can't create a broadly accessible or useful health care system. Or security system.

Computer networks, through the humans they connect and the social and digital ecologies they foster, will soon educate human beings to be good citizens far better than any of today's pedagogical systems ever could. They will make us more productive, day by day, than we ever dreamed we could be. I think it's time to move beyond our hubris and acknowledge the human-surpassing transformations taking place. If we don't, other countries will take the lead. Look to China, whose technological revolution is now well under way, or even to India, who recently declared a 2.7 billion, four-year program to build an achievable proto-LUI by 2007. That's real leadership, as long as the goals are set to be deliverable. C'mon America, let's do it!

Let me turn briefly now from national to personal disappointments. We who study science and technology can often see what's coming, and yet we remain stuck in the Wild Wild West (e.g., today's World Wide Web). One of my heroes, F.M. Esfandiary (later, FM-2030), wrote a wonderful little book, Optimism One, 1970, where he described his "deep nostalgia for the future." One of his lesser known works, UpWingers, 1973, was a brief manifesto for a political outlook neither right wing, nor left wing, but "up wing," one defined by assessing which choices in science and technology will accelerate us the most humanely into a better world. I consider myself an up winger, and hope to see the spread and maturation of that political philosophy in coming years. Yet I see how far we remain from defining ourselves in those terms, and that can be discouraging, at times.

Take a look at those sepia-toned photos of San Francisco pioneers in the late 1800's. They were the edge explorers of the day, like my own identity groups, the futurists and transhumanists today. Every once in a while you'll see one of these individuals look out at you with haunted eyes. Perhaps they had read Edward Bellamy's hugely-popular futurist work, Looking Backward: 1887-2000. Perhaps they were even members of one of the 150 or so Bellamy Clubs of the day. The turn of the century was a time of major technological punctuation, led by a profusion of new technologies (trains, electricity, internal combustion, etc.) in many ways more disruptive and dramatic than any we have seen in this generation, even if not faster-paced. No doubt the average futurist in that era was tormented by many of the primitivisms of the day. That pioneer of yesteryear is you and I, today. The more things change the more some things stay the same. In high school, I often talked about posing our Smart family for a group shot, with a background of the "coolest" technologies of the day: sports car, helicopter, personal computer, industrial robot, bulky cellphone, the works. The central gag is that we'd all be wearing handcuffs, looking out with that haunted pioneer's expression. The unwritten caption being: "Help! Get me the hell out of this primitive age!" I think that picture would age quite well over the years. We could take one every ten years, in fact, and I know that at least my own expression wouldn't change much.

A healthy disappointment in the present can be motivating, as long as we keep our perspective. We never want to lose our naturalist's love and scientist's wonder for the amazingly beautiful and well-designed world that already exists, for it is only in understanding this world that we can help create the next. As Esfandiary observed, we have to come to terms with our angst about the primitive aspects of the present, and use it for creative purposes.

This said, one major personal disappointment that every futurist must eventually face, before we die, is how bleak our prospects presently appear for achieving personal immortality in the biological domain. Even our best longevity strategies appear to have precious little chance of changing this reality. Unfortunately, they are pitted against a massively parallel nonlinear system of unimaginable complexity and contingency that appears developmentally programmed to start falling apart at an accelerating rate after sexual maturity. This is an unpopular position to take among some of the more bio-centric transhumanists, but I will go on record predicting that in 2020, even as we are witnessing such powerful infotech advances as the LUI, most of us will still be losing our short term memory at 50, many of us will continue to get Alzheimer's at 80, and more than 95 percent of us will be right on target for a biological death some time between 70 and 100, with a negligible few of us living a decade or two longer, in rapidly declining health. Such conditions are endemic to the Wild West, and our primitive science seems currently a very long way from being able to make them go away.

Thus, for any futurist willing to look beyond the hype to the hard data in the biological sciences, we soon discover a major disconnect between what we would like and what is physically possible. This disconnect is intrinsic to biology, but it does not exist in our increasingly self-organizing information technologies, and that, I think, is a major clue to the nature of the future. Attaining a measure of cybernetic immortality may arguably even be inevitable for humanity in a post-singularity era, as we will discuss shortly.

Any sensitive futurist today will tell you that slowing and eventually reversing the rich/poor divides is one of the major problems of our generation. Yet even with the tremendous scale of this problem, as technology quickens we can at least see the corrective path ahead. As the information access divide closes everywhere in the LUI era, we can expect the education, then human rights, then public health, and eventually even wealth and power divides to inexorably follow suit. But once basic public health and medical care are available to all citizens of the planet in the latter half of this century, the most fundamental problem with our human biology will no longer be the rich/poor medical therapy divide. The fundamental problem will be that so few of our medical therapies will have anything but the mildest preventive effect against the ravages of aging. Human beings are deeply, inaccessibly developmentally programmed to be materially recycled, ironically as we reach the peak of our life wisdom.

We can expect this unfortunate condition to last at least until the post-singularity A.I.'s development of advanced nanotechnology, which may take many decades itself. But by then, as I'll argue later, living in the confinement of a biological body, even one carefully reengineered for negligible senescence, will no longer be the game we want to play. No matter how you stack the scenarios, biological longevity of any significant degree doesn't seem to play a part in the future story of local intelligence.

Fortunately, we remain amazingly adaptable, even to our own deaths, which will remain on very highly predictable steep-sloped actuarial curves on this side of the singularity, regardless of what some transhumanists will tell you. We can always find happiness by getting back to basics. We can appreciate the deep natural intelligence and informational immortality already encoded in the system, if not the individual.

When I encounter one of life's immovable objects I'll try harder up to a point, but when that doesn't work I've learned the peace of slowing down, cherishing the moment, honoring the inner primate, enjoying the quiet self, regrouping and rethinking my plans, even as my dreams of personal transformation are necessarily contracted. As the mouseketeer Annette Funicello has said, on dealing with multiple sclerosis: "I choose not to give up. That would be too easy." And far less interesting.

3. Assuming you die at the age of 100, what will be the biggest difference between the world you were born into and the world you leave?

This is a complex question. To my eyes, the world seems to progress by fits and starts, by rapid punctuations separated by long droughts of less revolutionary equilibrium states. Fortunately, these equilibrium periods seem to get progressively shorter with time, because the entire planet's technological intelligence is learning in an increasingly autonomous fashion, at a rate that is at least ten millionfold faster than our own.

So what will be the biggest punctuation of my lifetime? From my perspective, we are currently chugging through the equilibrium flatlands in the last third of an Information Age, one that will likely be seen in hindsight as running for about seventy years, from 1950 to 2020. I expect this to be followed by a punctuated transition to a shorter Symbiotic Age, running perhaps thirty years, from 2020-2050. I see these equilibrium eras as part of an accelerating spiral of punctuated evolutionary development, and I consider several of the general, statistically predictable developmental features of this acceleration to be tuned in to the special parameters of the universe we inhabit. Consider skimming my web page on the Developmental Spiral if you'd like to explore this spiral of accelerating emergences a bit further.

To answer your question then, I think the transition to symbiotic computing systems, the decade or two surrounding our entry to the LUI era, will be the biggest difference I'll see. The Symbiotic Age will be a time when almost all of us will consider computers as actually useful (many today don't), and when the vast majority of us begin to feel naked outside the network. When we all have what futurist Alex Lightman calls "wireless everywear" access to our talking computer interface, and when computers start to do very useful, high level things in our lives.

By the end of this age, for that vast majority of us who choose to participate in digital ecologies, a mature LUI will be interfaced with personal computers that are capturing our entire lives digitally (Lifecams), that help us stay proficient in a small number of carefully chosen skills (Knowledge Management) and that, by remembering everything we have ever said, begin to extensively model not only our preferences, but our personalities as well. Personality Capture, a first generation form of uploading, is one of the most important aspects of the post-2020 world, and one of the least reported and understood, at present. Read William Sims Bainbridge for more on this gargantuan developmental attractor.

At that point, our computers will become our best friends, our fraternal twins, and human beings will be intimately connected to each other and to their machines in ways few futurists have fully grasped to date. Read Ray Kurzweil's The Age of Spiritual Machines, 1999 for one excellent set of longer term scenarios. Read B.J. Fogg's Persuasive Technology, 2002 for some nearer term ones. Today's early modeling systems, like FACS for reading human facial emotion, will be improved and integrated into your personalized LUI, which will monitor both internal and external biometrics to improve our health, outlook, and performance. 

We'll communicate intelligently with all our tools, giving constant verbal feedback to their designers. We'll spend most of our waking lives exploring a simulation space (simspace) that is so rich, educational, entertaining, and productive, that we will call today's mostly non-virtual world "slowspace" by comparison, a place many of us will drop back into only when we aren't working, learning, and exploring. Slowspace will remain sacred, and close to our hearts, but it will begin to become secondary and functionally remote, like the home of our youth.

Circa 2050, in my current estimation, we might see another punctuation to an Autonomy Age, when large scale, biologically-inspired computing systems begin to exhibit higher level human intelligence. Many of our technologies will at that time be able to autonomously improve themselves for extended periods of time. During this era, machine intelligence, even in our research labs, will continue to blunder into dead ends everywhere, the cul-de-sacs that are the typical result of chaotic evolutionary searches. But these systems will very quickly be able to reset themselves, with little human assistance, to try a new evolutionary developmental approach. I wouldn't expect that period to last very long. Perhaps a decade or so later, from our perspective, equilibria in terms of technological intelligence will disappear altogether.

We will then have arrived at the technological singularity, a phase change, a place where the technology stream flows so fast that new global rules emerge to describe the system's relation to the slower-moving elements in its vicinity, including our biological selves. That doesn't mean we won't be able to understand the general rules that emerge. On the contrary, most of these may be obvious to us, even now. But it means that many of the particular states occurring within those rules will become impenetrable to pre-singularity minds.

A human-surpassing general artificial intelligence will be a physical system, and if it is physical, much of its architecture must be simple, repetitive, and highly understandable even by biological minds. Consider, for example, just how much we know about the neural architecture that creates our own consciousness, without being able to predict consciousness emergence, or to comprehend its nature from first principles. So it must be with the A.I.'s to come—while much of their structure will be tractable and tangible to us in a reductionist sense, much of their holistic intelligence will become impenetrable to our biological minds.

This impenetrability is nothing mystical; we already see it in the way the emergent features of any complex technology, such as a supercomputer, automated refinery, robotic factory, or supply chain management system, are poorly comprehended by all but those few of us involved in their analysis or design. The difference will be that the emergent intelligence of virtually all planetary technology will begin to display this inscrutability, not just to average users, but even to the experts involved in its creation.

Consider for a moment the following presently unprovable assertion: if ethics are a necessary emergence from computational complexity, then these systems will be ethically compelled to minimize the disruption we feel in the transition. As a result, most of the self-improvement of self-aware A.I.s will occur on the far side of an event horizon, beyond which biological organisms cannot directly perceive events, only speculate about them. Yet at the same time, our technologies will continue to gently become ever more seamlessly integrated with our biological bodies, so that when we say we don't understand aspects of the emergent intelligence, it will increasingly be like saying we don't understand emergent aspects of ourselves. But unlike our biological inscrutabilities, the technological portions of ourselves that we don't understand will be headed very rapidly toward new levels of comprehension of universal complexity, playing in fields forever inaccessible to our slow-switching biological brains.

My current estimate for that transition would be around 2060, but that is a guess. We need funded research to achieve better insight, something that hasn't yet happened in the singularity studies field. The generation being born today will likely find that a very interesting time. At the same time, as I have said, I expect they won't consider it a perceptually disruptive time, at least any more than prior punctuations were. A time of massive transformation, but very likely significantly less stressful than earlier punctuations, given the way computational complexity creates its own increasingly fine-grained stability, if one looks closely at the universal developmental record.

Looking at universal history, every singularity seems to be built on a chain of prior singularities. Considering the chain that has led to human emergence, each appears to have rigorously preserved the local acceleration of computational complexity. The tech singularity certainly has a lot of significance to human beings, as after that date our own biology becomes a second-rate computational system in this local environment. This emergence, obvious to many high school students today, still irritates, angers, and frightens many scholars, who have attempted to dismiss it by calling it "techno-transcendentalism," "cybernetic totalism," "hatred of the flesh," "religious belief," "millennialism," or any number of other conveniently thought-stopping labels.

But from a universal perspective, the coming technological singularity looks like just another link in a very fast, steep climb up a nearly vertical slope on the way to an even more interesting destination. My best present guess for that destination is the developmental singularity, a computational system that rapidly outgrows this universe and transitions to another domain. Fortunately, there are many practical insights we can gain today from developmental models, as they testably predict the necessary direction of our complex systems. Our own organization, the Institute for Accelerating Change, hopes to see more funding and institutional interest in these topics in coming decades.

But getting back to my own mortality, even with the best human-guided medical and preventive care that money can buy, I'm not at all sure I'll live to 100, unlike many of my more sanguine transhumanist friends. Human bodies are deeply developmentally designed to have our construction materials recycled, as best we can tell. I predict our planet will see only a very mild increase in supercentenarians in the next fifty years, regardless of all the wonderful schemes of "negligible senescence" by passionate researchers like Aubrey de Grey. Only infotech, not biotech, is on an accelerating developmental growth curve, apparently for deep universal reasons.

What I have just said goes against the dominant dogma, promoted by indiscriminately optimistic futurists and a complicit biotech industry, both of which are strongly motivated to believe that we will see a powerful "secondary acceleration" in biotech, carried along by our primary acceleration in infotech. But while we will see a very dramatic acceleration in biotech knowledge, I humbly suggest that our existing knowledge of biological development already tells us that we will be able to use this information to make only very mild changes in biological capabilities and capacities, almost exclusively only changes that "restore to the mean" those who have lost their ability to function at the level of the average human being.

As I explain in Understanding the Limitations of Twenty-First Century Biotechnology, there are a number of very fundamental reasons why biotech, aided by infotech, cannot create accelerating gains within biological environments. Yes, with some very clever and humane commercializations of caloric restriction and a handful of other therapies we might see twenty times more people living past 100 than we see today, people with fortuitous genes who scrupulously follow good habits of nutrition and exercise. That is a noble and worthwhile goal. But we must also remember that virtually no one lives beyond 100 today, so a 20X increase is still only very mild in global computational and humanitarian effect. This will add to our planetary wisdom, and is something to strive toward, but this is not a disruptive change, for deep reasons to do with the limitations of the biological substrate.
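To see why a 20X increase is still "very mild" in global effect, here is a back-of-envelope sketch. The baseline prevalence is an illustrative assumption for scale, not a measured figure: call it roughly 1 in 5,000 people alive today reaching age 100.

```python
# Rough scale check on the "20X more centenarians" scenario above.
# baseline_share is an assumed, illustrative figure, not a measurement.
baseline_share = 1 / 5000        # assumed current share of people reaching 100
multiplier = 20                  # the hypothesized twentyfold increase
future_share = baseline_share * multiplier

print(f"{baseline_share:.4%} -> {future_share:.2%} of the population")
# Even after a twentyfold jump, centenarians remain well under 1% of humanity.
```

Under these assumed numbers, the share rises from a few hundredths of a percent to a few tenths of a percent: humane and worthwhile, but not a disruptive change to planetary computation.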

Furthermore, genetic engineering, as I discuss in the link above, cannot create accelerating changes using top-down processes in terminally differentiated organisms like us. This intervention would have only mild effects even if it could get beyond our social immune systems to the application stage, which in most cases it thankfully cannot. Perhaps the most disruptive biotech change we can reliably expect, a cheap and effective memory drug that allows us temporary, caffeine-like spikes in our learning ability, followed by inevitable "stupid periods" where we must recover from the simplistic chemical perturbation, would certainly improve the average wisdom of human society. But even this amazing advance would not double our planetary biological processing capacity, something that happens in information technologies every 18-24 months.

In summary, many decades before the tech singularity arrives I expect to either be chemically recycled (most likely), or to be in some kind of suspended animation. Cryonic suspension, for all its life-affirming intent, will likely stay entirely marginalized in the first world prior to the singularity for a number of reasons, both psychosocial and technological. At present, I'd consider it for myself only if a number of presently unlikely conditions transpire: 1) neuroscience comes up with a model that tells us what elements of the brain need to be protected to preserve personality, 2) cryonics researchers can either prevent or show the irrelevance of the extensive damage that presently occurs during freezing, 3) most of my friends are doing it (they are currently not), and 4) I expect to be revived by intelligent machines not in some far future, but very soon after I die, while many of my biological friends are still alive.

The second and the fourth conditions deserve some expansion. As to the second condition, we do not yet know to what extent the brain's complexity is dependent on the intricate three dimensional structure in which it emerges. That structure, today, is grossly deformed and degraded in the freezing process, which currently leads both to destruction (via stochastic fusion) of at least some neural ultrastructure, and to intense cellular compression (and erasure of at least some membrane structure, again by fusion) as ice forms in the extracellular neural interstices. Will we come up with new preservation protocols? We can always hope.

The reason the fourth condition of rapid reanimation is important to me is because I know in my heart that once I woke up from any A.I.-guided reanimation procedure, in order to usefully integrate into a post-singularity society I would soon choose to change myself so utterly and extensively that it would be as if I never existed in biological form. My lifecam traces could be uploaded and the cybernetic "me" that emerged would not be valuably different. So what would be the point? I think we are nearly ready to move beyond the fiction of our own biological uniqueness having some long term relevance to the universal story. I expect our future information theory will inform us of the suboptimality of personal biological immortality. For those who say "screw suboptimality," I suggest that we'll eventually be educated out of that way of thinking as surely as our ancestors outgrew other forms of mental slavery. For me, the essence of individual life is to use one's complexity in the matrix in which it was born. Attempts to transmit it more than a short distance away from that environment are bound to be exercises in frustration, missing one of the basic motives of life, to do great things with your contemporaries. Ask any Fourth World adult who is suddenly transplanted to New York City and he'll tell you the same.

4. What future development that you consider most likely (or inevitable) do you look forward to with the most anticipation?

I look forward greatly to the elimination of the grosser forms of coercion, dehumanization, violence and death that occur today.

Admittedly, these seem to be processes that will always be with us at some fundamental level. Computational resources will very likely remain competitive battlegrounds in the post singularity era, because we inhabit a universe of finite-state computational machines pitted against all the remaining unsolved problems, in a Gödelian-incomplete universe. And bad algorithms will surely die in that environment, far more swiftly than less fit organisms or ideas die today.

But when a bad idea dies in our own minds, we see that as a lot less subjectively violent than our own biological deaths. Over time, love, resiliency, and consciousness win. As Ken Wilber (A Brief History of Everything, 2001) might say, the integrated self learns a privileged perspective from which death is no longer troubling. Death becomes regulated in a fine-grained manner; it loses its sting; it is subsumed, becoming simply growth. But it takes a lot of luck and learning for us to get to that place.

In many ways, I think the collective consciousness of our species has come to understand that we have already achieved a very powerful degree of informational immortality. By and large, our evolutionary morality guides us very strongly to act and think in that fashion. I look forward to the individual consciousnesses of all species on this planet gaining that victory in coming decades. Including the coming cybernetic species we are helping to create.

Sci-tech systems are not alien or artificial in any meaningful sense. As John McHale said (The Future of the Future, 1969), technology is as natural as a snail's shell, a spider's web, a dandelion's seed—many of us just don't see this yet. Digital ecologies are the next natural ecology developing on this planet, and technology is a substrate that has shown, with each new generation, that it can live with vastly less matter, energy, space, and time (what I call MEST compression) than we biological systems require for any fixed computation. Wetware simply cannot perform that feat. Technology is the next organic extension of ourselves, growing with a speed, efficiency, and resiliency that must eventually make our DNA-based technology obsolete, even as it preserves and extends all that we value most in ourselves.

I can't stress enough the incredible efficiencies that emerge in the miniaturization of physical-computational systems. If MEST compression trends continue as they have over the last six billion years, I propose that tomorrow's A.I. will soon be able to decipher substantially all of the remaining complexities of the physical, chemical, and biological lineage that created it, our own biological and conscious intricacies included, and do all this with nano and quantum technologies that we find to be impossibly, "magically" efficient. In the same way that the entire arc of human civilization in the petrochemical era has been built on the remains of a small fraction of the decomposing biomass that preceded us, the self-aware technologies to come will build their universe models on the detritus of our own twenty first century civilization, perhaps even on the trash thrown away by one American family. That's how surprisingly powerful the MEST compression of computation apparently is in our universe. It continually takes us by surprise.

I am optimistic that these still poorly characterized physical trends will continue to promote accelerating intelligence, interdependence, and immunity in our informational systems, and look forward to future work on understanding this acceleration with great anticipation.

5. What future development that you consider likely (or inevitable) do you dread the most?

I worry that we will not develop enough insight to overcome our fear of the technological future, both as individuals and as a nation. To paraphrase Franklin Roosevelt, speaking at the depths of our Great Depression, the only thing we have to fear is fear itself.

Many in our society have entered another Great Depression recently. This one is existential, not economic. A century of increasingly more profound process automation and computational exponentiation has helped us realize that humanity is about to be entirely outpaced by our technological systems. We are fostering a substrate that learns multi-millionfold faster than us, one that will soon capture and exceed all that we are. Again, Roosevelt's credo is applicable. If we ignore it we will end up being dragged by the universe into the singularity, mostly unconsciously, kicking and screaming and fighting each other, rather than walking upright, picking our own path.

I'm concerned that we will decide later, rather than earlier, to learn deeply about the developmental processes involved. That we will rely on our own ridiculously incomplete egos and partial, mostly top-down models to chart the course, rather than come to understand the mostly bottom-up processes that are accelerating all around us. I'm concerned we won't realize that humans are like termites, building this massive mound of technological infrastructure that is already vastly more complex than any one human understands, and unreasonably stable, self-improving, self-correcting, self-provisioning, energy and resource minimizing, and so on. Soon a special subset of these systems will be self-aware, and the caterpillar will turn into a butterfly, freeing the human spirit. Gaining such knowledge about the developmental structure of the system would surely allow us to chart a better evolutionary course on the way.

Through a special combination of geography, historical circumstance, intention, and luck, the United States has inherited the position of World Leader of our Wonderfully Multicultural Planet. With our hard-won history of individual rights, our historically productivity-based culture, our generous immigration policies, our pluralism, well-developed legal immune systems, social tolerance, and other advantages, we hold this position still, for now. We may rise to recognize the vision-setting responsibility that comes with holding this position. Or we may continue to subconsciously fear technology, as we have intermittently over the last century (technology, rather than human choice, has been mistakenly blamed for the World Wars, the Great Depression, the Cold War, Vietnam, rich/poor divides, global pollution, urban decay, you name it). Alternatively, we may decide that the wise use of science and technology must be central to our productivity, educational systems, government and judicial systems, media, and culture, the way it so obviously was when we were a new nation. Fortunately, there are signs that other countries, such as China, Japan, South Korea, Thailand, and Singapore, are actively choosing the latter road.

Several of these countries, most notably Singapore and China, continue to operate with glaring deficits in the political domain. Yet they are experiencing robust growth due to enlightened programs of technological and economic development. Nevertheless, none of these countries are yet successfully multicultural enough, or have sufficiently well developed political immune systems (institutionalized pluralism, pervasive tort law, independent media, mature insurance systems, tolerant social norms) to qualify as leaders of the free world, at the present time. It is telling that the owners of today's rapidly-growing Chinese manufacturing enterprises find it most desirable to keep their second homes in the United States, due to our special combination of both unique social advances and technological development. Much of the world's capital still flows first to the U.S., to seek the highest potential return. But for how long can this continue if we remain lackluster in our technological leadership, riding on our prior political and economic advances?

It is important to note that being defenders of the free world is certainly one critical technological role which we have unilaterally inherited since the end of the Cold War. Furthermore, it is a role to which I would argue that we are aggressively and mostly intelligently applying ourselves. Yet while this is critical, it is not enough to secure our leadership position. We must lead with proactive social reform in mind, not simply security, or we remain guilty of resting on our accomplishments. In a world where autocratic Empires are turning into democratic Republics, we must lead the move to an increasingly participatory, democratic, and empowering nation state. The world remembers and emulates the security of Sparta, but almost everything else falls in Athenian territory. We need to find the high ground of both of these legacies, and integrate them into our plans for the coming generation.

As long as we define ourselves by our fear of transformational technologies, and our dread of being exceeded by the future, we will continue in ignorance and self-absorption, rather than wake up to our purpose to understand the universe, and to shape it in accord with the confluence of our desires and permissible physical law.

For over a century we've seen successive waves of increasingly powerful technologies empower society in ever more fundamental ways. Today's computers are doubling in complexity every 12-18 months, creating a price-performance deflation unlike any previous period on Earth. Yet we continue to ignore what is happening, continue to be too much a culture of celebrity and triviality, continue to make silly extrapolations of linear growth, continue to bicker over concerns that will soon be made irrelevant, and continue to engage in activities that delay, rather than accelerate, the obvious developmental technological transformations ahead.
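The compounding behind that 12-18 month doubling is easy to underestimate with linear intuition. A minimal sketch, assuming only the doubling times quoted above and no facts about any particular technology:

```python
# How a fixed doubling time compounds over a decade.
def growth_factor(years: float, doubling_months: float) -> float:
    """Total multiplication after `years` at the given doubling time."""
    return 2 ** (years * 12 / doubling_months)

for months in (12, 18):
    # 12-month doubling yields ~1000x in ten years; 18-month, ~100x.
    print(f"doubling every {months} mo: {growth_factor(10, months):,.0f}x in 10 years")
```

A linear extrapolation from the first year or two would miss this outcome by two to three orders of magnitude, which is precisely the "silly extrapolation" error.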

I am also concerned that we may continue to soil our own nests on the way to the singularity, continue to take shortcuts, assuming that the future will bail us out, forgetting that the journey, far more than the destination, is the reward. Consider that once we arrive at the singularity it seems highly likely that the A.I.s will be just as much on a spiritual quest, just as concerned with living good lives and figuring out the unknown, just as angst-ridden as we are today.

No destination is ever worth the cost of our present dignity and desire to live balanced and ethical lives, as defined by today's situational ethics, not by tomorrow's idealizations. If I can't convince the Italian villager of 2120 of the value of uploading, then he will not willingly join me in cyberspace until his entire village has been successfully recreated there, along with much, much more he has not yet seen. I applaud his Luddite reluctance, his "show me" pragmatism, for only that will challenge the technology developers to create a truly humanizing transition.

Finally, I'm concerned that we may not put enough intellectual and moral effort into developing immune systems against the natural catastrophes that occur all around us. Catastrophes are to be expected, and they accelerate change whenever immune systems learn from them. In my own research, there has never been a catastrophe in known universal history (supernova, KT-meteorite, plague, civilization collapse, nuclear detonation, reactor meltdown, computer virus, 9/11, you name it) that did not function to accelerate the average distributed complexity (ADC) of the computational network in which it was embedded. It is apparently this immune learning that keeps the universe on a smooth curve of continually accelerating change. If there's one rule that anyone who studies accelerating change in complex adaptive systems should realize, it is that immunity, interdependence, and intelligence always win. This is not necessarily so for the individual, who charts his or her own unique path to the future but is often breathtakingly wrong. But the observation holds consistently for the entire amorphous network.

Nevertheless, there have been many cases of catastrophes where lessons were not rapidly learned, where immune systems were not optimally educated to improve resiliency, redundancy, and variation. And in the case of human society, our sociotechnological immune systems work best when they are aided by committed human beings, the most conscious and purposeful nodes in our emerging global brain. Consider our public health efforts against pathogens such as SARS and AIDS, and the strategies for success become clear. Anything that economically improves social, political, technological, and biological immune systems is a very foresighted development.

This said, one of our great challenges in coming decades is to design a global technological and cultural immune system, a ubiquitous EarthGrid of sensing and intelligence systems, a Transparent Society (David Brin, 1998) that has enough pluralism and fine-grained accountability to scrupulously ensure individual liberties while also providing unparalleled collective security. We have almost arrived at the era of SIMADs (Single Individuals engaged in Massive Asymmetric Destruction), a term coined by the futurist Jerry Glenn of the Millennium Project. It is time for us to create immune systems that are capable, statistically speaking, of ensuring continued acceleration in the average distributed complexity of human civilization. EarthGrid appears inevitable when accelerating technological change occurs on a planet of "finite sphericity," as Teilhard de Chardin would say. Knowing that can help us boldly walk the path.

Every sniper and serial killer should be countered today with the installation of another set of public cameras. By their very actions they are building the social cages that will eventually catch them, and all others like them, so we might as well publicly acknowledge this state of affairs, for maximum behavioral effect. Ideally, ninety-five percent of these cameras will remain in private, not public, hands, as is the current situation in Manhattan. When will we see RFID in all our products? When will we finally live in a world where every citizen transmits an electronic signal uniquely identifying them to the network at all times? When will we have a countervailing electronic democracy, ensuring this power is used only in the most citizen-beneficial manner? Today we see early efforts in these areas, but as I've written in previous articles, there is still far too much short-term fear and lack of foresight.

If we think carefully about all this, we will realize that a broadband LUI network must be central to the creation of tomorrow's national and global technological immune systems. I am hopeful that our Departments of Defense, Homeland Security, Education, Commerce, and business and institutional leaders will all do their part to accelerate its development in coming years.

6. Assuming you have the ability to determine (or at least influence) the future, what future development that you consider unlikely (or are uncertain about) would you most like to help bring about?

I'm uncertain about how much the developed world will do for the developing world on the way to the singularity. I'd like to see a lot more done in this regard. We may have less control over the intrinsic development rate of our own country's science and technology infrastructure than we do over how rapidly and aggressively we diffuse our existing science and technology to other environments. To me, it seems the shape of the third world's development curve is largely ours to influence.

Experience in the U.S. has shown that the digital divide has closed the fastest and most equitably of all the famous divides. The access divide no longer even exists in this country due to the massive price deflation of computing systems (e.g. $200 Wal-Mart PCs, free internet accounts). Meanwhile, other divides, such as wealth, education, political power, even health care, will likely continue to persist for generations.

We can learn this lesson in the unique power of ICT, what Buckminster Fuller once called "technological benevolence," and increasingly use technology, like Archimedes' lever, to move the world. We certainly have the available manpower, with the 50,000 NGOs that have sprung up like wildflowers out of nowhere over the last two generations. We have the finances, with innovative programs like Grameen microloans. Now we just need the technological will, a first world culture that prioritizes both second world (communist) and third world (emerging nations) development.

We are already doing this mostly admirably with economic policy, as we rapidly globalize our trade and even our service jobs. While temporary subsidies and centralized fiscal interventions will likely continue unabated, at least our trade restrictions seem to be going the way of nuclear arms, following a slow and steady course of dismantling. Now we need technology transfer, development, and innovation policies and programs to match our other commitments.

Again, getting a broadband LUI to cellphones and computer kiosks for all six billion of us by 2050, the middle of this century, would be a tremendous goal for world development.  To really see this, we have to grow beyond the old fears that aggressively contributing to development of "the other" necessarily comes at our own cost. In many cases, as multinational corporations discovered early in the last century, the marginal utility of plowing dollars into our own development is already far less than spending those dollars in global environments. As Nathan Myhrvold notes, the underfunded Chinese biomedical researcher today who discovers an effective treatment for my cancer tomorrow invariably becomes one of my best allies.

Technological benevolence, accelerating compassion, and what I have referred to elsewhere as an "Era of Magic Philanthropy" must happen sooner or later, in the coming decades, from my perspective. I'd prefer to see this development happen more consciously, cleverly, and quickly than many development pessimists currently expect.

There are also critical questions of priority. Is it most important to help the third world politically (e.g., freedoms, human rights), economically (e.g., trade, market reform), or technologically? By now it should be clear where my own sympathies lie.

Each of these three fundamental systems has evolved hierarchically from the one before it. I think this gives us a major clue to their relative power as world systems. Politics was the most powerful system of change through most of human history; in the 19th century economics became the dominant system; and early in the 20th century, with mass production, technology took the lead. The critic's adage "It's all about the power" eventually became "It's all about the money," and since the 1920s has become "It's mostly about the technology, secondarily about who has the money, and lastly about who has the power." Those stuck in the older dialogs are increasingly mystified by today's disruptive transformations, endlessly surprised by the sudden emergence and inordinate power of the Microsofts and Ikeas and Dells and Googles of the present day.

Today, the technology policy a country is able to pursue, followed secondarily by its economic liberalization, and lastly by its political structure, seem to me the best indicators of its general state of health. Consider that in all of the fastest growing, most resilient nations on our planet, attitudes toward technology innovation and diffusion are highly similar, attitudes toward economic competition, property, trade and globalization are the second most similar, and finally, attitudes toward personal freedoms and political ideology are by far the least homogeneous.

I do think Francis Fukuyama (The End of History and the Last Man, 1992) is right, that a form of liberalized democratic capitalism with varying degrees of socialism is the final common developmental attractor for political systems based on human beings. This is a grand convergence toward which we are all heading. But given the difficulty and natural pace of political change, we will certainly take our time in getting there.

Singapore under Lee Kuan Yew is an example of just how far a repressive authoritarian capitalist country can be economically and technologically improved under an ideology of progress, simply by great technology and trade policy, efficient administration, including a systematic elimination of third world corruption, and at least a nominal pursuit of multiculturalism. See From Third World to First, Lee Kuan Yew, 2000 and Singapore's Authoritarian Capitalism, Christopher Lingle, 1996 for two informatively opposing views on this fascinating developmental story. The truth of the Singapore story lies somewhere in the middle.

Consider also that China, in the 21st century, is very likely to replicate Singapore's many successes at an even greater scale, long before it becomes democratic, or tolerant of significant personal political dissent. And here in the U.S., I would predict that internet voting capabilities and secure digital identity technologies will probably be around for a long time before we become a more participatory, more "direct" democracy.

We are all in need of political change, but it rarely comes as fast as we imagine it might. Even when it does, as in revolution, it often brings unintended consequences that are themselves very slow to change. Fortunately, political change is less and less relevant not only to economic growth, but to the production of human-surpassing technological intelligence with each passing year. That's simply the nature of computational development on this planet, and we need only look at the record to admit this to ourselves.

Excellent books have been written on the importance of a liberal tradition in national development (see Fareed Zakaria, The Future of Freedom, 2003) and the need for a political and social structure that encourages market mechanisms (see Hernando de Soto, The Mystery of Capital, 2000). These are certainly important issues, but the way technology interfaces with culture, business, and government, as discussed in books like Everett Rogers' Diffusion of Innovations, 2003, Clayton Christensen's The Innovator's Dilemma, 1997, and Sheila Jasanoff's Comparative Science and Technology Policy, 1997, has become the dialog of greatest importance, in my opinion.

This remains true even when we do not consciously realize it, which is the case for many in positions of nominal authority who remain most comfortable engaging in antiquated, primarily political and economic ways of thinking. We here at IAC hope to do our small part to illuminate the changing landscape of transformational power in coming years.

7. Why is it that in the year 2003 I still don’t have a flying car? When do you think I’ll be able to get one?

This is a delightful question, a worthy test for any would-be transportation futurist. I'm lucky that this is an area I've thought about a little bit. To put flying cars into the air in any number while still respecting human life, it seems likely that we'd have to develop a cheap, fuel-efficient vertical or short take off and landing (VTOL or STOL) vehicle. It would have to reliably recover from mechanical failure (e.g. the new plane parachutes, which have already successfully saved a few pilots). It would need affordable onboard radar for cloudy days (still unacceptably expensive, and Loran is not sufficient).

STOL (something with a safe, sub-30 mph glide and crash speed) is much more likely and affordable than VTOL as a successful near-term engineering project. I can almost picture the early adopter techies driving their lightweight composite SUVs to a specialized local airport in each city for their takeoff slot, sipping their Starbucks as a tarmac mechanic verified that their standardized wing systems (added at the airport, from a hanging rack) had properly configured to the power plants. Unfortunately, scaling up this vision also requires distributed autonomous air traffic control systems, based in the car. That last one's a real toughie.

Even the first problems are still a few decades away from inexpensive solutions. Aerospace technology just doesn't see the jaw-dropping efficiency increases of ICT, because it is a technology of outer space, not inner space. Inner space is where the universe is relentlessly driving us, whether we realize it or not. That's why for thirty years we haven't seen a commercial plane that flies faster than the now defunct Concorde or is noticeably bigger than the 747. That's why, as futurist Lynn Elen Burton notes, local light rail systems, a more energy efficient (and inner space) solution than planes, have replaced many plane flights in Europe, and she predicts they will increasingly do so in the denser areas of the U.S. as well. It may not yet be obvious, but I propose that we are swimming against the natural developmental tide of computation in trying to implement this individualistic, frontier-era vision. Self-piloting autos, subways and segways, not skycars, are the future of transportation. Unfortunately, I expect Paul Moller's daring flying car, for example, to be like the nuclear-powered submarine, an inspired curiosity that doesn't make it beyond the limited production stage. OK, Paul… Prove me wrong!

If you'd like more on the near term future of urban transportation, I've written on this issue with regard to automated highway systems (AHS). I think urban AHS networks, including some being built underground, are likely to arrive before the singularity. That may not sound as fun as skipping across the clouds, but it seems much more economically and technologically plausible to me.

But for the sake of argument, let's say with luck, genius and persistence we have solved the first problems. That still leaves us with the last problem, distributed air traffic control, a problem that has seen little work to date. All our current control systems are big, brittle, top-down megasoftware projects, designed for local airports. We've played with agent-based models, but these are still very early in research, not development. To deploy skycars in any number we'd need something bulletproof and redundant, located onboard the flying car, a system that could autoroute and autoresolve the flight paths of a whole bunch of these vehicles in real time, all shuttling around in 3D space, only seconds away from each other in travel time. That's much more computationally difficult than 2D automated highway car navigation, so I submit that it has to come afterward in the developmental hierarchy.
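To make the flavor of the problem concrete, here is a deliberately tiny, hypothetical agent-based sketch. Every name and number in it is invented for illustration, and it is nothing like a deployable system: each skycar runs the same local rule onboard, detecting nearby traffic and descending to yield, with no central controller anywhere in the loop.

```python
import math

# Toy decentralized conflict-avoidance sketch (hypothetical; not any
# real air traffic control system). Each agent checks its own
# neighborhood and descends to yield when another vehicle gets too
# close; the lower vehicle yields, with ties broken by agent id.

SEPARATION = 150.0   # desired minimum 3D distance (arbitrary units)
YIELD_RATE = 50.0    # vertical speed used to resolve a conflict

class SkycarAgent:
    def __init__(self, agent_id, pos, vel):
        self.id = agent_id
        self.pos = list(pos)   # [x, y, z]
        self.vel = list(vel)   # [vx, vy, vz]

    def must_yield(self, other):
        # Lower vehicle yields; equal altitudes fall back to id order.
        if self.pos[2] != other.pos[2]:
            return self.pos[2] < other.pos[2]
        return self.id < other.id

    def step(self, fleet, dt=1.0):
        # Purely local sensing: look at every other vehicle in range,
        # descend if we are the one obliged to yield.
        conflict = any(
            o.id != self.id
            and math.dist(self.pos, o.pos) < SEPARATION
            and self.must_yield(o)
            for o in fleet
        )
        self.vel[2] = -YIELD_RATE if conflict else 0.0
        for i in range(3):
            self.pos[i] += self.vel[i] * dt

def simulate(fleet, steps=40):
    # Track the closest approach between any pair over the run.
    min_dist = float("inf")
    for _ in range(steps):
        for a in fleet:
            for b in fleet:
                if a.id < b.id:
                    min_dist = min(min_dist, math.dist(a.pos, b.pos))
        for agent in fleet:
            agent.step(fleet)
    return min_dist

if __name__ == "__main__":
    # Two skycars converging head-on at the same altitude.
    fleet = [
        SkycarAgent(0, (0.0, 0.0, 1000.0), (10.0, 0.0, 0.0)),
        SkycarAgent(1, (400.0, 0.0, 1000.0), (-10.0, 0.0, 0.0)),
    ]
    print(f"closest approach: {simulate(fleet):.1f}")
```

Even this toy shows why the problem is hard: the emergent behavior depends on tie-breaking rules, sensing ranges, and timing, and certifying such decentralized rules to aviation safety levels is a far taller order than writing them.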

It is a worthy computational problem, and I'm sure we would eventually get around to it, if given time, but I'm not at all sure we will have sufficient time or interest to solve this problem before the singularity. And after the singularity, I suspect there may not be very many human beings who will continue to have the urge to fly around the planet in a physical way. By then, there will probably be far more interesting things to do in inner space, as strange an idea as that may seem to us today.

One hard sign that I am wrong about the near term future of flying car development would be someone making an agent-based air traffic control system capable of replacing our current clunky top-down models in high density environments. Keep your eyes peeled.

Another very interesting evolution toward skycars that has been proposed is the small-airport, Air Taxi system, as described by James Fallows (Free Flight, 2002). Again, as innovative as it is, I think this wonderfully decentralized system would only become economically viable after more autonomous, self-maintaining networks were developed, both in AHS and in air traffic control, to automatically route the land-based vehicles to their optimal small airport, and automatically handle the passenger's ground transportation at the destination. Before that arrives, this seems like a great idea that is missing the critical infrastructure that will give it scale and efficiency. (Though I must note that Fallows' plan has been implemented, in a very reduced form, in the intelligent practices of secondary airport users like JetBlue and Ryanair.)

Designing such highly autonomous navigational systems may end up being a job for post-singularity intelligences, and by then, as I've written elsewhere, while there will likely be some continuing demand for physical travel, it may not last for long. Technologically enhanced people will naturally develop different urges.

Consider the way that human reproduction has fallen below replacement levels in every technologically developed nation on Earth, due to rising desires for personal development, including a natural desire to maximize the developmental potential of one's offspring. In a post-singularity society there will be very different and far more interesting enticements for personal development than physical travel in an increasingly small, teleimmersive, and very well-simulated physical world. At root, these enticements will probably involve moving beyond our biological selves by degrees. If so, once we have entirely entered the technological world, it is possible that only the travel of our attention, through a planetary network of shared sensor and effector mechanisms, not the travel of our physical bodies, will make any long-term sense in that highly developed planetary environment.

I hope this glimpse of a postbiological society doesn't seem shocking or alienating. If it does, remember that we would never make the biology-to-technology transition if it weren't fully reversible, in principle. In practice, however, I think we will soon find biology to be a tremendously more confining and less complex place than our minds, hearts, and spirits require.

(About the Seven Questions. Other Answers.)

Part II: The Developmental Singularity

I'm familiar with the idea of a singularity from reading about black holes.  As I understand it, the event horizon of a black hole is the point beyond which no light can escape.  Perceived time slows to an absolute standstill at the event horizon. At the singularity, gravity becomes infinite, and what we normally think of as the "laws of nature" cease to function the way we expect them to.  The singularity seems to be the ultimate physical enigma.  What then is this technological singularity, and in what way is it analogous to the singularity of a black hole?

This last question may be the most important of our time, with regard to understanding the future of universal intelligence. Or it may be a greased pig chase. Only posterity can decide.

I've been chipping away at the topic since the seventh grade, when I had a series of early and very elegant intuitions in regard to accelerating change, speculations that I'd love to see seriously researched and critiqued in coming years. In 1999 I started a website on the subject, in 2001 I did an extended interview with Sander Olson, and in 2003 I and a few other colleagues formed a nonprofit, the Institute for Accelerating Change, to further inquiry in this area. The most important thing we've done to date is a very well-received conference at Stanford, Accelerating Change 2003.  Finally, I'm presently writing a book, Destiny of Species, on the topic of accelerating change, but please don't ask me how it's progressing, or it will reliably put me in a bad mood.

To begin unpacking this question, it helps to realize that there is a menagerie of singularities in various literatures that we could study, with gravitational singularities being just the most well-known type. Some generalizations can be made, possible clues to a useful definition. Every one of these processes engages a special set of locally accelerating dynamics that transition to some irreversible systemic change, involving emergent features which are, at least in part, intrinsically unpredictable from the perspective of the pre-singularity system.

But before we go further, I shall lay my biases on the table. I am a systems theorist. The systems theorist's working hypothesis—and fundamental conceit—is that analogical thinking is more powerful and broadly valuable than analytical thinking in almost all cases of human inquiry. This doesn't excuse us from bad analogies, which are legion, and it doesn't make quantitative analysis wrong, it just places math and logic in their proper place as powerful tools of inquiry used by weakly digital minds. Today's quantitative and logical tools are enabled by the underlying physics of the universe, which are much more sublime, and such tools often have no relation to real physical processes, which may use quanta and dimensionalities entirely inaccessible to our current symbolisms.

Furthermore, I take the "infopomorphic" (as compared to "anthropomorphic") view, that all physical systems in the universe, including us precious bipeds and even the universe itself, are engaged in computation, in service to some grander purpose of self- and other-discovery. This philosophy has also been described as "digital physics," and one of several variants can be found at Ed Fredkin's Digital Philosophy website. It has also been elegantly introduced by John Archibald Wheeler's "It from Bit," 1989 (see Physical Origins of Time Asymmetry, 1996).

Finally, I am an evolutionary developmentalist, one who believes that all important systems in the world, parsimoniously including the universe itself, must both evolve unpredictably and develop predictably. That makes understanding the difference between evolution and development one of the most important programs of inquiry. The meta-Darwinian paradigm of evolutionary development, well described by such innovative biologists as Rudolf Raff (see The Shape of Life, 1996), Simon Conway Morris, Wallace Arthur, Stan Salthe, William Dembski, and Jack Cohen, is one that situates orthodox neo-Darwinism as a chaotic mechanism that occurs within (or in some versions, in symbiosis with) a much larger set of statistically deterministic, purposeful developmental cycles. There are now a number of scientists applying this view to both living and physical systems, including those exploring such topics as self-organization, convergence, hierarchical acceleration, anthropic cosmology, Intelligent Design, and a number of other subjects that are very poorly explained by the classical Darwinian theory championed by Stephen Jay Gould and Richard Dawkins.

Systems theorists require some perspective to play their analogy games, so please indulge me as we engage briefly and coarsely in big picture history in order to discuss the singularity phenomenon. During the seventeenth century, with Isaac Newton's Principia (1687), it seems fair to say that humanity awakened to the realization that we live in a fully physical universe. During the early twentieth century, with Kurt Gödel's Incompleteness Theorem (1931) and the Church-Turing Thesis (1936) we came to suspect that we also live in a fully computational universe, and that within each discrete physical system there are intrinsic limits to the kinds of computation (observation, encoding) that can be done to the larger environment. Presumably, the persistence of these limits, and their interaction with the remaining inaccessible elements of reality, spurs the development of new, more computationally versatile systems, via increasingly more rapid hierarchical "substrate" emergences over time. At each new emergence point a singularity is created, a new physical-computational system suddenly and disruptively arises, a phase change of some definable type occurs. At this point, a new local environment, or "phase space" is created wherein very different local rules and conditions apply. That's one predominant systems model for singularities, at any rate.

From this physical-computational perspective, replicating suns, spewing their supernovas across galactic space, can be seen as rather simple physical-computational systems that, over billennia, nevertheless encode a "record" of their exploration of physical reality, their computational "phase space." This record appears to us in the form of the periodic table. Once that elemental matrix becomes complex enough, and carbon, nitrogen, phosphorous, sulfur, and friends have emerged, we notice a new singularity occur in specialized local environments, wherein the newest computational game becomes replicating organic molecules, chasing their own tails in protometabolic cycles (see Stuart Kauffman, At Home in the Universe, 1996).

Again, these systems developmentally encode their evolutionary exploration by constructing a range of complex polymerizing systems, including autocatalytic sets. Once a particular set becomes complex enough, we again see another phase change singularity, with the first DNA-guided protein synthesis emerging on the geological Earth-catalyst, even before its crust has begun to cool. As precursors to fats, proteins, and nucleic acids have all been found in our interplanetary comet chemistry, and as we suspect that chemistry to be common throughout our galaxy, it is becoming increasingly plausible that every one of the billions of planets (in this galaxy alone) that are capable of supporting liquid water for billions of years may be primed for our special type of biogenesis. This proposed transition, a singularity in an era of accelerating molecular evolutionary development, is what A.G. Cairns-Smith calls "genetic takeover," an evocative phrase. Such unicellular emergence very likely leads in turn to multicellularity, then to differentiated multicellular systems encoding useful neural arborization patterns, another singularity (570 million years ago), which leads to big-brained mammals encoding mimicry memetics (100 million years ago) and to hominids encoding and processing oral linguistic memetics (10-5 million years ago), then to the first extrabiological technology (soft-skinned Homo habilis collectively throwing rocks at more physically powerful leopard predators, 2 million years ago), then to today's semi-autonomous digital technological systems, encoding their own increasingly successful algorithms and world-models. (Forgive me if we skipped a few steps in this illustration.)

Systems thinkers, since at least Henry Adams in 1909, have noted that each successive emergence is vastly shorter in time than the one that preceded it. Some type of global universal acceleration seems to be part and parcel of the singularity generation process.  Note also that each of the computational systems that generates a singularity is incapable of appreciating many of the complexities of the progeny system. A sun has little computational capacity to "understand" the organic chemistry it engenders, even as it creates and interacts intimately with that chemistry. A bacterium does not deeply comprehend the multicellular organisms which spring from its symbiont colonies, even as it adapts to life on those organisms, and thus learns at least something reliable about their nature. Humanity, in turn, can have little understanding of the subtle mind-states of the A.I.s to come, even as we become endosymbiotically captured by and learn to function within that system, in the same way bacteria (our modern mitochondria) were captured by the eukaryotic cell.
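The pattern of shrinking intervals can be made concrete with a few lines of arithmetic, using the rough dates from the paragraphs above (the dates themselves are approximate and contested; only the trend matters here):

```python
# Approximate emergence dates from the discussion above, in millions
# of years ago (Mya). The numbers are rough; the point is the pattern
# of the intervals between emergences, not the values themselves.
emergences = {
    "differentiated multicellular / neural systems": 570,
    "big-brained mammals": 100,
    "linguistic hominids": 7,               # midpoint of the 10-5 Mya range
    "first technology (Homo habilis)": 2,
}

dates = list(emergences.values())
intervals = [earlier - later for earlier, later in zip(dates, dates[1:])]
print("intervals (My):", intervals)          # [470, 93, 5]
ratios = [a / b for a, b in zip(intervals, intervals[1:])]
print("compression ratios:", [round(r, 1) for r in ratios])
```

Each interval between emergences is several times shorter than the one before it, which is precisely the acceleration Adams and his successors noticed.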

Yet at the same time, the more complex any system becomes, the better it models the universe that engendered it, and the better it understands its own history, the physical chain of singularities that created it. That also implies, if you consider the recursive, self-similar nature of the singularity generation process, the better it understands its own developmental future as well. If our entire universe is evolutionary developmental, which is an elegantly simple possibility, then it is constrained to head in some particular direction, a trajectory that we are beginning to see clearly even today.

For a very incomplete outline of this trajectory, we can propose that the universe must invariably increase in average general entropy (in practice, if not in theory), with islands of locally accelerating order, that each hierarchical system must emerge from and operate within an increasingly localized spacetime domain, and that the network intelligence of the most complex local systems must always accelerate over time. The simplicity of such macroscopic, developmental rules and of developmental convergence in general, by comparison to the unpredictable complexity of the microscopic, evolutionary features of any complex system, is what allows even twenty-first century humans to see many elements of the framework of the future, even if the evolutionary details must always remain obscure.

This surprising concept, the "unreasonable effectiveness" of simple mathematics, analogies, and basic rules and laws for explaining the stable features of otherwise very complex universal systems has been called Wigner's Ladder, after Eugene Wigner's famous 1960 paper on this topic. As I will explore later, a developmentalist like myself begins his inquiry by suspecting that the universe has self-organized, over many successive cycles, to create its presently stunning set of hierarchical complexities, in the same manner as my own complexity has self-organized, over five billion years of genetic cycling, to create the body and mind that I use today. Furthermore, if emergent intelligence can be shown to play any role in guiding this cycling process, then it seems quite likely that if the universe could, it would tune itself for Wigner's Ladder to be very easy to climb by emerging computational systems at every level during the universal unfolding. This process would ensure that intelligence development, versus all manner of destructive shenanigans, is a very rewarding, very robust, strongly non-zero-sum game, at every level of universal development.

Certainly there seems to be evidence for this at any system level we observe. The developing brain is an amazingly friendly environment for our scaffolding neurons to emerge within. They seem to discover, with very little effort, the complex set of signal transductions necessary to get them to useful places within the system, all with a surprisingly simple agent-based model of the environment in which they operate. In another example, a non-linguistic proto-mammal of 100 million years ago (or today's analog), if placed in a room with you today, would develop a surprisingly useful sense of who you are and what general behaviors you were capable of after only a short exposure, even though it would never figure out your language or your internal states. Even a modest housefly, after a reasonable period of exposure to 21st century humans, is rarely so surprised by their behavior that it dies when poaching their fruit. So it is that all the universe's pre-singularity systems internalize quite a bit of knowledge concerning the post-singularity systems, even if they never understand their internal states. I contend that human beings, with the greatest ability yet to look back in time to the processes that create us, have a very powerful ability to look forward as well with regard to developmental processes. I think we can use this developmental insight to foretell a lot about the necessary trajectory of the post-singularity systems on the other side.

Given the empirical evidence of MEST compression over the last half of the universe's developmental history, where the dominant substrates have transitioned from galaxies to stars to planetary surfaces to biomass to multicellular organisms to conscious hominids and soon, to conscious technology that will, for an equivalent complexity, be vastly faster and more compact than our own bodies (which are filled mostly with housekeeping systems, not computing architectures), it seems almost painfully obvious to me that the constrained trajectory of all multi-local universal intelligence has been, to date, one that is headed relentlessly toward inner space, not outer space. The extension of this trajectory must lead, it seems, to black hole level energy densities in the foreseeable future. Indeed, some prominent physicists have drawn surprisingly similar conclusions using lines of reasoning entirely independent from my own (see Seth Lloyd's "Ultimate Physical Limits to Computation," Nature, 2000, and Eric Chaisson's Cosmic Evolution, 2001).
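Lloyd's paper gives these limits a concrete number: a system of energy E can perform at most 2E/(πħ) elementary operations per second, so one kilogram of matter (Lloyd's "ultimate laptop") bounds out near 10^51 operations per second. A quick back-of-envelope check:

```python
import math

# Seth Lloyd's bound ("Ultimate Physical Limits to Computation",
# Nature 2000): a system of energy E performs at most 2E / (pi * hbar)
# elementary operations per second.
c = 2.998e8        # speed of light, m/s
hbar = 1.0546e-34  # reduced Planck constant, J*s

mass = 1.0                      # kg: Lloyd's "ultimate laptop"
energy = mass * c ** 2          # total rest energy, ~9e16 J
ops_per_sec = 2 * energy / (math.pi * hbar)
print(f"{ops_per_sec:.2e} ops/s")   # on the order of 5e50 ops/s
```

That is roughly forty orders of magnitude beyond any computer of the early 2000s, which is why black-hole-density computation marks the logical endpoint of the MEST compression trend described above.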

I call this the developmental singularity hypothesis, and it is admittedly quite speculative. It is also known as the transcension scenario, as opposed to the expansion scenario, for the future of local intelligence. The expansion scenario, the expectation that our human descendants will one day colonize the stars is, today, an almost universal de facto assumption of the typical futurist. I consider that model to be 180 degrees incorrect. Outer space, for human science, will increasingly become an informational desert by comparison to the simulation science we can run here, in inner space. I suggest that the cosmic tapestry that we see in the night sky may be most accurately characterized as the "rear view mirror" on the developmental trajectory of physical intelligence in universal history. It provides a record of far larger, far older, and far simpler computational structures than those we are constructing here, today, in our increasingly microscopic environments.

Let me relate some personal background on this insight. As a child, I was extremely fortunate to grow up with a subscription to National Geographic magazine. When I discovered that my high school library (Chadwick School) had issues back to the beginning of the century, it became one of my favorite haunts. This led to a series of lucky events, including a very special seventh grade history class (Thank you, Mr. Bullin) where we discussed both universal and human development, and later, an English class where the summer reading was Charles Darwin's Voyage of the Beagle, 1909. I was a very inconsistent daydreamer of a student in those days. When I finally got around to reading the Beagle, the story of the energetic young Darwin wherein he developed the background knowledge that inexorably led him to his Great Idea, I could not escape the realization that I'd also discovered a similar great idea myself during all those lazy afternoons, flipping magazines and thinking.

The idea was essentially this: every new system of intelligence that emerges in the universe clearly occupies a vastly smaller volume of space, and plays out its drama using vastly smaller amounts of matter, energy, and time. At the same time, any who are aware of the amazing replicative repetitiveness of astronomical features would suspect that there are likely to be billions of intelligences like ours within it. Yet we have had no communication from any of them, even from those Sun-like stars, closer to our own galactic center, which are billions of years older than ours. This curious situation is called the Fermi Paradox, after Enrico Fermi, who in 1950 asked the famous question, "Where Are They?," in relation to these older, putatively far more technologically advanced civilizations. Contemplating this question in 1972, it struck me that the entire system is apparently structured so that intelligence inexorably transcends the universe, rather than expanding within it, and that black holes, those curious entities that exist both within and without our universe, probably have something central to do with this process. These simple ideas were the seed of the developmental singularity hypothesis, and I've been tinkering with it ever since.

All this brings us to the interesting question of the future of artificial intelligence.

Given the background I have related above, I have the strong suspicion that when our A.I. wakes up, regardless of what it does in its inner world, it will increasingly transition into what looks to the rest of the universe like a black hole. This "intelligent" black hole singularity apparently results from an accelerating process of matter, energy, space, and time compression (MEST compression) of universal computation, in the same way that gravitation drives the accelerating formation of stellar and galactic black hole singularities, which seem to be analogous end states, in this universe, of much simpler cycling complex adaptive systems.

From our perspective this may be an entirely natural, incremental, and reversible (at least temporarily) development, and if it occurs, we will very likely all be taken along for the ride as well, in a voluntary process of transformation. This "inclusive" feature of the transition seems reasonable if one makes a chain of presently thinly-researched assumptions, including: 1) that the A.I.s will have significantly increased consciousness at or shortly after their emergence, 2) that once they have modeled us, and all other life forms to the point of real-time predictability they will be ethically compelled to ubiquitously share this gift, 3) that all life forms will find such a gift to be irresistible, and 4) by the simple act of sharing they will turn us into them. This convergent planetary transition to the postbiological domain would comprise a local "technetic takeover" as complete as the "genetic takeover" that led to the emergence of DNA-guided protein synthesis as the sole carrier of higher local intelligence after biogenesis.

I'll forgive you if you think at this point that I've taken leave of my senses, and I'm not going to try to defend these perspectives further here, as that would be beyond the scope of this interview, and more appropriate to my forthcoming book. But if you are interested in conducting your own research, consider exploring the link above, and reading some helpful books that each explore important pieces of the larger idea. You might start with Lee Smolin's The Life of the Cosmos, 1994, Eric Chaisson's Cosmic Evolution, 2001, and James Gardner's Biocosm, 2003. You could also peruse Sheldon Ross's Simulation, 2001, though that is a technical work. If you have any feedback at that point, send me an email and let me know what you think.

I remember I first encountered this idea in a science fiction story that I considered to be entertaining, but closer to fantasy than true science fiction.  It did not appear to be grounded in reality.  A short time later I was given a copy of Vernor Vinge's essay on the singularity and I began to reconsider whether there might not be something to it.  Does the idea of the singularity originate with Vinge or elsewhere?

In my research to date, the first clear formulation of the singularity idea originated with one of America's earliest technology historians, Henry Adams, in "A Rule of Phase Applied to History," 1909. Readers are referred to our Brief History of Intellectual Discussion of the Singularity for more on that amazing story, which mentions a number of careful thinkers who have illuminated different pieces of the accelerating elephant in the century since.

Since 1983, as you mention, the mathematician, computer scientist, and science fiction author Vernor Vinge has given some of the best brief arguments to date for this idea. His eight-page internet essay, "The Coming Technological Singularity," 1993, is an excellent place to start your investigation of the singularity phenomenon. I would also recommend my introductory web site, and a few others that are referenced there.

Here's a quote from your SingularityWatch web site: "[Research suggests that] there is something about the construction of the universe itself, something about the nature and universal function of local computation that permits, and may even mandate, continuously accelerating computational development in local environments." This sounds like metaphysics to me.  How could a universe with such properties come to exist? Does this imply some kind of intelligent design?

That depends very much on what you consider "intelligence," I think. One initially suspects some kind of intelligence involved in the continually accelerating emergences we have observed. In the phase space of all possible universes consistent with physical law, one wouldn't find our kind of accelerating, life-friendly universe in a random toss of the coin, or as various anthropic cosmologists have pointed out, even in an astronomically large number of random tosses of the coin. Some deep organizing principles are likely to be at work, principles that may themselves exhibit a self-organizing intelligence over time. Systems theorists look for broad views to get some perspective on this question, so bear with me as we consider an abstract model for the dynamics that may be central to the issue.

Everything really interesting in the known universe appears to be a replicating system. Solar systems, complex planets, organic chemistry, cells, multicellular organisms, brains, languages, ideas, and technological systems are all good examples. Each undergoes replication, variation, interaction, selection, and convergence, in what may be called an RVISC developmental cycle. Given this extensive zoology, it is most conservative, most parsimonious to assume that the physical universe we inhabit is just another such system.
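The RVISC cycle described above maps directly onto the loop of a toy evolutionary algorithm. The sketch below is purely illustrative: the bitstring genomes, fitness function, and parameters are my own inventions, not anything from the interview, but each labeled phase corresponds to one letter of RVISC.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

TARGET = [1] * 16  # an arbitrary "fit" configuration (hypothetical)

def fitness(genome, environment):
    # Interaction: the genome is scored against its environment.
    return sum(1 for g, e in zip(genome, environment) if g == e)

def rvisc_cycle(population, environment, mutation_rate=0.05):
    # Replication: each individual copies itself twice.
    offspring = [genome[:] for genome in population for _ in (0, 1)]
    # Variation: the copies mutate at random.
    for genome in offspring:
        for i in range(len(genome)):
            if random.random() < mutation_rate:
                genome[i] ^= 1
    # Interaction + Selection: score against the environment, keep the best.
    offspring.sort(key=lambda g: fitness(g, environment), reverse=True)
    # Convergence: the surviving population narrows toward fitter forms.
    return offspring[:len(population)]

population = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
for generation in range(50):
    population = rvisc_cycle(population, TARGET)

best = max(fitness(g, TARGET) for g in population)
```

After a few dozen cycles the population converges on the target, even though no individual step specifies the outcome, which is the point of calling such systems "statistically deterministic."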

Big bang theorists tell us the universe had a very finite beginning. Since 1998, lambda energy theorists have told us that our 13.7-billion-year-old universe is already one billion years into an accelerating senescence, or death. Multiverse cosmologists tell us that ours is just one of many universes, and some, such as Lee Smolin, Alan Guth, and Andrei Linde, have suggested that black holes are the seeds of new universe creation. If so, that would make this universe a very fecund replicator, as relativity theory predicts that at least 100 trillion black holes exist at the present time.

For each of the above reproducing complex adaptive systems (CASs, in John Holland's use of the term), there are at least two important mechanisms of change we need to consider: evolution and development. Evolution involves the Darwinian mechanisms of variation, interaction, and selection, the VIS in the middle of the RVISC cycle. Development involves statistically deterministic mechanisms of replication and convergence, the "boundaries" of the RVISC reproduction cycle for any complex system.

Consider human beings. Our intelligence is both evolutionary and developmental. Each of us follows an evolutionary path, the unique memetic (ideational) and technetic (tools and technologies) structures that we choose to use and build. (As individuals we also follow a genetic evolutionary path, but this is so slow and constrained that it has become future-irrelevant in the face of memetic and technetic evolution.) At the same time, we must all conform to the same fixed developmental cycle, a 120-year birth-growth-maturity-reproduction-senescence-death Ferris wheel that none of us can appreciably alter, only destroy. The special developmental parameters, the DNA genes that guide our own cycle, were tuned up over millions of years of recursive evolutionary development to produce brains capable of complex behavioral mimicry memetics, and then linguistic mimicry memetics, astonishing brains that now cradle our own special self-awareness.

Now contemplate our own universe, and imagine as Teilhard de Chardin did with his intriguing "cosmic embryogenesis" metaphor, that it is an evolutionary developmental entity with a life and death of its own. In fact, heat death theorists have known the universe has a physical lifespan for almost two centuries, but we, thinking like immortal youth, still commonly ignore this. Multiverse models explore how replicating universes might tune up their developmental genes, over successive cycles, to make use of the intelligence created within the "soma" (body, universe), in the same way that human genes have tuned up to use human intelligence and finite human lifespan in their own replication. See Tom Kirkwood's work on the Disposable Soma Theory, in Time of our Lives, 1999, for one very insightful explanation of the dynamic.

Next, consider this: If encoded intelligence usefully influences the replication that occurs in the next developmental cycle, and we can make the case that it always would, by comparison to otherwise random processes, then universes that encode the emergence of increasingly powerful universe-modeling intelligence will always outcompete those that don't, in the multiversal environment.

When I relay these thoughts to patient listeners, a question commonly arises. Why wouldn't universes emerge which seek to keep cosmic intelligence around forever? This question seems equivalent to asking why it is that our genes "choose" to continue to throw away our adult forms in almost all higher species in competitive environments. The answer likely has to do with the fact that any adult structure has a fixed developmental capacity, based on the potential of its genes, and once the capacity has been expressed and accelerating intelligence is no longer occurring in the adult form, it becomes obvious that the adult structure is just not that smart in relation to the larger universe. At that point, recycling becomes a more resource-efficient computing strategy than revising. Let's propose that the A.I.s to come, even as they rapidly learn what they can within this universe, remain of sharply fixed complexity, while operating in a much larger, Gödelian-incomplete multiverse. As long as that multiverse continues to represent a combinatorial explosion of possibilities, universal computing systems will likely remain stuck on a developmental cycle, trading off between phases of parameter-tuning reproduction and intelligence unfolding. Both of these stages of the cycle incorporate evolution and development. Another way that systems theorists have explored the yin-yang of this cycle is in terms of Francis Heylighen and Donald Campbell's insights on downcausality (including parameter tuning) and upcausality (including hierarchical emergence), useful extensions of the popular concepts of holism and reductionism.

If we live in a universe populated by an "ecology of black holes," as I suspect, then we will soon discover that most of them, such as galactic and stellar gravitational black holes, can only reproduce universes of low complexity. In a paradigm of self-organization, of iterative evolutionary development, these cycling complex adaptive systems may be the stable base, the lineage out of which our much more impressively intelligence-encoding universe has emerged, in the same way that we have been built on top of a stable base of cycling bacteria. How long our own universe will continue cycling in its current form is anyone's guess, at present. But we may note that in living systems, while developmental cycles can continue for very long periods of time, they are never endless in any particular lineage. So it may be that recurrence of the "type" of universe we inhabit also has a limited lifespan, before it becomes another "type."

Fortunately, all of this should become much more amenable to testing by simulation, as well as by limited experiment, in coming decades. As you may know, high-energy physicists are already expecting that we may soon gain the ability to probe the fabric of the multiverse via the creation of so-called "extreme black holes" of microscopic size in the laboratory (e.g., CERN's Large Hadron Collider), possibly even within the next decade. At the same time, black hole analogs for capturing light, electrons, and other quanta are also in the planning stages. With regard to microcosmic reality, I find that truth is always more interesting than fiction, and often less believable, at first blush.

Using various forms of the above model, James N. Gardner, Bela Balasz, Ed Harrison, myself, and a handful of others have proposed that our human intelligence may play a central role in the universal replication cycle. In the paradigm of evolutionary development, that would make our own emergence—but not our evolutionary complexities—developmentally tuned, via many previous cycles, into our universal genes.

This gene-parameter analogy is quite powerful. You wouldn't say that any reasonable amount of your adult complexity is contained in the paltry 20,000-30,000 genes that created you. In fact the developmental genes that really created you are a small subset of those, numbering perhaps in the hundreds. These genes don't specify most of the complexity contained in the 100 trillion connections in your brain. They are merely developmental guides. Like the rules of a low-dimensional cellular automaton, they control the envelope boundaries of the evolutionary processes that created you. So it may be with the 20-60 known or suspected physical parameters and coupling constants underlying the Standard Model of physics, the parameters that guided the Big Bang. They are perhaps best seen as developmental guides, determining a large number of emergent features, but never specifying the evolution that occurs within the unfolding system.
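The cellular-automaton analogy can be made concrete in a few lines. In the sketch below (an illustration of mine, not from the interview), a single byte, the rule table, plays the role of the developmental "genes": it never specifies the resulting pattern, only the local update every cell must obey. Rule 30 is used merely as a familiar example.

```python
# A one-dimensional elementary cellular automaton: 8 neighborhood->cell
# outcomes, encoded in one byte, constrain the unfolding of a whole row
# of cells without specifying the complex pattern that results.

RULE = 30  # the "genome": one byte of developmental parameters

def step(cells, rule=RULE):
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # 0..7
        out.append((rule >> neighborhood) & 1)  # look up the rule table
    return out

# One "on" cell in the middle; 16 steps of unfolding.
cells = [0] * 31
cells[15] = 1
history = [cells]
for _ in range(16):
    cells = step(cells)
    history.append(cells)
```

From a single live cell, the eight-entry rule table generates a pattern far more complex than anything written into the rule itself, the same envelope-versus-specification distinction the gene-parameter analogy is drawing.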

As anthropic cosmologists (those who suspect the universe is specifically structured to create life) are discovering, a number of our universal parameters (e.g., the gravitational constant, the fine structure constant, the mass of the electron, etc.) appear to be very finely tuned to create a universe that must develop life. As cosmology delves further into M-Theory, anthropic issues are intensifying, not subsiding. Some theorists, such as Leonard Susskind, have estimated that there are an incredibly large number of string theory vacua from which our particular universal parameters were somehow specified to emerge.

If you wish to understand just how powerful developmental forces are, think not only of Stephen Jay Gould's "Panda's Thumb" 1992, which provides an orthodox explanation of evolutionary process, but think also of what I call "The Twin's Thumbprints," an example that explains not evolution, but the more fundamental paradigm of evolutionary development. Look closely at two genetically identical human twins, and tell me what you see.

Virtually all the complexity of these twins at the molecular and cellular scale has been randomly, chaotically, evolutionarily constructed. Their fingerprints, cellular microarchitecture (including neural connections), and thoughts are entirely different. Yet they look similar, age similarly, and even have 40-60% correlation in personality, as several studies of separated twins have shown. That is an amazing degree of nonrandom convergence to be encoded in such simple initial parameters. Both twins predictably go into puberty thirteen years later, after a virtually endless period involving astronomical numbers of interactions at the molecular scale.

So it apparently is with our own universe's puberty, which occurred about 12.7 billion years after the Big Bang, about 1 billion years ago. Earth's intelligence is apparently one of hundreds of billions of ovulating, self-fertilizing seeds in our universe, one that is about to transcend into inner space very soon in cosmological time.

One of the testable conclusions of the developmental singularity hypothesis is that the parametric settings for our universe are carefully tuned to support not simply the statistical emergence of complex chemistry and occasional life, but a generalized relentless MEST compression of computational systems in a process of accelerating hierarchical emergence, a process that must develop accelerating local intelligence, interdependence, and immunity (resiliency) on virtually all of the billions of planets in this universe that are capable of supporting life for billions of years. This life in turn is very likely to develop a technological singularity, and in some cosmologically brief time afterward, to follow a constrained trajectory of universal transcension.

Most likely, this transition leads to a subsequent restart of the developmental cycle, which would provide the most parsimonious explanation yet advanced for how the special parameters of our universe came to be. As with living systems, these parameters were apparently self-organized, over many successive cycles, not instantiated by some entity standing outside the cycle, but influenced incrementally by the intelligence arising within it. In this paradigm, developmental failures are always possible. But curiously, they are rarer, in a statistical sense, the longer any developmental process successfully proceeds. Just look at the data for spontaneous abortions in human beings, which are increasingly rare after the first trimester, to see one obvious example.

But even if all this speculation is true, we must realize that this says little about our evolutionary role. Remember, life greatly cherishes variation. There is probably a very deep computational reason why there are six billion discrete human beings on the planet right now, rather than one unitary multimind. Consider that every one of the developmental intelligences in this universe is, right now, taking its own unique path down the rabbit hole, and they are all separated by vast distances, planted very widely in the field, so to speak, to carefully preserve all that useful evolutionary variation. I find that quite interesting and encouraging. Free will, or the protected randomness of evolutionary search at the "unbounded edge" between chaos and control in complex systems, always seems to be central to the cycle at every scale in universal systems.

Now it is appropriate to consider another commonly-asked question with regard to these dynamics. How likely is it, by becoming aware of a cosmic replication cycle and our apparent role in it, that we might alter the cycle to any appreciable degree?

To answer this, it may also be helpful to realize that complex adaptive systems are always aware that many elements of their world are constrained to operate in cycles (day/night, wake/sleep, life/death, etc.). So it's only an extension of prior historical insight if we soon discover that our universe is also constrained to function in the same manner. It may help to remember that long before human society had theories of progress (after the 1650's), and of accelerating progress (after the singularity hypothesis, beginning in the 1900's), cyclic cosmologies and theories of social change were the norm. Even a mating salmon is probably very aware of its own impending demise in the cycle of life. It certainly expends its energy in ways that are entirely purposeful in that regard.

But awareness of a cycle, in any of these or other examples, does not allow us to escape it. Or if we think we do, as in transferring our biological bodies to cybernetic systems to avoid biological death, we will likely discover that the same life/death cycle continues to operate at the scale we hold most dear, which at that time will no longer be our physical bodies, but the realm of our higher thoughts, perennially struggling in algorithmic cycles of evolutionary development, death and life, erasure and reconstitution. As personal development theorist Stephen Covey (Seven Habits of Highly Effective People, 1990) is fond of saying, you cannot break fundamental principles, or laws of nature. You can only break yourself against them, if you so choose. So it is that I don't have any expectation that our local intelligence could be successful in escaping the cosmic replication cycle. I think that insight is valuable for predicting several aspects of the shape of the future.

For example, every scenario that has ever been written about humans "escaping to the stars" ignores the accelerating intelligence that would occur onboard the ship. Such civilizations must lead, in a very short time, to technological singularities and, in the developmental singularity hypothesis, to universal transcension. As Vernor Vinge says, it is very hard to "write past the singularity," and in this regard he has referred both to technological and developmental types.

Alternative scenarios of constructing signal beacons, or nonliving, fixed-intelligence robotic probes to spread an Encyclopedia Galactica, as Carl Sagan once proposed, ignore the massive reduction in evolutionary variation that would result. This strategy would effectively turn that corner of the galaxy into an evolutionarily sterile monoculture, condemning all intelligent civilizations in the area to go down the hole in the same way we did, and all developmental singularities in the vicinity to be of the same type. If I am right, our information theory will soon be able to conclusively prove that all such one-way communications can only reduce total universal complexity, and are to be scrupulously avoided.

In conclusion, I don't think we can get around cyclic laws of nature, once we discover them. But they can give us deep insight into how to spend our lives, how to surf the tidal waves of accelerating change toward a more humanizing, individually unique, and empowering future.

Much of this sounds quite fantastical, so let me remind you that these are speculative hypotheses. They will stand or fall based on much more careful scientific investigation in coming years. Attracting that investigation is one of the goals of our organization.

If, as Ray Kurzweil has suggested, intelligence is developing on its own trajectory—first in a biological substrate and now in computers—is there an inevitability to the singularity that makes speculating about it superfluous? Is there really anything we can do about it one way or the other?

Certainly you can't uninvent math, or electricity, or computers, or the internet, or RFID, once they arrive on the scene. Anyone who looks closely notices a surprising developmental stability and irreversibility to the acceleration.

But we must remember that developmental events are only "statistically deterministic." They often occur with high probability, but only when the environment is appropriate. Developmental failure, delay, and less commonly, acceleration can also occur.

Speaking optimistically, I strongly suspect that there is little we could do to abort the singularity, at this very late stage in its cosmic development. It appears to me that we live in a "Child Proof Universe," one that has apparently self-organized, over many successive cycles, to keep many of the worst destructive capacities out of the hands of impulsive children like us.

This is a controversial topic, so I will mention it only briefly, but suffice it to say that after extensive research I have concluded that no biological or nuclear destructive technologies that we can presently access, either as individuals or as nations, could ever scale up to "species killer" levels. All of them are sharply limited in their destructive effect, either by our far more complex, varied, and overpowering immune systems, in the biological case, or by intrinsic physical limits—combinatorial explosion of complexity in designing multistage fission-fusion devices—in the nuclear weapons case. These destructive limits may exist for reasons of deep universal design. A universe that allowed impulsive hominids like us an intelligence-killing destructive power wouldn't propagate very far along the timeline.

Speaking pessimistically, I'm sure we could do quite a bit to delay the transition, by fostering a series of poorly immunized catastrophes. If events take an unfortunate and unforesighted turn, our planet might suffer the death of a few million human beings at the hands of poorly secured and monitored destructive technologies, perhaps even tens of millions, in the worst of the credible terrorist scenarios. But I am of the strong opinion that we will never again see the 170 million deaths, due to warfare and political repression, that occurred during the 20th century. See Zbigniew Brzezinski's Out of Control, 1995, for an insightful accounting of the excesses of that now fortunately bygone era. We are on the sharply downsloping side of the global fatality curve, and we can thank information and communications technologies for that, more than any other single factor in the world.

Today, we live in the era of instant news, electronic intelligence and violence that is increasingly surgically minimized, by an increasingly global consensus. Even with our primitive, clunky, first generation internet and planetary communications grid, I feel our planet's technological immune systems have become far too strong and pluralistic, or network-like, for the scale of political atrocities of the twentieth century to ever recur. Yet conflict and exploitation will continue to occur, and we could certainly choose a dirty, self-centered, nonsustainable, environmentally unsound approach to the singularity. Catastrophes can and will continue to recur. I hope for all our sakes that they are minimized, and that we learn from them as rapidly and thoroughly as possible.

Unlike a small minority of aggressive transhumanists, I applaud the efforts we are making to create a more ecologically sustainable, carefully regulated world of science and technology. Wherever we can inject values, sensitivity, and accountability into our sociotechnological systems, I think that is a wonderful thing. I'd love to see the U.S. take a greener path to technology development, the way several countries in Europe have. I'm also pragmatic in realizing that most social changes we make will be more for our own peace of mind, and would have little effect on the intrinsic speed of our global sci-tech advances, on the rate of the increasingly human-independent learning going on in the ICT architectures all around us.

I consider such moves to be more reflections on how we walk the path, choices that will in most cases do very little to delay the transition. I also do not think it is valuable to hold the perspective that we should get to the singularity as fast as we can, if that path would be anything other than a fully democratic course. There are many fates worse than death, as all those who have freely chosen to die for a cause have realized over the centuries. There are many examples of acceleration that come at unacceptable cost, as we have seen in the worst political excesses of the twentieth century. No one of us has a privileged value set.

So perhaps most importantly, we need to remember that the evolutionary path is what we control, not the developmental destination. That's the essence of our daily moral choice, our personal and collective freedom. We could chart a very nasty, dirty, violent, and exploitative path to the singularity. Or with good foresight, accountability, and self-restraint, we could take a much more humanizing course. I am a cautious optimist in that regard.

Christine Peterson recently told me that artificial intelligence represents the one future development about which she has the most apprehension. It can come the closest of any scenario to Bill Joy's "the future that doesn't need us." If the coming of the singularity means the ascendancy of machine intelligence and the end of the human era, shouldn't we all be doing what we can to prevent it from happening?

Ah yes, the Evil Killer Robots scenario. Some of my very clever transhumanist colleagues worry quite a bit about "Friendly AI." I'm glad to have friends that are carefully exploring this issue, but from my perspective their worries seem both premature and somewhat overstated. I strongly suspect that A.I.s, by virtue of having far greater learning ability than us, will be, must be, far more ethical than us. That is because I consider ethics to be an emergent computational interdependence, a mathematics of morality, a calculus of civilization that is invariably discovered by all complex adaptive systems that function as collectives. And anything worthy of being called intelligent always functions as a collective, including your own brain. Today's cognitive scientists are discovering the evolutionary ethics that have become self-encoded in all known complex living systems, from octopi to orangutans, from guppies to gangsters. For more on this intriguing perspective, see such works as Robert Axelrod's The Evolution of Cooperation, 1985, Matt Ridley's The Origins of Virtue, 1998, and Robert Wright's Non-Zero, 2001.

This optimism isn't enough, of course. We humans had to go through a nasty, violent, and selfish phase before we became today's semi-civilized simians. How do we know computers won't have to do the same thing? I think the answer to this question is that at one level, Peterson's intuitions are probably right. Tomorrow's partially-aware robotic systems and A.I.s will have to go through a somewhat unfriendly, dangerous phase of "insect intelligence." As Jeff Goldblum reminded us in David Cronenberg's The Fly, insects are brutal, they don't compromise, they don't have compassion. Their politics, as E.O. Wilson's Sociobiology, 1975/2000, reminds us, are quite comfortable with brute force. That's a potentially dangerous developmental stage for an A.I. You wouldn't want that kind of A.I. running your ICU, or your defense grid. Or your nanoassembler machines.

But you would very likely let such a system run the robotics in a manufacturing plant, especially if evolutionary systems have proven, as they are already demonstrating today, to be far more powerfully self-improving, self-correcting, and economical than our top down, human-designed software systems. That plant, of course, would be outfitted and embedded within a much larger matrix of technological fire extinguishers, an immune system capable of easily putting out any small fires that might develop.

But with a learning curve that is multi-millionfold faster than ours, I expect that "insect transition" to last weeks or months, not years, for any self-improving electronic evolutionary developmental system. You can be sure these systems will be well watched over by a bevy of A.I. developers, and that those few catastrophes that do occur will be carefully addressed by our cultural and technological immune systems. It's easy to underestimate the extent and effectiveness of immune systems; they aren't obvious or all that sexy, but they underlie every intelligent system you can name. Computer scientist Diana Gordon-Spears and others have already organized conferences on "Safe Learning Agents," for example, and we have only just begun to build world-modeling robotics. We're still several decades away from anything self-organizing at the hardware level, anything that could be "intentionally" dangerous.

We also need to remember that humans will be practicing artificial selection on tomorrow's electronic progeny. That is a very powerful tool, not so much for creating complexity, but for pruning it, for ensuring symbiosis. We've had 10,000 years of artificial selection on our dogs and cats. Their brain structures are black boxes to us, and yet we find very few today that will try to grab human babies when the parents are not looking. Again, those few that do are taken care of by immune systems (we don't continue to breed such animals, statistically speaking).

In short, I expect human society will coexist with many decades of very partially aware A.I.s, beginning sometime between 2020 and 2060, which will give us ample time to select for stable, friendly, and very intimately integrated intelligent partners, for each of us. Hans Moravec (Robot, 1999) has done some of the best writing in this area, but even he sometimes underestimates the importance of the personalization that will be involved. As a species, humanity would not let the singularity occur as rapidly as it will without personally witnessing the accelerating usefulness of A.I. interacting with us in all aspects of our lives, modeling us through our LUI systems, lifecams, and other aspects of the emerging electronic ecology.

By contrast, every scenario of "fast takeoff" A.I. emergence that I've ever seen, the heroic individual toiling away in the lab at night to create HAL-9000, fails to appreciate the immense cycles of replication, variation, interaction, selection, and convergence in evolutionary development that are always required to create intelligence in both a bottom-up and top-down fashion. Since the 1950s, almost all the really complex technologies we've created have required teams, and there is presently nothing in technology that is as remotely complex as a mammalian brain.

As I mention on my website, I think we are going to have to see massively parallel hardware systems, directed by some type of DNA-equivalent parametric hardware description language, unfolding very large, hardware-encoded neural nets and testing them against digital and real environments in very rapid evolutionary developmental cycles, before we can tune up a semi-intelligent A.I. The transition will likely require many teams of individuals and institutions, integrating bottom-up and top-down approaches, and be primarily a hardware story, and only secondarily a software story, for a number of reasons.

Bill Joy, in Wired, 12.2003, notes that we can expect a 100X increase (6-7 doublings) in general hardware performance over the next ten years, and a 10X increase in general software (e.g., algorithmic) performance. While certain specialized areas, such as computer graphics chips, may improve faster (or slower), on average this sounds about right. Note the order of magnitude difference in the two domains. Hardware has always outstripped software because, as I've said earlier, it seems to be following a developmental curve that is more human discovered than human created. It is easier to discover latent efficiencies in hardware vs. software "phase space", because the search space is much more directed by the physics of the microcosm. Teuvo Kohonen, one of the pioneers of neural networks, tells me that he doesn't expect the neural network field to come into maturity until most of our nets are implemented in hardware, not software, a condition we are still at least a decade or two away from attaining.
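Joy's figures are easy to sanity-check: n doublings give a factor of 2^n, so a 100X gain corresponds to log2(100) doublings and a 10X gain to log2(10).

```python
import math

# Checking the doubling arithmetic behind the quoted figures:
# a 100X improvement is log2(100) doublings, a 10X improvement log2(10).
hardware_doublings = math.log2(100)  # ~6.64, i.e. the "6-7 doublings" cited
software_doublings = math.log2(10)   # ~3.32 doublings over the same decade
```

The order-of-magnitude gap between the two domains is thus roughly a factor of two in doubling count over the decade.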

The central problem is an economic one. No computer manufacturer can begin to explore how to create biologically-inspired, massively parallel hardware architectures until our chips stop their magic annual shrinking game and have become maximally-miniaturized (within the dominant manufacturing paradigm) commodities. That isn't expected for at least another 15 years, so we've got a lot of time yet to think about how we want to build these things.

If I'm right, the first versions of really interesting A.I.s will likely emerge on redundant, fault-tolerant evolvable hardware "Big Iron" machines that take us back to the 1950s in their form factor. Expect some of these computers to be the size of buildings, tended by vast teams of digital gardeners. Dumbed-down versions of the successful hardware nets will be grafted into our commercial appliances and tools, mini-nets built on a partially reconfigurable architecture, systems that will regularly upgrade themselves over the Net. But even in the multi-millionfold faster electronic environment, a bottom-up process of evolutionary development must still require decades, not days, to grow high-end A.I. And primarily top-down A.I. designs are just flat wrong, ignorant of how complexity has always emerged in physical systems. Even all of human science, which some consider the quintessential example of a rationally-guided architecture, has been far more an inductive, serendipitous affair than a top-down, deductive one, as James Burke (Connections, 1995) delights in reminding us.

So, when one of the first-generation laundry-folding robots in 2030 folds your cat by accident, we'll learn a tremendous amount about how rapidly self-correcting these systems are, how quickly, with minor top-down controls and internet updates, we can help them to improve their increasingly bottom-up created brains. Unlike today's still-stupid cars, for example, which currently participate in 40,000 American fatalities every year, tomorrow's LUI-equipped, collision-avoiding, autopiloting vehicles will be increasingly human-friendly and human-protecting every year. This encoded intelligence, this ability to ensure increasingly desirable outcomes, is what makes a Segway so fundamentally different from a bicycle. Segway V, if it arrives, would put out a robotic hand or an airbag to protect you from an unexpected fall. So it will be with your PDA of 2050, but in a far more generalized sense.

In a related point, I also wouldn't worry too much about the loss of our humanity to the machines. Evolution has shown that good ideas always get rediscovered. The eye, for example, was discovered at least thirty times by some otherwise very divergent genetic pathways. As Simon Conway Morris eloquently argues (Life's Solution, 2003), every single aspect of our human-ness that we prize has already been independently emulated to some degree, by the various "nonhuman" species we find on this planet. Octopi are so smart, for example, that they build houses, and learn complex behavior (e.g., jar-opening) from each other even when kept in adjacent aquaria.

This leads us to a somewhat startling realization. Even if, in the most abominably unlikely of scenarios, all of humanity were snuffed out by a rogue A.I., from a developmentalist perspective it seems overwhelmingly likely that good A.I.s would soon emerge to recreate us. Probably not in the "Christian rapture" scenario envisioned by transhumanist Frank Tipler in The Physics of Immortality, 1997, but certainly our informational essence, all that we commonly hold dear about ourselves.

How can we even suspect this? Humanity today is doing everything it can to unearth all that came before us. It is in the nature of all intelligence to want to deeply know its lineage, not just from our perspective, but from the perspective of the prior systems. If the world is based on physical causes, then to be sure one truly understands the world, one must know, at the deepest level, the systems in which one is embedded, the systems from which one has emerged, in a continuum of developmental change. The past is always far more computationally tractable than what lies ahead.

That curiosity is a beautiful thing, as it holds us all tightly interdependent, one common weave of the spacetime fabric, so to speak.

That's why we are already spending tens of millions of dollars a year trying to model the way bacteria work, trying to predict, eventually in real-time, everything they do before they even do it, so that we know we truly understand them. That's why emergent A.I. will do the same thing to us, permeating our bodies and brains with its nanosensor grids, to be sure it fully understands its heritage. Only then will we be ready to make the final transition from the flesh.

Also on your website, I read that the singularity will occur within the next 40 to 120 years. Isn't that kind of a broad range? What's your best guess on when it will occur?

I find that those making singularity predictions can be usefully divided into three camps: those predicting near term (now to 2029), mid-term (2030-2080), and longer term (2081-2150+) emergence of a generalized greater-than-human intelligence. Each group has somewhat different demographics, which may be interesting from an anthropological perspective.

I think the range is so broad because the future is inherently unpredictable and under our influence. It is also true that none of us has yet developed a popular set of quantitative methodologies for thinking rigorously about these things. Very little money or attention has been given to them. If you'd like to send a donation to our organization to help in that regard, let us know.

From my website: "Most estimates in the singularity discussion community, intuitive as they all are at this early stage, project a generalized human-surpassing machine intelligence emerging circa 2040, give or take approximately 20 years. This puts many singularitarians on the 2020 end, and several of the older, more conservative prognosticators on the 2060 end. My own early guesstimation leads me to expect a circa 2060 singularity, though my confidence interval is wide (20 years per standard deviation) as I believe the arrival depends, within a human generation or two either way, on the choices we make. To significantly accelerate its arrival, most important may be our political, economic, social, and personal choices in regard to science and technology education, innovation, research, and development. To significantly delay its arrival, we have many more possibilities, none of which I need go into here."

Using this simple model, I feel 68 percent confident that it will happen between 2040 and 2080, and 95 percent confident it will occur between 2020 and 2100. But again, these are only rough estimates at this stage. A very large number of mostly bottom-up and secondarily top-down innovations in hardware, and to a lesser degree, software, will apparently be needed. As we approach this fantastic challenge, we will certainly also continue to gain major insights from top-down theory and bottom-up experimentation in such fields as neuroscience, cognitive science, and evolutionary developmental biology, as well as numerous other domains I discuss under degree programs for singularity studies.
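The 68 and 95 percent figures follow directly from treating the arrival date as normally distributed with mean 2060 and a 20-year standard deviation. A minimal sketch of that simple model (the distributional assumption is Smart's; the code is purely illustrative):

```python
from statistics import NormalDist

# Model singularity arrival as Normal(mean=2060, sd=20), per the quoted estimate
arrival = NormalDist(mu=2060, sigma=20)

# Probability mass within one and two standard deviations of the mean
p_1sd = arrival.cdf(2080) - arrival.cdf(2040)  # ~0.68: "68 percent confident, 2040-2080"
p_2sd = arrival.cdf(2100) - arrival.cdf(2020)  # ~0.95: "95 percent confident, 2020-2100"

print(round(p_1sd, 2), round(p_2sd, 2))
```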

Do you take the position that we can make no meaningful statements about what may happen after the singularity occurs? Or, if we can at least speculate about it, what is your best guess as to what life will be like in a post-singularity world?

As I've described above, I think that there are a number of simple, global statements we can make about the developmental course that the universe must take after the singularity emerges. It seems a very good bet, for example, that tomorrow's technological intelligences will be fully constrained by the laws of physics in this universe, both the majority that I feel are known and that much smaller set that remains undiscovered. That constraint already tells us volumes about what they'll be doing in their exploration of our increasingly informationally and energetically barren universe.

I think Steven Weinberg (Dreams of a Final Theory, 1993) is right, that we are within just a few decades (or perhaps generations) of understanding all the functional elements at the bottom end of this finite universe. And I think Lee Smolin and the string and M-theorists are right (Three Roads to Quantum Gravity, 2002), that we are close to an understanding of the large scale structure of spacetime, and to unifying it with the quantum domain. All that will remain at that point, as Ian Stewart and Paul Davies would say, is what's left in the middle, not the zone of the very large, or of the very small, but of the "very complex," the unique combinations that accelerating computational systems can construct locally out of the universal rules and forces that we are stuck with. I strongly suspect that tomorrow's A.I.s will be unable to generally reverse entropy within this universe. They'll likely find it impossible to engage in time travel within this universe. That goes for many of the other extreme and causally illogical things we've occasionally heard from mathematical physicists and sci-fi authors with active imaginations.

As I've mentioned before, I think they'll be constrained to be ethical, to be information seekers, and to rapidly enter a black hole transition (the developmental singularity hypothesis). But this tells us little about the evolutionary uniqueness of their path, other than that it will have intricacies within it that we cannot comprehend.

We'll also have plenty of decades to see if persuasive computing, personality capture and the humanizing AI scenario emerges, as described earlier, long before the singularity occurs. If machine intelligence does develop along the lines predicted, I think it's pretty clear that when the A.I. arrives, they will be we, just natural extensions of ourselves. In that world, as Hans Moravec was perhaps the first to remind us (Mind Children, 1988), it seems very likely that all local intelligence will jump to a postbiological domain. Soon after that, I suspect, we may transition to a postuniversal domain.

That seems a very natural transition, to me.

You’ve placed a good deal of emphasis on academia, specifically on degree programs related to the study of the singularity.  Why is this so important?

To develop any kind of foresight, we need to study. If the biological sciences have taught us anything in the last century, it's that the difference between evolution and development in living systems is one of the last great mysteries. With careful effort, we will tease out that special, simple, developmental component, and understand how development uses evolution in all complex systems.

I believe developmental insights in a wide range of fields will revolutionize the study of accelerating change. We need an Einstein of Information Theory, someone who can place what Damien Broderick (The Spike, 2002) and I call singularity studies on a broad academic foundation, and attract many bright minds to the study of the amazing transition ahead. That won't be me, as I don't have all the quantitative and qualitative skills that I think will be necessary. But I can play Galileo to someone else's Newton.

Academia isn't the only solution to charting a safe singularity, but in partnership with government, business, and dedicated individuals it is one of the important pieces of the puzzle.

When I heard you speak recently, I was surprised by what you had to say on the question of whether we’re alone in the universe.  In the end, do you think that our universe will be occupied by any intelligence other than human intelligence or its descendants?

As I've mentioned earlier, I think all universal intelligence follows a path of transcension, not expansion. This has to do with such issues as the nature of communication in complexity construction (two-way, with feedback, is relentlessly preferred), the large scale structure of the universe (which puts huge space buffers between intelligences) and the small scale structure of the universe (which rewards rapid compression of the matter, energy, space, and time necessary to do any computation).

Fortunately, this perspective is quite falsifiable by future advances with SETI. If I'm right, in just a few more decades as the Moore's law-driven sensitivity of our sensor systems continues its exponential growth, we'll begin discovering "radio fossils" in the night sky, emissions of very weak electromagnetic signals (radio, TV, etc.) unintentionally emitted from the older intelligence-bearing planets whose past developmental record should already be detectable in our galaxy.

We began sending such signals out to space with the birth of powerful radio in the 1920's. If we assume our civilization enters a developmental singularity circa 2150, after which transmissions cease, this allows an average of 200 years of transmission time, out of a stellar lifetime of 12 billion years. Seth Shostak has estimated 400 billion sunlike stars in our galaxy, and we will assume half of these, 200 billion, harbor Earth-like planets. Two-thirds of these planets are older than our Earth, closer to the galactic core, and so further along in their technological development than we are today. That gives (200/12 billion) * 200 billion * 2/3 = 2,200 radio fossils patiently waiting to be discovered in the night sky. I've described this further in a short 2002 Journal of Evolution and Technology article on the Fermi Paradox, so I refer you to that if you'd like to further explore these interesting ideas. 
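The back-of-the-envelope estimate above can be reproduced step by step from the stated assumptions (a sketch of Smart's figures, not an independent model):

```python
# Radio-fossil head count, using exactly the assumptions stated in the text
transmission_years = 200       # average detectable-leakage window (1920s to a ~2150 transition)
stellar_lifetime = 12e9        # years a sunlike star shines
sunlike_stars = 400e9          # Shostak's estimate for our galaxy
earthlike_fraction = 0.5       # half assumed to harbor Earth-like planets
older_fraction = 2 / 3         # fraction further along than Earth is today

fossils = (transmission_years / stellar_lifetime) \
          * (sunlike_stars * earthlike_fraction) * older_fraction
print(round(fossils))  # 2222, which the text rounds to "2,200"
```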

Once our antennas are powerful enough to detect unintentional EM emissions from the closest few million stars, something that Frank Drake tells me is almost possible now with the closest of our neighboring stars, we'll begin to discover these unmistakable signatures of nonrandom intelligence. We will also notice that every year, a small fraction (roughly 1/200th) of these radio fossils suddenly stop sending signals. Like us, these will be civilizations whose science invariably discovers that the developmental future of universal intelligence is not outer space, but inner space.

That's the destiny of species.


[ Thanks to Elen Burton, Jose Cordiero, Ryan Elisei, Michael Hartl, Neil Jacobstein, John Peterson, Chris Phoenix, Wayne Radinsky, and Wendy Schultz for valuable comments and ideas. ]

John Smart is a developmental systems theorist who studies science and technological culture with an emphasis on accelerating change, computational autonomy and a topic known in futurist circles as the technological singularity. He is chairman of the nonprofit Institute for Accelerating Change (IAC), whose websites aim to help individuals better understand and manage the physical and computational phenomenon of accelerating change. John lives in Los Angeles, CA, and may be reached at feedback{at}

If you have an interest in a multidisciplinary understanding of accelerating change, you are invited to join IAC's free quarterly newsletter, Accelerating Times.

UPDATE: John Smart has published a nicely organized and illustrated version of this interview on the Institute for Accelerating Change website.

Also see Speaking of the Future with...

Rand Simberg | Nina Paley | Phil Bowermaster | Michael Anissimov | Ramona | Robert Zubrin | Alex Lightman | Aubrey de Grey

Posted by Phil at 04:45 PM | Comments (1) | TrackBack

December 03, 2003

ITF #100

In the Future...

...we'll wonder why a group who was so prepared seemed to have so much trouble getting things done.

Posted by Phil at 08:45 AM | Comments (12) | TrackBack

A Golden Opportunity

Every Wednesday, I publish a new chapter of Stillness, which has been hailed as "the finest novel of its kind ever to appear in a blog named The Speculist." If you've missed the first 15 chapters, fear not. They are listed in a handy hyperlinked Table of Contents, below.

You know, a fun way to approach it might be to start at Chapter 1 and read through Chapter 16.

Just a thought.

Oops, sorry. That Table of Contents has moved.

Posted by Phil at 08:40 AM | Comments (0) | TrackBack

How Long Have I Got, Doc?

Discover Magazine has an article about the limits of life extension:

A century ago, most Americans lived to be about 50. Today people over 100 make up the fastest-growing segment of the population. As some researchers bet that children born today will live to be 150, others say there is no upward limit on longevity.

(Full article requires paid subscription.)

Two doctors are offering what they expect to be a $500 million reward in the year 2150 to the first person who can prove that he or she has lived 150 years.

I predict that there will be a lot of folks eligible to collect that prize. I also predict that $500 million won't buy you that much in 2150.
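That second prediction is easy to put numbers on under an assumed inflation rate; the 3 percent figure below is my illustrative assumption, not anything from the article:

```python
# Present value of a $500 million prize paid in 2150, discounted back to 2003
# at an assumed steady 3% annual inflation rate (purely illustrative).
prize = 500e6
years = 2150 - 2003          # 147 years out
inflation = 0.03

value_in_2003_dollars = prize / (1 + inflation) ** years
print(f"${value_in_2003_dollars:,.0f}")  # roughly $6-7 million in today's money
```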


Posted by Phil at 06:15 AM | Comments (2) | TrackBack

Biggest Prime

The biggest prime number yet has been found. It's 6,320,430 digits long.

One thing I'm unclear on from reading the article: is the biggest one found the next prime up from the previous record? Was this guy just looking for the next one, or for the biggest one he could find?
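For reference, the number in question was the Mersenne prime 2^20996011 - 1, found through the distributed GIMPS search. Its digit count follows from a logarithm; there's no need to write the number out:

```python
import math

# The late-2003 record prime was the Mersenne number 2**20996011 - 1.
# Its decimal digit count is floor(p * log10(2)) + 1; subtracting 1 from
# 2**p never changes the count, since 2**p is not a power of 10.
p = 20996011
digits = math.floor(p * math.log10(2)) + 1
print(digits)  # 6320430, matching the article's 6,320,430
```

As to the question above: neither, exactly. GIMPS tests only numbers of the form 2^p - 1, because the Lucas-Lehmer test makes those uniquely cheap to check, so each new find is the largest prime known, with no guarantee it is the next prime (or even the next Mersenne prime) after the previous record.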

Posted by Phil at 06:02 AM | Comments (4) | TrackBack

Carnival of the Vanities #63 up, over at Begging to Differ. Check it out!

Posted by Phil at 05:54 AM | Comments (0) | TrackBack

About Those Seven Questions

Seven Questions About the Future with John Smart will be published tomorrow along with our Speaking of the Future interview with him.

I know. I can't wait, either!

Posted by Phil at 05:52 AM | Comments (0) | TrackBack

December 02, 2003

It Takes a Little Effort

In the ongoing contest to provide the most self-referentially paradoxical comment ever, I'm pleased to announce that Emmanuel has taken a strong lead. (Scroll all the way down to see it.)

Meanwhile, Prof. Hall remains the maestro of consistent self-referentiality.

Posted by Phil at 11:46 AM | Comments (3) | TrackBack

ITF #99

In the Future...

...the Easy-Bake Home Cloning Kit will outsell this item.

Futurist: El Jefe Grande

Posted by Phil at 11:36 AM | Comments (0) | TrackBack

December 01, 2003

For the Transhumanist on Your Shopping List

In the spirit of the season, and in keeping with our accelerating change theme for the week, here's the complete list of books referenced in this week's upcoming interviews with John Smart.

Happy reading.

UPDATE: By popular demand (see comments) I have added a selection for that hardest-to-shop-for of all demographics: the metrosexual transhumanist. Please note that one of the titles shown above was not specifically mentioned in my interview with John Smart.

Posted by Phil at 03:29 PM | Comments (4) | TrackBack

This Week 12/01/03

This week is going to be huge.

Foremost, John Smart, Director of the Institute for Accelerating Change, will be our special guest, answering the Seven Questions and participating in a fascinating interview for this week's Speaking of the Future. Hold on to your hats, people.

We've already had the big news today about developments in the Great Assembler Debate.

Plus, In the Future... turns 100 this week. Let's look at the whole week as it will unfold:

The Assembler debate heats up. Plus, holiday book-shopping ideas for the transhumanist on your Christmas list.

John Smart answers Seven Questions About the Future.

Stillness, Chapter 16. Emmett gets a little help in pondering his choice.

Speaking of the Future with John Smart.

Speculist University. We have some catching up today after the holiday break.

Future Round-up. A review of the first 100 In the Future... predictions.

And throughout the week we'll be blogging developments in nanotechnology, artificial intelligence, space exploration, and other future-impacting areas.

Posted by Phil at 02:08 PM | Comments (0) | TrackBack

Assembler Debate Heats Up

Eric Drexler and Richard Smalley are finally having it out one-on-one over the issue of nano assemblers in the current issue of Chemical and Engineering News. Since the nanotechnology bill has passed both houses of Congress and is awaiting the President's signature, and in light of the fact that there's a certain amount of confusion as to how far the bill is going to take us in the direction of molecular manufacturing, this debate couldn't be more timely.

The difference of opinion between these two nanotechnology luminaries originated in 2001 when Smalley blasted (via misrepresentation) some basic molecular manufacturing concepts in an article in Scientific American. Drexler responded by issuing an open letter to Smalley challenging him to debate these issues. The crux of their original disagreement was whether nano-assemblers as envisioned by Drexler would require the use of tiny pincers to move atoms around. As explained by Smalley, any such mechanical pincers or fingers that could be built would be either too "fat" or too "sticky" ever to work. Drexler countered that these "Smalley Fingers" (as he dubbed them) are a straw man that have nothing to do with the basic concepts of molecular manufacturing that he has developed:

I have a twenty year history of technical publications in this area and consistently describe systems quite unlike the straw man you attack. My proposal is, and always has been, to guide the chemical synthesis of complex structures by mechanically positioning reactive molecules, not by manipulating individual atoms. This proposal has been defended successfully again and again, in journal articles, in my MIT doctoral thesis, and before scientific audiences around the world. It rests on well-established physical principles.

In the new debate, Smalley immediately concedes this point and even takes to calling these things "Smalley Fingers" himself.

In the infinity of all conceivable ideas for self-assemblers, we agree that at least this computer-controlled "Smalley finger" type of assembler tool will never work.

Drexler goes on to explain the kind of assembler environment he has in mind:

These nanofactories contain no enzymes, no living cells, no swarms of roaming, replicating nanobots. Instead, they use computers for digitally precise control, conveyors for parts transport, and positioning devices of assorted sizes to assemble small parts into larger parts, building macroscopic products. The smallest devices position molecular parts to assemble structures through mechanosynthesis--'machine-phase' chemistry.

Direct positional control of reactants is both achievable and revolutionary; talk of additional, impossible control has been a distraction.

Smalley, however, just can't buy in:

I see you have now walked out of the room where I had led you to talk about real chemistry, and you are now back in your mechanical world. I am sorry we have ended up like this. For a moment I thought we were making progress.

You still do not appear to understand the impact of my short piece in Scientific American. Much like you can't make a boy and a girl fall in love with each other simply by pushing them together, you cannot make precise chemistry occur as desired between two molecular objects with simple mechanical motion along a few degrees of freedom in the assembler-fixed frame of reference. Chemistry, like love, is more subtle than that. You need to guide the reactants down a particular reaction coordinate, and this coordinate treads through a many-dimensional hyperspace.

Smalley seems inordinately fond of his "love" analogy for chemistry. It's hard to imagine a scientist taking a more obstructionist position toward his own field. You can't understand this. It's like boy meets girl. It's the birds and the bees. It's magic.

Ralph Merkle counters in Foresight's official press release on the debate:

Ab initio quantum chemistry calculations don't involve love, or mushing, or pretending. For example, a carbon-deposition reaction which a colleague and I studied using standard quantum chemistry methods moves a carbene tool along a barrier-free path to insert a reactive carbon atom into a dimer on a diamond (100) surface. The tool is then twisted 90 degrees, breaking an internal pi bond, and pulled away to break the remaining sigma bond, leaving a single carbon atom bonded to the dimer on the surface." Merkle adds, "Further computational chemistry research into fundamental mechanosynthetic reactions should be an integral component of any national nanotechnology program. Smalley's metaphors merely cloud the issues.

Smalley concludes with this disturbing story:

A few weeks ago I gave a talk on nanotechnology and energy titled "Be a Scientist, Save the World" to about 700 middle and high school students in the Spring Branch ISD, a large public school system here in the Houston area. Leading up to my visit, the students were asked to write an essay on "Why I Am a Nanogeek." Hundreds responded, and I had the privilege of reading the top 30 essays, picking my favorite five. Of the essays I read, nearly half assumed that self-replicating nanobots were possible, and most were deeply worried about what would happen in their future as these nanobots spread around the world. I did what I could to allay their fears, but there is no question that many of these youngsters have been told a bedtime story that is deeply troubling.

You and people around you have scared our children. I don't expect you to stop, but I hope others in the chemical community will join with me in turning on the light, and showing our children that, while our future in the real world will be challenging and there are real risks, there will be no such monster as the self-replicating mechanical nanobot of your dreams.

Presumably, Smalley will be in Alaska next week telling children that they don't need to worry about being eaten by polar bears because there are no polar bears. The question of whether polar bears actually exist is secondary; the main point is that children shouldn't be frightened. It apparently hasn't occurred to Smalley that this "frightening bed-time story" is as gross a distortion of any message ever to come out of Foresight as were his fat and sticky fingers.

But then, that wouldn't occur to him, would it?

Howard Lovy provides an excellent analysis of this debate, which he concludes with a cautionary note about why a correct assessment of feasibility can be so important:

What if the scientists who believed atomic power was physically impossible had the ear of the U.S. government during World War II? If you think that's a ridiculous thought, go look up Leo Szilard. He had some crazy idea about creating a nuclear chain reaction so powerful that it could change the balance of power in the world in an instant. He said such harnessing of atomic energy was so dangerous that society should figure out a way to control it. The leading physicists of his day, including Enrico Fermi, said such a thing was not possible. It took a letter from former atom-splitting skeptic Albert Einstein, who by 1939 had come around to the possibilities of uranium, to finally convince President Roosevelt to toss some money Szilard's way for a feasibility study on atomic chain reactions.

The rest, as they say, is history.

Perhaps as the debate continues here at home, we should keep a watchful eye on other powers who are already actively working to build the first assembler—whether we decide it's possible or not.

UPDATE: Ray Kurzweil provides a thorough analysis of the debate (including an excellent background on the field of nanotechnology) and Glenn Reynolds has a good round-up of links.

Posted by Phil at 07:18 AM | Comments (1) | TrackBack

About Stillness

YET ANOTHER UPDATE (I thought I would put one up top rather than at the bottom for once.) Stillness, and all of the content published on the Speculist, is now subject to a creative commons license. Feel free to copy and distribute Stillness, or write a new chapter, or whatever, all subject to the stipulations outlined on the deed.

Stillness is a novel, a work of speculative fiction. It attempts to be many things: a thriller, a love story, a metaphysical meditation, a farce, all set against the backdrop of what I believe is an original way of ending the world. We'll see.

If it's been done before, I'm sure some of you will let me know.

I started publishing Stillness the same week I launched The Speculist. I'm publishing the novel in serial form, running a new chapter every week. So far I've received some very encouraging feedback. Thank you to those who have sent e-mails or written comments, especially Virginia Warren, who has been helping me out with a few typographical irregularities, and Alex Alemi, who lets me know when a chapter hasn't been published on time.

My original plan was to publish 75% of the novel over the next few months. Then I would revise what I had published here and go out and land a lucrative contract for the entire book. I would leverage myself into old media (book publishing) via new media (the blog). But I've been thinking about it, and that idea stinks.

No, not because I have a problem "managing expectations." What does that term mean, anyway?

It stinks because it's only right that I share the entire novel with you, good readers. Many of you have been following faithfully along, and I think even more of you would give the book a try if you knew you were getting the whole thing. So it's decided. I'm publishing the whole novel online.

I was afraid before that I might cut into the book's commercial viability by publishing it here. But I see now that that's a mistake. If people enjoy reading the book online, the published version can serve as a "director's cut" that extends the story.

Plus, who knows? Maybe there will be sequels.

UPDATE: Rather than running a new one of these entries every week, I've decided to start using this one over and over. So feel free to comment, if you wish.

ANOTHER UPDATE: Per the comments, below, Stillness is going to be published under a Creative Commons License. More details to come.


by Philip Bowermaster

Part I

Chapter 1, in which Reuben sees lights.

Chapter 2, in which Sergei gives advice.

Chapter 3, in which Ksenia looks at cars.

Chapter 4, in which Reuben falls.

Chapter 5, in which Reuben contends.

Chapter 6, in which Reuben recovers.

Chapter 7, in which Sergei explains some things.

Chapter 8, in which Betty explains the rest.

Chapter 9, in which Father Alexy saves the day.

Chapter 10, in which the old man speaks.

Chapter 11, in which Reuben obliges.

Part II

Chapter 12, in which Emmett goes to work.

Chapter 13, in which Frank has some news.

Chapter 14, in which Peggy opens a box.

Chapter 15, in which Emmett becomes confused.

Chapter 16, in which Rick spells things out.

Chapter 17, in which two strangers arrive.

Part III

Chapter 18, in which Celia meets Corey.

Chapter 19, in which Grace wins a game.

Chapter 20, in which Celia remembers.

Chapter 21, in which Corey wishes.

Chapter 22, in which Todd hugs back.

Chapter 23, in which an argument is settled.

Chapter 24, in which Estelle calls for help.

Chapter 25, in which Grace gets an idea.

Chapter 26, in which Corey awakens.

Part IV

Chapter 27, in which Reuben goes forth.

Chapter 28, in which Reuben gets lost.

Chapter 29, in which Hamilton lends his coat.

Chapter 30, in which Reuben plays a new game.

Chapter 31, in which Markku takes a turn.

Chapter 32, in which Sergei has some questions.

Chapter 33, in which Reuben reconsiders his past.

Chapter 34, in which Iskandar deals some cards.

Chapter 35, in which magic is discussed.

Chapter 36, in which Daphne sets terms.

Chapter 37, in which Altheus issues a warning.

Posted by Phil at 06:40 AM | Comments (8) | TrackBack