June 14, 2004

Tough Questions

Steven Den Beste has published a couple of thought-provoking essays recently on the topics of consciousness and identity. He raises a number of stumpers, of which I found three particularly interesting:

At what point is it accurate to say that a victim of Alzheimer's disease has died?

Is there really such a thing as identity or is it an illusion?

What are the ethics of owning an intelligent machine?

The first question has to do with the death of the "self" which a degenerative disease slowly brings about. Ultimately the damage to the brain is so profound that the person we knew is lost. Den Beste points out that this is what occurred with President Reagan: although his heart stopped beating last week, the man who led our country was gone long before that.

Should there be a definition of "death" that includes the loss of identity, the loss of self that occurs with a degenerative disease? I think not.

In other eras, it might have made sense to come up with such a definition. (I say might.) But today? I don't think so. Things are changing too rapidly. Not only are we learning more and more about what causes Alzheimer's disease — and, by extension, what might be done to prevent it — but real strides are being made towards developing effective treatments for the disease. And even bigger breakthroughs are on the horizon. We may not be that far from finding a way to reverse "irreversible" brain damage. So the great danger of a degenerative definition of death is this: we might write someone off today as lost forever only to find in a couple of years that the person we knew can be restored to us, after all.

If President Reagan had been 10 years younger, and his fight with Alzheimer's were starting today rather than a decade ago...who knows?

[Steven's question also brings to mind the ongoing cryonics debate, about which I would have written something had Rand Simberg not beaten me to it. Rand relates the unsettling story of a man with an inoperable brain tumor who wanted to be put in cryonic suspension. Ironically, a court turned down his request because euthanasia is illegal. But the man wasn't trying to kill himself; he was trying to save himself: that is, he was trying to avoid having a tumor grind into mush the brain tissue that defines who he is.]

On the second question, whether there really is a "there" there where individual human identity is concerned, Den Beste writes as follows:

Do I exist? In one sense, of course I do. Cogito Ergo Sum. The fact that I'm able to ask that question proves that the answer is "yes".

But the answer to the question depends on how the question is stressed. Cogito Ergo Sum says "yes" to "Do I *exist*?" It doesn't help us with the question "Do *I* exist?"

There's something that exists here. I accept that the universe is real, and that my body is part of it, and that the brain contained within that body is thinking these thoughts and controlling the fingers which type the words you are reading.

The real question is whether that organism's presumption of having a unique and characteristic identity is a fallacy, perhaps even a conceit, one based on incorrect assumptions or a faulty supposition that the subjective experience of life is a true reflection of the nature of life.

Cogito ergo sum does not answer these kinds of questions. Yes, I do think and I have a subjective experience of thinking. That proves that this organism's brain exists and operates in certain ways. But existence and identity are not the same. I exist, but I can't be sure that *I* exist.

Not that I necessarily have a better one to offer, but I think Den Beste's definition of identity is flawed. He hinges the notion of identity on whether it is unique and characteristic. Let's start with the easier one — characteristic. Although I'm convinced that I exist, I find that the person who I believe to be real is capable of tremendous inconsistency. I have a lot more in common with friends and acquaintances with whom I have contact in the present than I do with myself in the past. Phil-of-the-past and I are, in a very real sense, two different people. What we have in common is some memories (although, lucky me, I have many more than he does) and a subjective experience of things happening in sequence around a single first-person point of reference. Absent discussion of a metaphysical soul — which Steven rejects — that subjective experience of one thing after another from that particular point of view is me.

Unique? Why would it have to be unique? If I'm just a clone who has had Phil Bowermaster's memories grafted onto my brain and I really only just woke up this morning — well, first off, what a waste of perfectly good cloning techniques. And what did they do with the real me? But anyway, I'm still me. That is, I'm still this sequence of first-person singular experiences. I may have never really had a bunch of them, but that's true even if I'm not a clone (or a computer simulation or a brain transplant or what have you). Memory is notoriously unreliable. The point is, here I am. It doesn't matter if I am a characteristic or unique entity. I think therefore, I —

Hold it. There's an easier way of putting it.

I am, therefore I am.

Finally, on the issue of owning an intelligent being — yes, it is definitely immoral to think in those terms. I don't think there will be an effective way to program a truly intelligent computer to want to be owned. Nor do I think such programming would make owning one okay.

I doubt it will be much of an issue, however, because I don't think that homo sapiens will be calling the shots for very long after computers reach human level intelligence. Those of us who accept the notion that a technological or developmental singularity is in the offing tend to expect that any ethical issues surrounding how we're supposed to treat computers will be solved for us...by the computers. Steven uses the analogy of dogs:

Dogs represent something of a fringe problem here, so let me deal with that. We generally accept that it's OK to own dogs, and there's no doubt whatever that they like being owned by us. The question is whether dogs actually understand the relationship the same way we do, and view themselves as property and us as owners.

It isn't clear that it even means anything to ask such questions. Even if it does, it is by no means clear that dogs are sophisticated enough to understand concepts like "property" and "ownership". But to the extent that we are able to consider the way dogs think about the relationship, the most likely answer is that they do not see it in those terms.

The symbiosis between dogs and humans appears to have come about because each species came close to fitting into a role the other already knew about. The relationship was possible because those mental roles interlocked reasonably nicely. To humans, dogs come close to fitting into the role of "child". To dogs, human masters seemed to be the "alpha" members of the pack. (It's noteworthy that domestic dogs are descended from wild canines with strong pack behaviors, but not from canines like foxes which do not run in packs.) That means the whole partnership has from the first been based on a really big misunderstanding.

A misunderstanding is one way of putting it. Another way would be to say that both humans and dogs have adapted their capability for one kind of relationship into a completely different cross-species relationship with benefits to both groups. This ability to adapt and redefine relationships will probably play a big role in what happens between us and our electronic progeny.

In The Age of Spiritual Machines, Ray Kurzweil draws out a series of scenarios that show how this development might take place. The woman who leads us through the next few generations of machine evolution starts out describing "her" AI as a very useful piece of equipment: the ultimate PDA. After a few years, the AI becomes much more than that: her right-hand man, her faithful confidant. Ultimately, the AI becomes her life partner, helping her to augment and expand what she is.

Artificial intelligence may evolve from tools to friends in a very short period of time. From there, we might evolve with them, as Kurzweil suggested. But if they blast past us as quickly as some predict that they will, ultimately it will be the computers deciding whether or not it's ethical to own humans.

In the end, they might keep us around as their beloved pets. In which case we can only hope that they treat us as well as we treat our pets.

Posted by Phil at June 14, 2004 08:11 AM

1. At what point is it accurate to say that a victim of Alzheimer's disease has died?

If nothing can be done to treat the disease, there is wisdom in allowing things to progress in "God's good time" as was suggested at Reagan's burial.

I agree with Phil - we just don't know what advances the future holds.

2. Is there really such a thing as identity or is it an illusion?

I'm going with the Popeye formulation:

"I yam what I yam."

Seriously, if I don't exist, how can my mistaken impression that I exist be explained? If something exists enough to be mistaken about that, why shouldn't it be recognized as an existing entity with subjective consciousness?

To put it another way, if individuality is a conceit, who exactly is being hurt by that conceit? Nobody. But failing to recognize the worth of others has brought untold suffering to the world.

Which leads us to...

3. What are the ethics of owning an intelligent machine?

This gets murkier because unlike people, computers are made (at the present time) for the purpose of serving people. If we err on the side of recognizing subjective consciousness in a particularly clever machine where none exists, that "conceit" may have a cost: our own ethics may rob us of a valuable tool.

I realize that I just contradicted myself. If Cogito Ergo Sum, did I add to or take away from my existence?


Posted by: Stephen Gordon at June 14, 2004 02:10 PM

I went to Mary's dance recital Saturday night. It was a four hour extravaganza of ballet, jazz, tap, Irish, and lyrical dancing. These were youngsters aged 3-18. Except for the little tykes, who drew giggles, the other dancers were amazing. It was one of those moments when one appreciates the mystery of being human. We are more than the sum (pun intended) of our intelligence + consciousness. Even on the other side of the singularity, the AI will not have those God-imbued attributes.

Posted by: Kathy at June 14, 2004 09:01 PM

Even on the other side of the singularity, the AI will not have those God-imbued attributes.

That's interesting. And when exactly did the Almighty confide in you as to what attributes he will and will not be imbuing AIs?


Posted by: Phil at June 14, 2004 09:52 PM


Should that have read: "...will and will not be imbuing AIs with?" Now you know why I avoid complex sentence structure.

Posted by: Phil at June 14, 2004 09:57 PM

Actually, pedants would say that it should be "what attributes with which he will or will not be imbuing AIs..."

Posted by: Rand Simberg at June 15, 2004 10:06 AM

Actually, pedants would say...

Well, then I guess it's a good thing there aren't any of them around, eh Rand?

Posted by: Phil at June 15, 2004 10:16 AM

Re: #2.

Not to get all postmodernisty, but the conception of an autonomous, 'unique' 'individual' - and thus Steven's question - is 'totally' rooted in the Platonic dualisms (mind/body, self/other, thought/action) that have undergirded and problematized Western thought for millennia. While one could provide counter-examples from 'non-Western' thought -

I prefer to go to the French phenomenologist Maurice Merleau-Ponty, who totally rocked my world in college. By rooting his philosophy in the phenomena of the human experience, his view of experience is fundamentally intersubjective - the world, and the people in it, cause me to look, move, and speak at least as much as I consciously 'will' myself to do so, and the desires that my glance, my gestures, and my speech express are produced in me by the world as much as I create them myself. When we talk, I don't think in sentences about what I am going to say and then say it; I merely have 'something' that I want to say, and the expression of my desire is practically pulled out of me.

Which is not at all to say that the only alternative to a unique, autonomous conception of the self is a self that is completely determined and dependent on the world around it - another dualism! Merleau-Ponty attempts to transcend this and, in my view, succeeds. Our 'flesh' (his word for 'us', to avoid the mind/body distinction) is a unified apparatus in continual interaction with a world that shapes us as it is shaped by us, and in the process our flesh develops a certain style - or certain styles in certain contexts, more accurately - of looking, moving, and talking, characteristic patterns of interaction as unique yet ultimately indefinable as the distinctive brushstroke of an artist or an author's style of writing.

Kind of hard to summarize, but you get the idea, maybe. In essence, what 'I' am, my 'identity', is not the isolated, autonomous 'self' nor the golem filled with the content of the world, but instead the intersection or interaction - for Merleau-Ponty the *intertwining*, the "chiasm" - between our flesh and the 'flesh' that is the world.

This is not really an answer to Steven's question entirely... but, still, rad!!

Posted by: John Atkinson at June 15, 2004 11:06 AM

The identity question usually dances around the illusion of individuality. A useful exercise for qualitatively defining "you" as apart from "us" is to make a list of things that are uniquely "yours". One quickly finds that the body isn't on the list, since it was given by our parents and shaped somewhat by our environment. Similarly, most of the thoughts we think are hardly original, nor is the language we frame our thoughts in. Most of the stuff we think about came from outside the bag of molecules that describes our bodies. After subtracting out most of our physical and mental aspects, what's left? Not much. Therefore, we are predominantly a manifestation of distributed phenomena - genes and thoughts that resonate in our particular lump of flesh, and which we share back into the sea of life that surrounds us. So, when one of us suffers from Alzheimer's and the memories stored in those molecules degrade, do the echoes of those memories continue to resonate through our words and actions enough that one might consider them still active, still part of us?

Posted by: Gregory Bloom at June 16, 2004 04:58 PM

#2: I think Gregory has it. The choices you actuate in this physical Universe around us resonate with all the other identities around us. We can't exist as we do right now without the static values of society as latch points, so if you attempt to define yourself outside of that construct then you're never going to capture the entire essence of what it is to have an identity. If every action you take affects the others around you then it is those perturbations you've caused in the societal network which define what it is to be you.

#3. Given that a singularity-type event will occur if the progress of AI and computers continues on its current path, you are quite right in saying that their intelligence will be to ours as ours is to a dog's. This tipping point seems inescapable, something which many people will probably fear. Nonetheless, if it occurs our only hope for survival is having the right person teaching an open set of values to that very first AI. If we instill in our future collective creation a sense of morals that can appreciate all the positive things that humanity has done as well as the negative things, then that AI will teach its 'children' morals which would hopefully include an intrinsic value for human existence.

We just gotta make sure that first one is brought up right. ;)

Posted by: ChefQuix at June 18, 2004 12:22 AM
