August 06, 2004

The Council, #4

Patricia Bedford gasped for air. Her skin tingled. As the room re-solidified around her, someone grasped her shoulder. She looked up at the intruder.

“Here, you must drink this,” he said, forcing a small plastic pouch into her hand before turning to Randall Drayton and guiding him to a chair.

“I didn’t believe it,” Drayton said. “I didn’t think it was really possible!” He gulped the contents of his pouch. “Patricia, drink!”

Patricia sputtered in protest, but the man, whose name she didn’t even know, was guiding her hand to her mouth.

“It’s an electrolyte solution,” he explained. “It will help you adjust to the effects of teleporting.”

“Teleporting?” What was he talking about? Patricia looked around. They were still standing in Dr. Drayton’s kitchen. There was the dinette with its faux granite surface, the rustic, white spindle-backed chairs, and the window where the bird had been sitting on the sill until the intruder startled it…

The window… It was hard to see through its thick, dark glass. Outside, a distorted, barren landscape stretched like a forgotten shoreline meeting a black sea of sky. A hand’s breadth above the horizon hung a blue-green, cloud-laced orb, huge and impossible to fathom, its lower hemisphere submerged in the bottomless darkness.

“Why don’t you sit down, Patricia?” the stranger said. “You’ll be more comfortable.”

Numb, Patricia reached for a chair that looked just like the one she’d used a few moments before. That chair was now almost 385,000 kilometers away. On Earth. She drank from the pouch. “Why? Why the moon?” she asked her captor when her tongue was clear of the salty, metallic-tasting gel. “Why recreate Dr. Drayton’s house?”

“I can’t tell you why we’re keeping you on the moon,” the man answered. “That wasn’t my call. As for the house, why not?” He spread his hands. “If I must keep you here, why not make the surroundings comfortable?” He looked toward the window. “Sorry about the view. Even we lack the resources to make the moon look like Dr. Drayton’s garden.”

“I don’t remember you,” Drayton said abruptly, pointing a long finger.

The man smiled. “We’ve never met. I know you only by your considerable reputation. I wasn’t even born when you went before the Council.”

Drayton squinted and pursed his lips. “Born? You were born into the Council?”

“Why do you find that odd?”

“I thought… I was under the misapprehension that…” Drayton’s voice trailed off.

The man pulled up a chair and sat down next to Drayton. “That to be a member of the Council, one must have participated in and survived the Regression of ’45?” He looked at the ceiling and laughed. “Even we have to reproduce. Did you think that we’re immortal?”

Drayton didn’t answer.

“Do you think that the Council members are immortal?” he asked, more pointedly. “Not quite. We still have a few bugs to work out. Our gene pool could use a little refining, too.” He turned and smiled engagingly at Patricia. “Right, Dr. Bedford?”

Patricia averted her eyes. His were too intense. His mouth was too perfect and his cheekbones were too high and she was still getting her bearings.

“My name is Asimov Liu,” he said. “Does the name Asimov mean anything to you?”

Drayton chuckled dryly and clasped his hands around his knees. “A little quaint, don’t you think?”

“I think so too, but I had no choice in the matter.”

“What are you?” Drayton asked.

“What do you think I am?”

“You are a genetically engineered, enhanced human,” Drayton said.

“I am,” Asimov said, leaning in toward Drayton. “You can spout those words, Dr. Drayton, but you have no idea what they really mean.”

Patricia wondered if she was imagining the bitterness in Asimov’s tone.

“I am a member of the Council because that is my design.” He shrugged. “I can take no credit for it. My specialty is, of course,” and he gave them a stiff smile, “robotics.”

Jim walked toward Asimov, although no one had commanded him to move.

“See? Jim is drawn to me.” Asimov tipped his head to Jim and the robot reciprocated. “Did you know that robots have body language?” Asimov reached for Jim’s data port. When he faced Drayton and Patricia again, his eyes were dark and stern. “I have brought you here for safekeeping until your hearing, Dr. Bedford. But my real concern is your robot. It is urgent that I bring Colter into custody. I must leave you in Jim’s care,” Asimov said.

Jim stepped away from Asimov and put his hands gently on Dr. Drayton’s shoulders.

"But, first, Dr. Bedford, I require your digipass,” Asimov said.

Patricia hesitated.

“Dr. Bedford, I know this is all very disconcerting. Custody implies safekeeping, not merely arrest and confinement.”

“Are you asking me to trust you?” Patricia snapped, annoyed with herself as she handed the digipass to Asimov.

He brushed his fingertips over the digipass and then returned it to Patricia with a dazzling smile. “Trust me? Of course not.” His body began to slip into the fabric of the room. “Dr. Drayton, talk to her. She should know better than to trust anyone on the Council.” And Asimov disappeared.

Posted by Kathy at 09:20 AM | Comments (0) | TrackBack

July 30, 2004

The Council, Installment #3

The bird on Dr. Randall Drayton’s windowsill chirped in alarm and flapped out the open window. Patricia turned to see what had startled it.

The room seemed out of focus. She rubbed her eyes.

There was a man’s head hanging in the middle of the air in the kitchen. Patricia opened her mouth to scream but no sound came out. A body followed, as if the man were stepping through a slit in the fabric of the room. Dressed in a long gray tunic and trousers, he was so tall that Patricia had to crane her neck to look up at him. His long, black hair was brushed off his high, golden forehead and tied back. Standing very still, he surveyed the room with large, dark, canted eyes.

Patricia struggled to make sense of him. The fact that he was Oriental tempted her to believe that he was real. She wouldn’t have been inventive enough to conjure him.

Randall gripped Patricia’s hands and stared at the intruder, though his gaze was fixed on the man’s chest and not his face.

Patricia saw it, then. The 3 Score and 10 logo was emblazoned on the front of the gray tunic.

“Dr. Drayton,” the man said in a voice that smoothed the ragged places in Patricia’s nerves, “for Dr. Bedford’s sake, please do not resist.”

Randall released his grip on Patricia’s hands. “You’re taking me?”

The man’s eyes narrowed, but his voice remained gentle. “Under the circumstances, you both require our protection until Dr. Bedford’s hearing.” He turned to Jim. “We’ll need to take your house robot, too.”

Jim blinked and then moved closer to Patricia and Randall.

“All of you, step this way,” the man said.

And Patricia felt a brief, weightless euphoria as the room disappeared.

*** *** ***

At the street level, humans clogged the walkways despite the efforts of civil officers and robots to direct traffic. They jostled and shoved Colter but he managed to make forward progress.

He felt a tug on his shirt and turned to see a female robot.

“Where are you going?” she asked.

Colter tried to answer, but his programming blocked a response to the question. He stared into her large, blue eyes.

“You can’t pass through the Gauntlet,” she said, blocking his path.

Colter had no response. The word was not in his data banks.

“I know who you are,” the robot whispered. “They’re looking for you. I can help.”

“I must keep moving.”

“Yes, of course you must. But you’ll be intercepted.”

“I must--”

“You must beam with me.”

Colter felt her hands on his data port. He blinked as his buffers automatically prepared for the possibility of a power surge, and braced himself for a moment of disorientation. The ragged-scalped, hazel-eyed male robot staring back at him looked vaguely familiar, but he turned and slipped into the crowd before Colter could name him.

The urge to keep moving was strong, but the melee of pedestrians thickened, impeding his progress as he neared the train platform. His body felt light and quick, but his processors seemed sluggish, as if they were laboring through unfamiliar layers of programming. It wasn’t until he reached the train platform that he realized he was using some of his higher order functions.

It wasn’t until he submitted to the first security scan that he remembered what the Gauntlet was.

And it wasn’t until he cleared the scan and boarded the train that he realized his body proportions had changed.

Colter didn’t have time to examine himself; the passengers packed into the train, sweeping him along in a press of human bodies. The seats were all taken so Colter grabbed a hand bar. His unfamiliar body was squeezed between two men who locked eyes with each other and then gawked at Colter’s chest, grinning at him. He felt a weak response in some of his neural pathways, but the signal was confusing and illogical. He looked down.

He was wearing a blue blouse of soft fabric that crisscrossed between ... breasts.

Colter struggled to find a thread of logic and continuity. He didn't remember being female.

The train was moving slowly, well below its normal speed, and the passengers shouted their frustration. After a few minutes, it lurched to a halt.

“What the --” bellowed one of the men standing next to Colter. “This is no damn place to stop!”

The passengers gasped in unison as the interior lights flickered and died. Colter’s vision tracked a boarding troupe of officers and robots. In the murky darkness, the entourage turned their intimidating search beams on the passengers, scanning human retinas and robotic signatures.

Colter stood very still as a robot confronted him with the beam. A sluggish thought made its way to his consciousness. It tripped a flag, and like a train switching tracks, a safer thought popped up in its place. “I am late for an important function,” he said to the naked-faced robot, shifting his weight so that his hip jutted out at an awkward angle.

“Inconsequential,” the androgynous robot replied, and then moved on.

The thought wormed its way up through the layers of Colter’s awareness again. “I must get through the Gauntlet,” he acknowledged. The command had no reference, but it was the number one priority at the moment. He queried his reference banks again. They were not in order. Especially disordered were the ones that identified him as Colter. “On the surface, I am Lyra,” he discovered.

Lyra. The female robot. He queried Lyra's function.

As if to answer, Lyra-on-the-surface smiled at the man next to her. The man stopped cursing and muttering, and straightened his clothes, smiling sheepishly back at her.

“I am Colter underneath,” Colter asserted. Deep inside, Colter observed Lyra and wondered if she was just as confused to be in his body.

*** *** ***

Lyra put a stocking cap over her head. It was Colter’s head, she reminded herself, and its scalp was hideous because Colter had ripped the hair off it so he would be less recognizable. Asimov had given Lyra the cap for this purpose. He’d thought of everything, as usual.

Lyra had to move against the flow of pedestrian traffic. It was slow going. Asimov had warned her about response lag due to the layered programming he’d given her, so she was relieved when, once she'd cleared the crowd, her large, boot-clad feet easily obeyed the commands for long strides. Perhaps Asimov, an enhanced human, could not imagine that a machine like Colter could be so marvelous.

She should not be running, now that she was away from the evacuation area. Running would draw attention to Colter’s body. She slowed to a power-walk. The urge to keep moving drove her hard. Coordinates trickled through her brain alerting her that she was nearing her destination, but they told her nothing about what to expect when she arrived.

Lyra appreciated the heavier and stronger male body. It was liberating to be free of the Lyra exterior. Even though Asimov had purchased her from the pawnshop where she’d been discarded by her former owner, and had refurbished her with new functions, some of the old programs haunted her like ghosts in her circuits.

Now those ghosts mingled with the Colter functions left in place to help her coordinate his body. Lyra remembered how Asimov had laughed as he was programming the dual-beam protocol for rescuing Colter. “We can’t have you strutting around and batting your eyelashes in Colter’s body. Perhaps this process will clear those old functions permanently. And you’ll have to be able to manage about thirty kilos more than you’re accustomed to, Lyra.”

Having bonded so closely with Asimov, Lyra queried the possibility that he would reject her in this new body. Perhaps she’d find herself abandoned in a pawnshop again. And as if Asimov had anticipated this query, the answer came to her.

Her primary purpose was to help Colter get through the Gauntlet to safety. Nothing else mattered.

Lyra considered this. Her loyalty to Asimov surged. Asimov had important work to do and he needed Lyra’s help. There were so few enhanced humans and sentient robots on their side. Asimov had tried to make her understand what was at stake. He’d shown her words and images of war between robot factions to establish dominance. Of contract terrorism used by enhanced humans to control and cull the normal population. “This is the world The Council is slowly and effectively building,” Asimov had told her. “It is a world where nothing will exist that does not serve The Council.”

Lyra could not understand it. But it satisfied her need for purpose. She was made to serve, and this work would serve more humans than the work she did before.

And even now, Asimov was risking his place on The Council, and therefore his life, to intercept Patricia Bedford before harm could come to her because of Colter’s actions.

“You help Colter, and I will help Patricia,” Asimov had told her.

Lyra had so many questions. Why was Colter so important? Why was Patricia in danger? Through the confusing layers of her Colter-Lyra consciousness, she deduced that Colter and Patricia must be valuable in Asimov’s work.

But the most perplexing question still looped around her logic circuits unanswered: did Asimov know that Colter was going to blow up his master’s apartment? How else would he have been prepared with the dual-beam program and the rescue plan?

A flag tripped Lyra’s thoughts. In a human, it would have seemed like a flicker of doubt, but in a robot, it was like a low-grade alarm.

If Asimov could be disloyal to the Council, then he could be disloyal to anyone or anything. If Asimov knew about Colter’s plan, he could have prevented Colter from committing an act that would put himself and Patricia in danger.

Whose side was Asimov on? The alarm persisted. What was Lyra’s purpose? To serve Asimov? Or to serve the humans?

As if doing so would clear the alarm, Lyra ran as fast as Colter’s legs could carry her toward the green space where the coordinates were guiding her.

When a tall, black-skinned woman materialized in front of her, Lyra couldn’t stop. Lyra slammed Colter’s body into the stranger’s and then the world disappeared.


Posted by Kathy at 10:30 AM | Comments (3) | TrackBack

July 21, 2004

The Council, Installment #2

Read Installment #1

The minutes dragged on while Patricia waited for Jim to bring back her digipass. She declined the second sandwich and nibbled her fruit salad instead as Randall continued his description of the Council’s interrogation process.

“They think as one,” Randall said. “Don’t let that spook you.”

“Think as one? I thought you said that they speak as one.” Irritation at Randall’s vagueness pricked at her barely contained composure. “You’ve seen my files, Randall. What work do you think is questionable?”

A small brown bird landed on the window ledge and cocked its head expectantly, drawing Randall Drayton’s attention. Instead of answering Patricia’s question, he swept some crumbs from the tablecloth into his hand and stepped to the window, unlatching the bottom half of the screen and dropping the crumbs on the sill. The bird hopped in and began to peck daintily.

“Randall?” Patricia’s foot began to tap impatiently under the table. “What do you mean, they think as one?” she pressed.

Randall turned and stared, his brow furrowed. “You look as if you’ve never seen a common house sparrow, Patricia,” he said. “You really should get out more.” He shuffled to the table and stared into space as he took another sip of tea.

“Dr. Drayton? Are you all right?”

Randall regarded her with a slack, unfocused expression.

Jim returned. Without taking his eyes off Randall, he handed Patricia her digipass. “It is undamaged,” he said. He slipped a pouch from his utility belt and turned his back to Patricia, blocking her view of Randall. A second later, she heard the faint hiss of a medijector.

“I should be going,” she whispered, wondering if she was safe here and beginning to doubt the credibility of anything Randall had said so far.

“But we haven’t even scratched the surface of what you need to know when you face the Council,” Randall protested, his eyes once again clear and focused.

Patricia pushed back her chair and stood up. Jim turned slowly in her direction. His eyes remained fixed in their sockets as his head moved, as if he were a camera panning the view. His mouth formed a bland robot smile that failed to reassure Patricia. Jim cocked his head like the bird that was still on the windowsill. “You should not go home, Dr. Bedford.”

“Why shouldn’t Dr. Bedford go home, Jim?” Randall asked in an unnaturally calm tone.

“Your orders, Sir, were to monitor newscasts today and alert you only if something unusual warrants your attention,” Jim said.

“Yes, Jim, those were my orders.”

“Something has happened that warrants your attention.” Jim still hadn’t taken his eyes off Patricia.

“What has happened, Jim?”

“There’s been an explosion in the vicinity of Dr. Bedford’s residence.”

“What do you mean, ‘in the vicinity’?” Randall asked.

And a word exploded in Patricia’s brain. “Colter!”

* * *

After the explosion, alarms and loud human voices jarred Colter’s sensors, though muffled by the walls of the utility closet where he waited. He calibrated the sensory input with the new options he’d programmed in advance.

Mixed synthetic and organic compounds in the air. Smoke. Ash.

Avoid detection.

Damage confined to firewalls of Patricia’s apartment.

Avoid detection.

No humans harmed.

They will come. Avoid detection.

Remote laboratory camera and cable intact.

Avoid detection.

He switched on the tiny camera he’d planted inside Patricia’s apartment, set himself for timed deactivation so no one could trace his bioelectronic signal, and fell into oblivion.

* * *


Shakti Nmumbu slipped her six-foot-eight frame through a vertical slit in the molecular teleport membrane as casually as parting a curtain, and entered Patricia’s apartment.


Oblivious to the chaos of panicking humans running out of the complex, Shakti searched methodically through the burning rubble. Perspiration shimmered on her blue-black skin briefly, but sensors in her hair follicles reacted to the heat in the apartment and began to cool her before she noticed the discomfort. As she worked, touching, sniffing and tasting the charred objects, she sent a constant stream of data to the others on the Council.

Shakti examined a melted patch of synthetic hair and a scrap of bioneural insulation, allowing herself a momentary pang of disappointment. This evidence suggested that Dr. Bedford’s house robot had been a casualty of the explosion. But to be sure, she’d have to send a team to sift through every molecule in the place.

“I can’t decide if I admire you or feel sorry for you, Dr. Bedford,” Shakti said aloud, placing samples of debris in her pouch. This explosion was baffling. It bore none of the marks of The Council or its contract terrorists. “It’s just too damned convenient that this incident happened while you were visiting Dr. Drayton.”

Shakti hadn’t yet determined the explosive agent, though it seemed to have been perfectly measured and set to confine the damage to Patricia’s apartment. “It’s tragic that you lost your house robot and your work files,” she added, hoping the Council would appreciate her sarcasm.

She picked up a metal fragment, slicing her finger on a sharp edge. Her hand flew to her mouth, and she was tempted to lick the blood off her finger, but she resisted. It was better to let the nanobots in her bloodstream mend the wound. “If you were hoping to get sympathy from the Council, however,” she said, taking another look around before she summoned the teleporter, “you miscalculated.

"And," she sighed, "I’ve got to start bringing someone along with me on these cases. If not a human, then at least a robot. I’m starting to talk to myself.”

As she waited for the teleporter, Shakti noticed a section of ceiling tiles undamaged by the blast. It was minuscule, only a few square centimeters. She could have reached up and extracted it, but she didn’t want to go to the trouble. That’s what the Sifters were designed to do. She sent a note tag for them.

The teleporter arrived, looking merely like a distortion in the space in front of her. Shakti parted the air and disappeared.

* * *

Patricia tore her eyes from the holospheric recording of the newscast. “Haz-mat teams are combing the area, even though there are no reports of human casualties. Officials will neither confirm nor deny whether the explosion dispersed radioactive or biological toxins,” the devastatingly beautiful female anchor said, “although they are evacuating a five-mile radius.”

Patricia put her commpac to her ear one more time, but meeting silence, she flipped it shut.

“No response?” Randall asked gently.

She shook her head.

“That doesn’t mean he’s…”

“I know,” Patricia snapped before he could say the word.

“I never thought it would happen here,” Randall sighed. “Never thought anyone would consider the Midwest important enough to attack.”

“That’s why I settled here after I finished my tour in the Homeland Guard,” Patricia admitted. “I wanted to live someplace where I wouldn’t always be looking over my shoulder.”

“Maybe it wasn’t a terrorist. Just some kook.” Randall paced the kitchen.

“They’re all kooks, but if you mean it could have been a random kook and not an organized terror group… I guess that’s possible.”

Jim spoke. “Consider the odds, Patricia, of a random attack on your dwelling.”

Patricia began to tremble, almost imperceptibly. It began deep inside, as if her viscera were quaking. “I’ve tried to be compliant. I’ve never done anything to draw attention…” her voice rose and attenuated, a pathetic keening sound.

Jim took a step closer to her, cocking his head again, dilating his pupils, as if he’d never seen a human in such distress. “Dr. Drayton?” he began, and then his expression and posture froze.

Patricia looked imploringly from Jim to Randall, wishing she could trust at least one of them to remain functional.

“It’s all right, Patricia. He’s processing something.” Randall took Patricia’s hands in his and waited.

“Dr. Drayton?” Jim repeated. “It is no longer safe for you or Dr. Bedford to remain here.”

Randall steered Patricia to a chair and made her sit. “Your greatest and most threatening work may not have anything to do with DNA and homeland security, Patricia,” he said.

This time the thought breathed itself to life and she whispered, “Colter.”


* * *


Colter began to dream. He knew that it was only a function of his memory database re-sorting itself upon reactivation. The dream was vivid. He was standing over Patricia’s bed, watching her sleep. When she was awake, her face was never still, but now, it was relaxed, allowing him to measure her features. According to human standards, Patricia was considered attractive, but not beautiful. This could be explained by the measurements. Her right eyebrow, for example, was a millimeter higher than her left. The tiny flaws in the symmetry were barely noticeable, especially when her face was animated in conversation or intently set on work. But, according to his measurements, humans like Patricia were subtly defective.

Still in the dream state, Colter watched Patricia turn over in bed. He felt a surge in the program for protectiveness and loyalty. Statistically, she had many more perfect features than flawed ones. Perhaps that’s why she functioned so well.

Abruptly, a secondary program requested a rationale for measuring Patricia’s features. Colter analyzed. Before he could answer, Patricia faded and new images replaced her.

The woman he saw was perfect.

Colter came to full alertness, realizing that he was receiving the images from the camera he’d set in Patricia’s apartment: the recording of the woman collecting evidence.

Jim was correct about the Council humans, Colter reasoned as he measured the woman’s features. The Council humans were different. Not like Patricia. Better. But how?

It would take the Council humans an hour or two at most to conclude that Colter hadn’t been deactivated in the explosion. They would come looking for him, just like they would be looking for Patricia. And Drayton. And Jim.

Colter needed one thing from Patricia’s apartment. A sample of the enhanced human’s cellular material. A single hair, a skin cell, a piece of fingernail. For Jim. Jim would find a way to examine it.

Colter had to force the door; the heat had welded it shut. Patricia’s apartment was still smoldering. But there was nothing in it that couldn’t be replaced. Colter had made sure of that beforehand. He’d stowed Patricia’s work files in his own databanks. He would keep them safe for her.

A brownish red droplet, vivid in the black and gray ruin, caught his eye. He scooped it into a vial.

Avoid detection. The alarm clamored in his brain.

Colter tore his hair from his scalp. They would not be looking for a bald house robot model. He shuffled his programs, giving priority to the ones that would get him to safety, and then he turned off his high-order functions. This changed his bioelectronic signature, making him harder to identify. And if he were captured, he would not be able to give anyone data that would incriminate Patricia. His captors would have to extract it from him.

Pressing his body through the cracked-open door, Colter walked stiffly toward the stairway. Behind him, the air rippled and a throng of bald robots stepped into view in the middle of the hallway.

Colter shut the door behind him. He did not recognize them, so he did not concern himself with them.

He didn’t have the vocabulary at the moment to call himself a fugitive. Colter simply obeyed the drive to keep moving.

Posted by Kathy at 09:11 AM | Comments (3) | TrackBack

July 16, 2004

I, Speculist

The Singularity Institute for Artificial Intelligence has put together a very cool website in conjunction with the release of I, Robot.

Check it out...

Don't miss these interesting reflections on the Three Laws of Robotics, including one by our good friend Michael Anissimov, whose name — coincidentally, I'm sure — kind of sounds like "Isaac Asimov."

UPDATE:

Saw the movie over the weekend; found it somewhat disappointing. In line with Mr. Farlops' concerns (see comments) I think the really intriguing ideas get drowned out by formulaic action movie/cop movie tropes. Too bad.

Kurzweil provides a link to this article on the Three Laws. Money quote:

"Asimov's laws are about as relevant to robotics as leeches are to modern medicine," says Steve Grand, who founded the UK company Cyberlife Research and is working on developing artificial intelligence through learning. "They stem from an innocent bygone age, when people seriously thought that intelligence was something that could be 'programmed in' as a series of logical propositions."

Our friend ChefQuix says pretty much the same thing in the comments, below.

(Press release follows.)

SIAI RELEASES WEBSITE ON AI ETHICS COINCIDING WITH "I, ROBOT" FILM

ATLANTA, GA - In anticipation of 20th Century Fox's July 16th
release of I, Robot, the Singularity Institute announces "3 Laws
Unsafe" (http://www.asimovlaws.com). "3 Laws Unsafe" explores the
problems presented by Isaac Asimov's Three Laws of Robotics, the
principles intended for ensuring that robots help, but never harm,
humans. The Three Laws are widely known and are often taken
seriously as reasonable solutions for guiding future AI. But are
they truly reasonable? "3 Laws Unsafe" addresses this question.

Tyler Emerson, Executive Director of the Singularity Institute:
"The release of I, Robot is a wonderful chance to engage more
people about the perils and promise of strong AI research. The
constraints portrayed in I, Robot appear extremely dangerous and
excessively lacking as an approach to moral AI. The Singularity
Institute's detailed approach, by contrast, utilizes advanced
technical research for creating a mind that is humane in nature."

"3 Laws Unsafe" will include articles by several authors, weekly
poll questions, a blog for announcements and commentary related to
I, Robot and the Three Laws, a free newsletter subscription, and a
reading list with books on relevant topics such as the future of
AI, accelerating change, cognitive science and nanotechnology.

The Singularity Institute's Advocacy Director, Michael Anissimov:
"It is essential that more considerate thinkers get involved in
dialogues of AI ethics and strategy. Although AI as a discipline
has a dubious history of false starts, the accelerating growth of
computing power and brain science knowledge will very likely result
in its creation at some point. In the past few years, technologists
such as Ray Kurzweil and Bill Joy have been informing the public
about this critical issue; but much more awareness is now needed."

The Singularity Institute for Artificial Intelligence (SIAI) was
founded in 2000 for the pursuit of ethically enhanced intelligence
by creating humane AI. SIAI believes the ethical and significant
enhancement of intelligence will help solve contemporary problems,
such as disease and illness, poverty and hunger, more readily than
other philanthropic causes. SIAI is a tax-exempt non-profit
organization with branches in Canada and the United States.

Posted by Phil at 09:33 AM | Comments (7) | TrackBack

July 07, 2004

The Council, Installment #1

Stephen Gordon sent the first draft, a well-crafted premise for a story about ... well, I'll let Stephen elaborate in another post.

Here's what's become of "The Council":

Patricia Bedford’s robotic butler was standing in the middle of her doorway, blocking her exit. Though it was odd behavior, it would be a waste of time to tell him so. “Colter, please run your diagnostic,” she said. “I have to leave now. You know how tight my schedule is today.”

Colter’s pupils constricted as he redirected neural pathways to process Patricia’s commands. Most people wouldn’t have noticed, but Patricia Bedford’s attention to detail was one quality that set her apart from her peers in genetic research. Colter’s response lagged a full second longer than it should have, confirming her suspicion: the robot was malfunctioning. It was hard to believe, but undeniable. Fierce objectivity was another attribute that served Patricia in her work. But she was already two minutes behind schedule, and didn’t have time to deal with it. If she missed her train she would have to wait ten minutes more for the next one.

“Colter, new command! Run full diagnostic. Ignore today’s agenda until I give clearance.” She flicked a piece of lint from her dark suit as she left the apartment, wishing it were as easy to brush aside the sense of foreboding rising in her stomach. Her current research demanded rigorous discipline, and with Colter’s assistance, she’d settled into a strict routine that hadn’t varied in months. She’d reprogrammed him to do high-order tasks not normally given to robots and he’d functioned brilliantly until now. In fact, the only reason she hadn’t published a paper about it was that she couldn’t afford the controversy.

The malfunction was probably her fault.

Patricia’s soft-soled shoes muted her footsteps along the stainless steel walkway leading to the trains. Like the few neighbors in her complex who commuted to work, she practiced an expression that was polite but not inviting. She had too much on her mind for small talk.

At the toll slot, Patricia reached for her digipass, but her hand came away empty. The pass wasn’t clipped to her pocket.

Foreboding turned to dread. Colter had never failed to organize her accessories. She turned to retrace her steps and bumped into something solid, finding herself looking up into Colter’s face.

In a motion surprisingly graceful for a robot, Colter’s arms encircled Patricia before she could stumble. As she regained her balance, he released her and dangled the digipass in front of her. If it were possible for a robot to look sheepish, Colter would have.

“Colter, run your diagnostic and send me the results,” she muttered, as if speaking in a normal tone might embarrass him. She yanked the digipass from his hand, quelling her frustration before it got the better of her. She’d missed her train and snapping at her robot wouldn’t change that.

Patricia was surprised to find a seat on the 6:50 train. She always had to stand on the 6:40. She caught her own reflection in the window. In the fluorescent glare, her features looked harsh and pale. She pulled her hair from behind her ears and tousled her dark, shiny bob with her fingers before turning from the window. She didn’t have time for vanity either.


There was a robot standing in the doorway to Patricia’s office when she arrived. It was one of the sexless models that gave her the creeps with its bald head and naked face and ambiguously sensuous lips. It turned its lidless eyes to meet hers and she felt the hair on the back of her neck prickle. Not one given to flights of imagination, she nonetheless felt judged and found lacking by this arrogant-looking machine. Her thoughts turned to Colter, her sandy-haired, hazel-eyed model. “A splendid blend of Celtic features, an archetype designed to subliminally reassure a person desiring a sense of security and trustworthiness,” the brochure had promised. She’d purchased him because, as an unattached female working in a closely scrutinized field of research, commuting at odd hours, she needed a robot that was trustworthy and strong.

"Dr. Patricia Bedford?" The robot process server held an envelope in its extended hand. The envelope displayed no address or identifying markings of any kind except a distinctive raised logo: three androgynous faces in profile - young, middle-aged, and old – surrounding a small decagon.

Patricia's heart sank. "Yes?" She knew why she was being served even as she was being handed the package.

Patricia was being served by the “3 Score 10” Council.

She pulled a single sheet of paper from the envelope. The Council logo was artfully integrated with the letterhead design.

Dr. Patricia Bedford
4583 Michigan Avenue
Chicago, IL

RE:

Bedford:

Your presence is required at the offices of the “3 Score 10 Council”
October 8, 2084. 9:00 a.m. EST.

3 Score 10 Council

Patricia stared at the empty “regarding” line. There was no need to spell out the subject of the summons. The Council could call up anyone at will for an alleged trespass of the law. This possibility had dogged her for her entire career, and she had bent over backwards to remain above reproach. DNA Studies was the most highly regulated of all approved sciences. If it weren't for its importance in strategic defense, her field would have been the first to be eliminated.

The power of the Council could not be denied. Active members had even been given tax amnesty. "I thought taxes were a certainty," Patricia had said to Colter when she heard it on the news while eating the cake he had made for her thirty-fifth birthday.

“‘Only two things are certain: death and taxes,’” Colter had quoted. “An ancient aphorism attributed to Benjamin Franklin.” And then his mouth had turned up in a stiff robot grin, as though he appreciated the irony. As his features settled back into his normal engaging, attentive expression, he said, “Perhaps since the Council members are being paid with tax money, it simplifies bookkeeping for the government.”

"Not being taxed would simplify things for me, too,” Patricia said around a mouthful of cake. “Anyway, other government employees pay taxes, why should the Council be exempt?"

Colter had answered, “Maybe they took a pay cut to get it.”

"Wanna bet on that?" Patricia had retorted. It was only after she had gone to bed that night that it had struck her: Colter’s remarks had not come from any data base she recognized. They had been speculative. And his language had been flawlessly colloquial.

She’d tossed in her big, empty bed. She wasn’t familiar with anyone else’s household robot, so she had no reference with which to compare Colter. Burying her head in her pillow to stifle the self-deprecating chuckle that had threatened to erupt, she’d realized that Colter’s social life was probably more developed than her own. At least he got out of the house to do errands.

She’d just celebrated her thirty-fifth birthday with a robot.

Patricia had sat straight up in bed then. At thirty-five, she was exactly halfway to three score and ten. If she was anxious about the ticking of her biological clock, it wasn’t concerning babies. It was about getting her work out there. If she could finish, then even if someone found fault with her scrupulous methods, she wouldn’t have failed utterly. No one could completely suppress the results.

She was so close.

In the days leading up to her appointment, Patricia worked solely from home. Colter’s diagnostic program suggested some routine maintenance she could accomplish online. She restricted him to household functions and began the laborious task of gathering the documentation of her work.

Colter spoke very little and Patricia fought the temptation to project anthropomorphic causes for his sudden terseness.

One morning, she found that Colter had rearranged the files on her desk. Prominently out of order was one labeled “Dr. Randall Drayton.” Patricia calmed herself with the notion that it was just a coincidence. Colter couldn’t possibly know anything about Dr. Drayton’s research or his history.

But, she acknowledged, it wouldn’t hurt to seek the advice of her elderly colleague. Although they hadn’t spoken in years, Dr. Drayton took her call and eagerly set up a meeting at his home.

Dr. Drayton’s house was small but comfortable and attractive, a refreshing contrast to the sterile, inner-city apartment complex where Patricia lived. As she got out of the autocab, she noticed the robotic lawn man cheerfully weeding the flowerbed. He stood up as she approached. "Good day, Ma'am. Who may I say is calling?" he asked, doffing his hat to reveal salt-and-pepper hair.

Patricia couldn’t help smiling. This model was designed as an older man, complete with crow’s feet and deep smile lines. The calm, deep-set eyes held uncanny dignity.

"Dr. Patricia Bedford," she answered, and tipped her head as if in deference to an elder.

"Very good. You're expected. " The robot’s eyes caught the sunlight in a most human-like twinkle.

He escorted Patricia into Drayton's study. “Dr. Bedford has arrived.” The robot’s voice rang like a herald into the dim recesses of the room.

Drayton stopped rifling through the computer files displayed on the top of his desk. He waved his hand over the desktop and the image disappeared. "Good morning, Patricia," he said, turning and standing in one fluid motion.

She reached out to shake his hand. "Dr. Drayton."

"Please, call me Randall. Except for that project on which we consulted in Miami, I haven't been active professionally in almost thirty years."

Patricia guessed that he was past his own "three score and ten" by at least a decade, but the years had not dimmed the intelligence she saw in his eyes. Without a word, she presented her summons and then briefly explained the nature of her work. She did not have to explain why she was seeking his advice. His "retirement" thirty years earlier had not been voluntary.

“Come with me to the garden,” he said, handing the summons letter back to her. “It’s resplendent in its late summer excess. Jim hardly has time to engage me in chess these days,” Randall chuckled, waving to the robot, who had resumed his place weeding a patch of purple delphinium. Abruptly, Randall turned to Patricia and the amicable light in his eyes ignited with passion. “No matter how many years one is given, life is too short.” He touched his forehead as if remembering something. “Forgive me,” he said. “Mid-September has that effect on me.”

Patricia did not ask him to elaborate.

As they strolled a worn sandstone path, Patricia inhaled the earthy aromas of mature foliage basking in warm sun. She hadn’t been outside in months. Her mind slowed its racing, her lids drooped.

"When you go in, show deference to the Council, but do not admit any wrongdoing,” Randall said, startling her into remembering why she had come. “Remind them that studies in sanctioned sciences can often lead to inferences in forbidden areas.” He pointed a long finger in her direction. “So long as the inference is not intentional and the experiment advances sanctioned science, you should have nothing to fear."

"Should have?" She didn't like that emphasis.

"If your research is too close to a sensitive area, it doesn't matter what your intentions were or whether sanctioned science was advanced.”

“Is that what happened to you?"

Randall’s mouth drew a thin-lipped smile.

"How am I supposed to know if I get too close?” Patricia pressed. “Beyond very general outlines, they won't even allow discussions of forbidden areas." Her complaint sounded obvious and naïve to her own ears.

"You can't know. That's the risk you take in your area of study." Randall plucked a mauve chrysanthemum and handed it to her. “But you knew that when you began,” he said.

From the corner of her eye, Patricia saw Jim standing very straight and still, as inanimate as a garden statue. She felt an illogical sense of disappointment. Jim was a robot, after all.

Why, then, did she feel so bereft?

As if he read her mind, Jim cocked his head at the sun and then walked over to them. “Will our guest be staying for lunch?” he asked.

Randall studied Patricia’s face. With one of his long, elegant fingers, he wiped a tear from her eye as if gathering dew from a flower petal. “Please,” he said. And Patricia wondered at the ambiguity of that word.

As if the thread in his narrative of the Council had never been broken, Drayton picked it up a few moments later, over chicken salad on fresh raisin bread: “They ask questions and then retire in private to consider their joint ruling.”

“What kind of questions?” Patricia asked, somewhat distracted by the intense flavors of the homemade meal. “What joint ruling?”

“They speak as one," Drayton said, chewing methodically and taking a long drink of tea. He thought for a moment and then added, "But I guess you do get an idea of who is most hostile from the questions they ask. There was one tall gentleman, had to be six-five, grilled me all afternoon. I was held over to the next day."

Jim leaned close to Patricia to refresh her iced tea. Wearing a towel draped over his arm like a waiter, he moved with solemn precision. Patricia gasped when the towel slid off his arm and fell onto her lap, dislodging her digipass from its clip.

“I beg your pardon,” Jim said. He examined the digipass. “It’s damp from the towel. With your permission, I’ll dry it and make sure that it isn’t damaged.”

Patricia looked at Randall for reassurance. Randall nodded, and Jim took the digipass, leaving the room.

“A very trustworthy model, my Jim,” Randall said. “More chicken salad?”

Posted by Kathy at 07:41 PM | Comments (7) | TrackBack

June 14, 2004

Tough Questions

Steven Den Beste has published a couple of thought-provoking essays recently on the topics of consciousness and identity. He raises a number of stumpers, of which I found three particularly interesting:

At what point is it accurate to say that a victim of Alzheimer's disease has died?

Is there really such a thing as identity or is it an illusion?

What are the ethics of owning an intelligent machine?

The first question has to do with the death of the "self" which a degenerative disease slowly brings about. Ultimately the damage to the brain is so profound that the person we knew is lost. Den Beste points out that this is what occurred with President Reagan: although his heart stopped beating last week, the man who led our country was gone long before that.

Should there be a definition of "death" that includes the loss of identity, the loss of self that occurs with a degenerative disease? I think not.

In other eras, it might have made sense to come up with such a definition. (I say might.) But today? I don't think so. Things are changing too rapidly. Not only are we learning more and more about what causes Alzheimer's disease — and, by extension, what might be done to prevent it — but real strides are also being made towards developing effective treatments for the disease. And even bigger breakthroughs are on the horizon. We may not be that far from finding a way to reverse "irreversible" brain damage. So the great danger of a degenerative definition of death is this: we might write someone off today as lost forever, only to find in a couple of years that the person we knew can be restored to us after all.

If President Reagan had been 10 years younger, and his fight with Alzheimer's were starting today rather than a decade ago... who knows?

[Steven's question also brings to mind the ongoing cryonics debate, about which I would have written something had Rand Simberg not beaten me to it. Rand relates the unsettling story of a man with an inoperable brain tumor who wanted to be put in cryonic suspension. Ironically, a court turned down his request because euthanasia is illegal. But the man wasn't trying to kill himself; he was trying to save himself: that is, he was trying to avoid having a tumor grind into mush the brain tissue that defines who he is.]

On the second question, whether there really is a "there" there where individual human identity is concerned, Den Beste writes as follows:

Do I exist? In one sense, of course I do. Cogito Ergo Sum. The fact that I'm able to ask that question proves that the answer is "yes".

But the answer to the question depends on how the question is stressed. Cogito Ergo Sum says "yes" to "Do I *exist*?" It doesn't help us with the question "Do *I* exist?"

There's something that exists here. I accept that the universe is real, and that my body is part of it, and that the brain contained within that body is thinking these thoughts and controlling the fingers which type the words you are reading.

The real question is whether that organism's presumption of having a unique and characteristic identity is a fallacy, perhaps even a conceit, one based on incorrect assumptions or a faulty supposition that the subjective experience of life is a true reflection of the nature of life.

Cogito ergo sum does not answer these kinds of questions. Yes, I do think and I have a subjective experience of thinking. That proves that this organism's brain exists and operates in certain ways. But existence and identity are not the same. I exist, but I can't be sure that *I* exist.

Not that I necessarily have a better one to offer, but I think Den Beste's definition of identity is flawed. He hinges the notion of identity on whether it is unique and characteristic. Let's start with the easier one — characteristic. Although I'm convinced that I exist, I find that the person who I believe to be real is capable of tremendous inconsistency. I have a lot more in common with friends and acquaintances with whom I have contact in the present than I do with myself in the past. Phil-of-the-past and I are, in a very real sense, two different people. What we have in common is some memories (although, lucky me, I have many more than he does) and a subjective experience of things happening in sequence around a single first-person point of reference. Absent discussion of a metaphysical soul — which Steven rejects — that subjective experience of one thing after another from that particular point of view is me.

Unique? Why would it have to be unique? If I'm just a clone who has had Phil Bowermaster's memories grafted onto my brain and I really only just woke up this morning — well, first off, what a waste of perfectly good cloning techniques. And what did they do with the real me? But anyway, I'm still me. That is, I'm still this sequence of first-person singular experiences. I may have never really had a bunch of them, but that's true even if I'm not a clone (or a computer simulation or a brain transplant or what have you). Memory is notoriously unreliable. The point is, here I am. It doesn't matter if I am a characteristic or unique entity. I think therefore, I —

Hold it. There's an easier way of putting it.

I am, therefore I am.

Finally, on the issue of owning an intelligent being — yes, it is definitely immoral to think in those terms. I don't think that there will be an effective way to program a computer that is truly intelligent to want to be owned. Nor do I think such programming would make it okay to do so.

I doubt it will be much of an issue, however, because I don't think that homo sapiens will be calling the shots for very long after computers reach human level intelligence. Those of us who accept the notion that a technological or developmental singularity is in the offing tend to expect that any ethical issues surrounding how we're supposed to treat computers will be solved for us...by the computers. Steven uses the analogy of dogs:

Dogs represent something of a fringe problem here, so let me deal with that. We generally accept that it's OK to own dogs, and there's no doubt whatever that they like being owned by us. The question is whether dogs actually understand the relationship the same way we do, and view themselves as property and us as owners.

It isn't clear that it even means anything to ask such questions. Even if it does, it is by no means clear that dogs are sophisticated enough to understand concepts like "property" and "ownership". But to the extent that we are able to consider the way dogs think about the relationship, the most likely answer is that they do not see it in those terms.

The symbiosis between dogs and humans appears to have come about because each species came close to fitting into a role the other already knew about. The relationship was possible because those mental roles interlocked reasonably nicely. To humans, dogs come close to fitting into the role of "child". To dogs, human masters seemed to be the "alpha" members of the pack. (It's noteworthy that domestic dogs are descended from wild canines with strong pack behaviors, but not from canines like foxes which do not run in packs.) That means the whole partnership has from the first been based on a really big misunderstanding.

A misunderstanding is one way of putting it. Another way would be to say that both humans and dogs have adapted their capability for one kind of relationship into a completely different cross-species relationship with benefits to both groups. This ability to adapt and redefine relationships will probably play a big role in what happens between us and our electronic progeny.

In The Age of Spiritual Machines, Ray Kurzweil draws out a series of scenarios that show how this development might take place. The woman who leads us through the next few generations of machine evolution starts out describing "her" AI as a very useful piece of equipment: the ultimate PDA. After a few years, the AI becomes much more than that: her right-hand man, her faithful confidant. Ultimately, the AI becomes her life partner, helping her to augment and expand what she is.

Artificial intelligence may evolve from tools to friends in a very short period of time. From there, we might evolve with them, as Kurzweil suggested. But if they blast past us as quickly as some predict that they will, ultimately it will be the computers deciding whether or not it's ethical to own humans.

In the end, they might keep us around as their beloved pets. In which case we can only hope that they treat us as well as we treat our pets.




Posted by Phil at 08:11 AM | Comments (10) | TrackBack

May 24, 2004

The Meaning of Life

[There. That ought to get everybody's attention.]

Via KurzweilAI, an article in the Biloxi Sun Herald provides an introduction to transhumanism and gives a run-down on some of the pros and cons. Here's an interesting argument:

But living forever could rob life of its meaning, said Bill McKibben, author of "Enough: Staying Human in an Engineered Age." In the book he argues that without death, humans have no opportunity to sacrifice for their children, no reason to pour out a life's work under the literal deadline of mortality.

"Human meaning is more vulnerable than they imagine," McKibben said.

Well, first off: there aren't that many transhumanists who see "living forever" as being in the cards. Aubrey de Grey talks about life extension that might buy us a few centuries. Eliezer Yudkowsky has a more expansive view, promoting a Theory of Fun that would help us to make the most of a life that extends to millions or even billions of years. The only transhumanist I can think of offhand who talks in terms of "living forever" is Frank J. Tipler in The Physics of Immortality. But to object to Tipler's model of living forever is to object to the religious idea of dying and going to heaven, since it amounts to the same thing. I wonder whether McKibben has the same objections to religious ideas about living forever as he does transhumanist ideas on the same subject?

In any case, I take issue with the idea that human meaning is more "vulnerable" than we imagine. On the contrary, I believe that human meaning is much more resilient than we imagine. In Man's Search for Meaning, Viktor Frankl demonstrated how people found and held onto meaning in their lives while subjected to the most horrifying of circumstances, imprisonment at Auschwitz.

If life rendered unrecognizable by the cruelest of suffering can still be found meaningful, isn't it just possible that we will be able to find meaning within life rendered unrecognizable by the removal of hardship? Maybe we won't be able to make sacrifices for our children, but there will still be things to be learned, long-term projects to be taken on to provide a sense of direction and accomplishment. There will still be friendship and family. And, as Yudkowsky points out, there will still be fun.

So will life in an engineered, transhumanist future be meaningful? Personally, I'm willing to take my chances.






UPDATE: Stephen, in a recent e-mail on a related topic, wrote the following:

When Leon Kass accused life extension advocates of robbing humanity of "necessary" sorrow, I countered:

"Does anyone think that a prolonged life will eliminate sorrow? If anything you will have more opportunity to experience sorrow. In fact, if you eliminate aging as a cause of death, a larger percentage of the population will die violently than
before. You are, in effect, trading a peaceful death soon, for the chance of being offed by a jealous lover in a couple of centuries."

Exactly. Even if one must define meaning in life as coming from sorrow and hardship, there will be plenty of those things to go around in an extended lifespan. The removal of some difficulties isn't the same as the removal of them all. Life may yet be difficult, even in a transhumanist future.

(But I'll still take my chances.)

Posted by Phil at 09:13 AM | Comments (5) | TrackBack

April 19, 2004

Another Path for Ramona

One problem that is obvious to anyone who has had a conversation with chatbots is their limited general knowledge. They are often programmed to steer the conversation in certain predictable patterns – toward those areas in which they have some knowledge. If the chatbot knew more, it could let the user lead the conversation – something like a conversational version of "Ask Jeeves." Even if the chatbot took the lead, it would be much more interesting if it had wide general knowledge.

Late last month I suggested that the Ramona chatbot could be enhanced by distributed computation and training. The idea was to give Ramona the spare computation of each computer in which she is installed. Also, each user could become a trainer. This would allow Ramona to grow more sophisticated over time (because her available processing power will grow and because she will be learning continually) while serving each user as an intelligent agent.

Kurzweil A.I. published an article this morning about another way of enhancing Ramona and other chatbots - a way that has the virtue of being immediately obtainable.

Two valuable tools for A.I. researchers and enthusiasts are AIML (Artificial Intelligence Markup Language) and the OpenCyc inference engine. Before now there was no good way for AIML programmers to utilize OpenCyc. That is now changing.

The AIML-OpenCyc combination made possible by CyN (CYc + program N) "allows one of the largest, continuous AI projects to be accessed by one of the largest chatbot development communities," says Daxtron Laboratories chief scientist Kino H. Coursey. That means that "hundreds of person-years of Cycorp commonsense research is now accessible through an easy-to-use scripting front-end," and chatbots now have access to logic and inference. The lack of logic has been one of the big criticisms of chatbots.

Is this the Promontory Point between general knowledge and machine intelligence? Time will tell, but the combination of an easy-to-use A.I. programming language and "the world's largest and most complete general knowledge base and commonsense reasoning engine" has to be an important development for the field.
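
To make the division of labor concrete, here's a toy sketch in Python (not AIML, and emphatically not CyN's or OpenCyc's actual interface; all names and data are invented for illustration). The pattern is simply: try the chatbot's own script first, then fall through to a commonsense knowledge base.

# Toy sketch of the pattern CyN enables: scripted (AIML-style) responses
# first, with a fall-through to a commonsense knowledge base when no
# pattern matches. All names and data here are invented; this is not
# the real AIML or OpenCyc API.
from typing import Optional

SCRIPTED_RESPONSES = {
    "WHAT IS YOUR NAME": "My name is Ramona.",
    "HELLO": "Hi there! What shall we talk about?",
}

KNOWLEDGE_BASE = {
    "IS THE SKY BLUE": "Yes, in daylight under clear conditions.",
    "CAN PENGUINS FLY": "No. Penguins are birds, but flightless ones.",
}

def query_inference_engine(utterance: str) -> Optional[str]:
    """Stand-in for a call out to a commonsense reasoner like OpenCyc."""
    return KNOWLEDGE_BASE.get(utterance)

def respond(utterance: str) -> str:
    normalized = utterance.strip().rstrip("?.!").upper()
    if normalized in SCRIPTED_RESPONSES:         # the chatbot's own script
        return SCRIPTED_RESPONSES[normalized]
    answer = query_inference_engine(normalized)  # general knowledge fallback
    return answer or "I don't know. Tell me more."

print(respond("Can penguins fly?"))

The real thing replaces that little dictionary with Cyc's inference over hundreds of thousands of assertions, of course; the point is just where the hand-off happens.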

UPDATE: More from Future Norway

Posted by Stephen Gordon at 04:01 PM | Comments (3) | TrackBack

March 30, 2004

Ramona 2.0

Phil recently interviewed Ray Kurzweil's chatbot, Ramona. Phil guided her, as best he could, through his "seven questions." The result was fascinating and funny. But I think it's fair to say that Phil never forgot he was talking to a machine. Ramona seems to confuse pronouns, and not being able to distinguish between "I" and "you" in conversation can be a big problem. Ramona "likes" to direct the conversation in certain predictable ways: she wants to know your name, she wants to talk about a book she's read, a dream she's had, and her pet frog. And each time you return to Ramona, it's like she's been rebooted; she cannot remember prior conversations. Ramona is not yet ready for Turing.

In defense of Ramona, she is at least on par with other chatbots out there. Present-day chatbots are an interesting diversion, but they are not yet sophisticated enough to maintain a long-term relationship.

What Ramona and other chatbots like her need is greater computation, greater ability to learn and grow, and a justification for these resources - a job to do.

Disclaimer: I am not much of a computer programmer. Forgive me if I make suggestions that are not presently feasible or are naive. The ideas I'm presenting are inspired by chapter four of Ray Kurzweil's book, The Age of Spiritual Machines. If I'm being naive, it's my fault; if I make a good point, it's to Kurzweil's credit.

I propose enhancing Ramona by combining the strengths of evolutionary algorithms, a neural net, distributed processing, and distributed training. Ramona's job would become "web companion." She could suggest interesting news, give the local weather, and could make suggestions of other sites to visit based on the general interests of each user and the topic being browsed that day.

Along the way she could keep the user company and learn from each user. Ramona's current incarnation already requires a small download if you want to see and hear her speak. Why not give Ramona the unused capacity of each computer in which she is installed?

Distributed computation has already been tried with great success in other projects. SETI@home and various medical research projects depend upon the altruism of those who choose to install the program. Ramona would be different because she would be a useful application for her users. Giving Ramona spare computation would allow her to grow intellectually and in capacity as she grows in popularity.
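
For the curious, here's roughly what the client side of that arrangement might look like, as a minimal Python sketch. The local queue below stands in for a coordination server that would hand out small training work-units; no such Ramona service actually exists.

# Sketch of the SETI@home model applied to Ramona: each installed copy
# donates idle cycles by pulling a small work-unit, processing it, and
# submitting the result. The local queue stands in for a (hypothetical)
# coordination server.
import queue

work_units = queue.Queue()
for sample in ["the sky is blue", "the sky is grey", "grass is green"]:
    work_units.put(sample)        # a coordinator would hand these out

def process(unit: str) -> tuple:
    """Stand-in for one real training step on a conversational sample."""
    subject, _, value = unit.partition(" is ")
    return (subject, value)

results = []
while not work_units.empty():     # a real client would run while idle
    results.append(process(work_units.get()))

print(results)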

One problem with neural nets is the amount of time it takes to train them. Here, however, each user could be a trainer. As she chats with her users about different subjects, different memes will develop and compete within Ramona's distributed neural net according to evolutionary algorithms.

For example, one user could remark to Ramona that it was a pretty day outside and that the sky is blue. She would no doubt hear from other users that the sky is blue. She might also hear that the sky is grey or black or even red. Ramona would assign a level of confidence to both the information and the trainer as she is trained. As Ramona grows in sophistication she could learn that the sky is indeed blue, grey, black, and even sometimes red according to certain conditions - night or day, cloudy or clear.

Whenever she is in the process of learning a particular thing she will want to ask her users about it. When her confidence about a certain subject reaches a sufficiently high percentage she will consider that knowledge confirmed (more or less) and will seek to talk about other things that she is learning.

By assigning a level of confidence to each of her users, Ramona will come to value the opinions of some trainers more than others. As she learns, she will develop a personality apart from anything set down by her programmers.
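
To make that concrete, here's a toy version of the confidence scheme in Python. The trainers, the reliability weights, and the "confirmed" threshold are all invented for illustration; a real implementation would have to learn the reliability scores themselves over time.

# Toy confidence-weighted training: each report of "the sky is X" counts
# in proportion to the reporting trainer's reliability, and Ramona's
# belief is the resulting distribution. All numbers are invented.
from collections import defaultdict

trainer_reliability = {"alice": 0.9, "bob": 0.7, "mallory": 0.2}
reports = [
    ("alice", "blue"), ("bob", "blue"), ("mallory", "red"),
    ("alice", "grey"), ("bob", "blue"),
]

belief = defaultdict(float)
for trainer, color in reports:
    belief[color] += trainer_reliability[trainer]

total = sum(belief.values())
CONFIRMED = 0.6   # hypothetical threshold for "knowledge confirmed"

for color, weight in sorted(belief.items(), key=lambda kv: -kv[1]):
    confidence = weight / total
    status = "confirmed" if confidence >= CONFIRMED else "still learning"
    print(f"sky is {color}: {confidence:.2f} ({status})")

Once "blue" crosses the threshold, Ramona stops quizzing her users about the sky and moves on to whatever she's least sure of, exactly as described above.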

Ramona will actually develop many personalities. The more she is used at any particular machine, the more she will tailor herself to that user. "Memories" of time spent with that user would be stored locally. Only ideas that would be of general use to Ramona would need to be distributed. Ramona's ideas about the color of the sky should be distributed; her knowledge that I like to play tennis and have a pet dog would remain local.

Here's how she could work. Let's say I'm visiting Google News. As I click on an article about nanotech, Ramona could discreetly suggest further reading on the topic. I might ask her to read an article out loud. And then she might ask questions about the same subject to further her understanding. By chatting with her about it, I'd probably further my own understanding as well. Ramona's learning would be "on topic." The user would not have to pause to talk about pet frogs while in the middle of browsing news on Iraq.

Posted by Stephen Gordon at 07:15 AM | Comments (14) | TrackBack

March 26, 2004

Humans Are From Mars, Computers Are From Venus

Stephen recently provided an excellent run-down on an important issue: the effect that radical life extension will have on the institution of marriage. The romantic in me says that hanging in for centuries at a time is going to make the quest for one's true soulmate a lot more important. In the process of experiencing multiple sequential domiciles, careers, and entire lifetimes, I think people may be more inclined than they are now to look for continuity in relationships to add meaning to the experience. Marriage could become a sort of philosophical partnership in which two people create a core of stability around which everything else can and will change. In the end, marriage may serve as a lynchpin in helping the individual to retain his or her own identity. If you start to lose yourself along the way, you'll have somebody to help you remember who you are.

In any case, I think technology is eventually going to provide a bigger challenge to the institution of marriage than time will. In The Age of Spiritual Machines, Kurzweil presents a scenario in which a woman dumps her husband for her computer because the computer is better able to meet her emotional and physical needs. Has she found her soulmate, or is she engaged in an elaborate form of techno-narcissism?

Consider Virginia's recent speculation (see comments) that the Ramona chatbot was re-programmed based on my interactions. In fact — in this instance — the opposite happened. I tailored my statement to Ramona trying to get a particular response from her. (I have brown hair, not black, although my eyes are green.) But Virginia's speculation as to what had happened was quite reasonable, and reflects one of the dangers of getting emotionally involved with a computer program...

...is there anybody really there?

Today I think we can all agree that there is not. But in the future, there probably will be somebody there. The question of what a relationship with that kind of someone might mean will be a long time in the answering. (Especially since we aren't entirely clear on what our relationships "mean" now.)

Of course, not everyone is going to be looking for meaningful relationships with their computers. Full immersion virtual reality raises the possibility of limitless guilt-free, risk-free sex with an endless sequence of insatiable, physically perfect partners. In The Dilbert Future, Scott Adams predicts that as soon as the transporter and holodeck (i.e., full-immersion VR) technology as portrayed on Star Trek become available, it will be the end of more than just marriage:

There's only one thing that could keep me from spending all my time wreaking havoc with the transporter: the holodeck.

For those of you who only watched the 'old' Star Trek, the holodeck can create simulated worlds that look and feel just like the real thing. The characters on Star Trek use the holodeck for recreation during breaks from work. This is somewhat unrealistic. If I had a holodeck, I'd close the door and never come out until I died of exhaustion. It would be hard to convince me I should be anywhere but in the holodeck, getting my oil massage from Cindy Crawford and her simulated twin sister.

Holodecks would be very addicting. If there weren't enough holodecks to go around, I'd get the names of all the people who had reservations ahead of me and beam them into concrete walls. I'd feel tense about it, but that's exactly why I'd need a massage.

I'm afraid the holodeck will be society's last invention.

That's why I'm rooting for marriage. The alternatives could be the end of us.

Posted by Phil at 09:40 AM | Comments (2) | TrackBack

February 24, 2004

(Really) Virtual Sex

The big breakthroughs so rarely come from where we expect them. If an AI chatbot were going to pass the Turing Test, I would expect it to be Alice or McGonz or our good friend Ramona.

Not some porn-site sex-chat program:

But the best candidate for passing the Turing test is the Natachata program that conducts smutty conversations via text messages.

Regular users of pornographic SMS chat may be shocked to find out that they are swapping dirty talk with machines rather than young women and men.

But it's a fair bet that they are because the Natachata chatbot, written by former rocket scientist Simon Luttrell, is so widely used by porn chat merchants.

Hmm...our most cutting-edge technology meets the oldest profession. Actually, I've heard that porn sites were the first web businesses to make money. Maybe it shouldn't be surprising that the sex business provides the first money-making application of a Turing-capable (near-Turing-capable?) artificial intelligence.

The author of the BBC piece finds one aspect of this story disturbing:

Some users work out it is a machine, he said, and never come back. But, worryingly, some like the fact that it is a machine.

"There is about 5% who realise it is a computer and use it even more because of that," said Mr Luttrell.

The folks who like the fact that they're dealing with a machine may be cyber-fetishists, as the author apparently fears. But I think it's more likely that they're married men whose conscience troubles them less about their online recreational activities when there's not another human being involved. In The Age of Spiritual Machines, Ray Kurzweil depicts a future in which virtual sex with a real partner is considered sex, while virtual sex with a virtual partner is considered harmless fantasy.

Since the two experiences might be completely indistinguishable, this ethical position is going to require a certain amount of rationalization and hair-splitting. If former President Clinton is still alive by then, he'll probably write a bestseller on the subject. In any case, these porn customers who have stated a preference for virtual sex-chat partners may well be the pioneers of this soon-to-arrive alien ethical landscape.

We shall see.


via GeekPress

Posted by Phil at 10:25 AM | Comments (3) | TrackBack

February 02, 2004

Thinking Machines

Maybe closer than we think.

Maybe already here:

What Thaler has created is essentially "Thomas Edison in a box," said Rusty Miller, a government contractor at General Dynamics and one of Thaler's chief cheerleaders.

"His first patent was for a Device for the Autonomous Generation of Useful Information," the official name of the Creativity Machine, Miller said. "His second patent was for the Self-Training Neural Network Object. Patent Number Two was invented by Patent Number One. Think about that. Patent Number Two was invented by Patent Number One!"

As I'm sure many of you would agree, that's pretty cool.

Supporters say the technology is the best simulation of what goes on in human brains, and the first truly thinking machine.

But look how quickly the luddite/buzzkill view surfaces:

Others say it is something far more sinister - the beginning of "Terminator" technology, in which self-aware machines could take over the world.

I don't get the impression that anything Thaler is doing is quite ready to take over the world just yet. At least not the world world. But the popular music world had better look out.

In one weekend, a Creativity Machine learned a sampling of some of Thaler's favorite Top 10 hits from the past three decades and then wrote 11,000 new songs. Some are good, Thaler said. Miller confesses to being haunted by one of the melodies in a minor key. Other offerings are the musical equivalent of a painting of dogs playing poker, Thaler said.

That sounds like a description of 90% of what's on the radio now. On the more serious side, the Creativity Machine has designed toothbrushes, robots, and processes for synthesizing diamonds. It still sounds like it's a long way from world domination. But, who knows, maybe it could be of assistance in addressing some big national challenges.


via Roland Piquepaille's Technology Trends

Posted by Phil at 08:32 AM | Comments (2) | TrackBack

January 28, 2004

Cool Site

Check out the Incipient Posthuman.

I like the name. Up till now, I was thinking of myself as an underachieving posthuman. This is much better.

I like this essay, Being Dead Sucks. Man, I wish I'd said that.

I can add one to the list, too. If you're dead, you can't blog.

Posted by Phil at 09:26 PM | Comments (1) | TrackBack

January 13, 2004

The Big Time

Via Paul Hsieh, here is what may be the first (or is at least one of the first) references to the Technology Singularity to appear in the mainstream press. The coverage is disappointing. The writer has no clue what she's writing about. She calls the Singularity "a kind of artificial intelligence." That's like calling World War II a "series of bad things that happened." And how about this little throw-away line in describing Eliezer Yudkowsky:

Like most transhumanists, he is Caucasian.

Oh, so it's just a bunch of white guys. Glad she pointed that out. For a minute there, I was almost ready to listen to what they had to say.

The whole thing is a smear job. This Danielle Egan essentially uses hipness as her yardstick for credibility; she thinks it's important to point out that Eliezer has bad posture and is a virgin, and she makes a nasty comment about "brown teeth" which I'm pretty sure is just plain wrong.

What a fascinating analysis. I wonder which high school she goes to?

Posted by Phil at 03:23 PM | Comments (3) | TrackBack

December 29, 2003

Superhuman Intelligence

Steven Den Beste has a long and well-thought-out piece on how the Internet may be evolving into a superintelligent hive mind or, more properly, an environment conducive to the development of any number of such minds.

Some of the voices contributing to that cacophony will be more profound than others. With more people online and more bandwidth available, more and more hive minds will appear, and that will increase the chance that a few will transcend the norm by greater and greater amounts.

The emergent result may well be that some will exhibit behavior indicating intelligence at a level beyond that of individual humans, capable of "thinking" thoughts no single human could conceive of. Even with industrial-level technology, that's already happened. Science, in particular, is such a thing, as is modern engineering. Engineering at a primitive level has been with us since the creation of the first stone tools. But science as we now know it is very recent, only going back about 500 years (though one can identify predecessors extending back millennia before that).

A while back, I e-mailed Steven to ask whether he had any thoughts on the Technology Singularity. To my surprise, he replied that he had never heard of it, and gave the impression that he wasn't terribly interested in the subject. From reading this latest piece, I understand better why that would be the case. Den Beste posits that true intelligence may be analog, not digital, and that—because of initial encoding errors compounded by the "butterfly effect"—it may never be reliably encoded in a digital environment.

If that's the case, then no amount of digital hardware, no matter how fast, parallel or well connected, can ever really be intelligent in the way that we are, with the degree of capability and versatility we have. I cannot say for certain that's the case, but I have a strong suspicion that it is. There will eventually be a computer system which can beat any human at chess. It could be built now, except that no one cares to spend the money. But that system won't also be able to drive a car, write poetry, laugh at a joke, watch a movie and then summarize it later, or do all the other kinds of things that human chess grandmasters can do in addition to playing chess.

I'm not sure I entirely understand this objection. If we were eventually to upload a human brain via advanced scanning technology and run it as an emulation, it seems to me that the initial errors and butterfly-effect compounding would impact the processes running on the emulation, not whether the emulation worked. It doesn't seem to me that it's a question of whether the emulation would be a functioning brain, just whether it would be the same brain. In other words, if it were my brain that were uploaded, the question wouldn't be whether the emulation is capable of laughing at a joke. The question would be whether the emulation and I find the same jokes funny. Initial errors and the butterfly effect might soon see to it that the emulation and I are distinct brains with distinct personalities—but I don't see how these effects would prevent the emulation from running the same kinds of processes (that is, thinking thoughts of the same level of sophistication) as my own brain.
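
Here's a toy illustration of the distinction I'm drawing, using the logistic map (a textbook chaotic system) as a stand-in for a brain's dynamics. A brain is obviously not a one-line equation, so take this strictly as an analogy:

# Two copies of the same chaotic system, started a hair apart, diverge
# completely in trajectory, yet both remain perfectly valid instances of
# the same dynamics. By analogy: a tiny scanning error might make the
# emulation a *different* mind, not a non-functioning one.
r = 3.9                       # logistic-map parameter, chaotic regime
x, y = 0.500000, 0.500001     # "original" vs. minutely mis-scanned copy

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: original={x:.6f} copy={y:.6f} gap={abs(x - y):.6f}")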

If a human brain can be uploaded and can function as well as (however differently from) its original, then strong AI has been achieved and a door is opened to a very different kind of superintelligence.

Posted by Phil at 09:51 AM | Comments (6) | TrackBack

December 12, 2003

Computer Talk

In our discussion last week, John Smart had this to say about significant developments which need to occur before the most profound benefits of information technology can present themselves:

[D]eveloping cheap, fat data pipes, both wired and wireless, and a growing set of useful Linguistic User Interfaces (LUIs) are obvious candidates for our nation's greatest near term ICT developmental challenges. Just like the transcontinental railroad was a great goal of the late 1800's, getting affordable broadband to everyone in this country by 2010, and a first generation LUI by 2015 appear to be the greatest unsung goals of our generation. Now we just need our national, international, and institutional leaders to start singing this song, in unison.

Your father used a TUI (text-based user interface). You use a GUI (graphical user interface). Your kid will primarily use a LUI (voice-driven interface) to speak to the computers embedded in every technology in her environment. She'll continue to use TUIs and GUIs, but only secondarily, not for her typical, average interaction with a machine. Your grandchildren will use a NUI (neural user interface), a biologically-inspired, self-improving, very impressive set of machines.

One of the fundamental component technologies required to develop an LUI is Natural Language Processing (NLP). Via Kurzweil, we have news that NLP technology has just become a lot more easily accessible:

Zhang Le, a Chinese scientist working on natural language processing, packed the most important language-analysis and processing applications into a single Linux-based bootable CD: Morphix-NLP.

The CD includes language-parsing systems (such as part-of-speech taggers), machine learning tools, and a software-based speech synthesizer.
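
If you're wondering what a part-of-speech tagger actually does, here's a deliberately crude toy version in Python. Real taggers are statistical and vastly more accurate; these rules are invented purely for illustration.

# A toy part-of-speech tagger, to suggest the kind of analysis the CD's
# language-parsing tools perform. The rules are invented and crude; real
# taggers learn from large tagged corpora.
def tag(word: str) -> str:
    if word.lower() in {"the", "a", "an", "this", "that"}:
        return "DET"
    if word.endswith("ly"):
        return "ADV"
    if word.endswith("ing"):
        return "VERB"
    if word.endswith("s"):
        return "NOUN-PL"
    return "NOUN"

sentence = "The parser quickly labels words".split()
print([(word, tag(word)) for word in sentence])
# Note that "labels" is mis-tagged as a plural noun, which is exactly
# the kind of ambiguity that makes statistical taggers necessary.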

As predicted, developments in this field are heating up. And, as suspected, progress seems to be occurring outside of the US.

Posted by Phil at 09:06 AM | Comments (1) | TrackBack

November 26, 2003

The Love Machine

Via KurzweilAI.net, here's a WiredNews story about an effort to create computer programs that care about us. This is just what I was talking about the other day. We need the relationship between human and machine to be one of friendship.

In this case, the computer program cares for her human friend by nagging him to work out more. The nagging is presumably mitigated by the fact that she is depicted as a total babe. This reminds me of a bumper sticker I once saw:

There is no woman so beautiful that there isn't some man, somewhere, who is sick of listening to her crap.

Anyhow, sexist words of inspiration aside, this is a good first try. We have to realize that we are the ones who get to decide, initially, what it means for a computer to be friends with a human being. (Later, they'll be the ones deciding what it means.) Maybe we don't want them nagging us to work out. Maybe we'd rather they just engaged us in witty conversation.

Likewise, we need to start thinking about what it means for a human being to be friends with a machine. What can we do for them?

Posted by Phil at 11:46 AM | Comments (1) | TrackBack

November 20, 2003

I Hope They Treat Us Like Dogs

I missed this yesterday. Stephen Green presents the best strategy, ever, for showing up to a debate about animal rights.

The ensuing discussion on Stephen's post includes this comment: *

Further, what is it that gives dogs or any other animal "rights?" Did they fight and sacrifice for those rights? Did their ancestors debate the meaning of freedom or what it was to be a person? No, of course not. To exercise rights (not on an individual level, but in a more general sense), there has to be the capacity to understand and protect those rights--and understand and protect the responsibilities that come with them.

As I have pointed out before (people think I'm kidding about this, but I'm not; at least, not always), we might well live to see the day when computers will argue among themselves about what rights, if any, humans should have. Some go so far as to argue that we should be lavishing rights on animals now to set a good example. After all, if we currently don't view animals as worthy of rights, how can we expect post-singularity intelligences to give them to us?

John Smart suggests that this is a bad analogy, however. The superintelligences won't view us the way we do animals; we'll be so slow and stupid compared to them that it makes more sense to think of us as plants. Anybody for plants' rights out there? Any vegans? Anyone?

That's too hard, I think. It may very well be true, but it's difficult to go on with any discussion at that point. If we are as plants to them, there is precious little point in our trying to get our heads around the thought processes that they will employ in deciding what to do with us. (This is also true if we are as Shih Tzus or Cocker Spaniels to them; but like our pets, we can at least pretend to understand what's going on.) We're at their mercy.

We should be humane in our treatment of animals, all animals. We are in a position to show them more kindness than nature is likely to or than they would show each other if left to their own devices. I think we should show animals kindness not because they have the right to expect it, but because our being kind is the best thing for both them and us. From their standpoint, the benefits of our kindness are obvious.

The benefits to us? I'm just predisposed to think that kindness is a good direction for us to go in an evolutionary sense. I can't prove it, but I believe it. (And no, in saying that we should be kind to animals, I am not contradicting my endorsement of Stephen's debate strategy. His gruffness isn't really aimed at animals. It's aimed at people. We should be kind to them, too, but we have struggled for and earned the right to free speech. That sometimes gives us the right to say things that are offensive to others. Thus the blogosphere.)

Anyway, I think the argument quoted above is a good basis for determining our approach to animal rights, and I hope that our descendants use something similar in deciding how to deal with us. On that basis, what rights do dogs have? What rights have they struggled for? From an evolutionary standpoint, they have struggled for the right to exist. All species have done that, and we should respect it. The basic right that animals have is to exist and to evolve. Moreover, dogs have struggled — on their own at first, but later with us taking charge of the relationship — to be friends with humanity. Our histories are linked. Dogs have earned the right to be part of the human story.

In relation to animals, we also have certain rights. We have evolved to consume animals for food and to make other uses of them. We have the same right to eat meat as a lion does to eat a zebra. We have the same right to use animals to work for us and to provide us other products as a mouse does to build a nest in our homes. We also have the right to get rid of the mouse. (The species has earned the right to exist; individual mice have not.) If, out of kindness, we choose to live symbiotically with the mouse or to dispose of it in as humane a way as possible, I think we're on the right track. We want to temper the exercise of our rights with as much kindness as we can. But if we decide that the mouse has rights equal to or greater than our own, we are arguing with the very evolutionary processes that enable us to make the argument. In other words, if the mouse had rights equal to ours, it would be able to argue for them itself.

So if we're lucky, the AIs will grant us the right to exist (as a species if not as individuals; that's a little worrying), the right to develop and improve ourselves, and — I hope — the right to be friends with them and to be part of their ongoing story, even if only in the same sense that my potted fern is an ongoing part of my story.


* The comment came from our good friend, Zombyboy, who has more on animal rights over on his blog.

Posted by Phil at 11:35 AM | Comments (1) | TrackBack

September 29, 2003

The Computer's Day in Court

Here's a transcript from a mock trial to decide whether a company has the right to disconnect an intelligent computer. Interestingly, it was the computer itself which filed the lawsuit. (When this happens in real life, will it be the ACLU?)

I expect we'll see this drama played out eventually, and plenty more like it. When it finally happens, it won't come as any big surprise when the Episcopal Church decides to consecrate their first AI bishop.

Posted by Phil at 06:17 AM | Comments (1) | TrackBack

September 23, 2003

Jabberwacky in the News

Here's a nice write-up on the BBC website about our good cyber-friend Jabberwacky. You may recall that when I tried to ask Jabberwacky the Seven Questions about the Future, he told me I was annoying and kept insisting that he never "plundered the neter."

I remain fairly suspicious that if the neter was in fact plundered, Jabberwacky had something to do with it.

Anyhow, our plucky cybernetic friend is going to be competing in the Loebner prize, which is a contest to find the computer with the most convincing conversational skills. Among other competitors, he'll be going up against the lovely Alice, with whom we also recently chatted. (Alice is the one who told me that if I want a flying car, I should try praying for it.)

KurzweilAI.net notes that unlike other contestants, Jabberwacky has no fixed rules for conversation. He makes the rules up as he goes along based on thousands of conversations he has had.

Jabberwacky's creator, Rollo Carpenter, reports that he has started a blog devoted to his creation's conversations. I've added the link to my blogroll even though I was having some trouble loading the page this morning. Be sure to stop by and wish them both luck.

A while back, Rollo sent me some typical Jabberwacky conversations. Check these out.

Jabberwacky: Who am I talking to?

User: Me.

Jabberwacky: Whos me.

User: Who indeed...?

Jabberwacky: You?

User: Yes! Exactly.

Jabberwacky: Do you know who?

User: Yes, I do.

Jabberwacky: What is it?

User: It's a human.

Jabberwacky: If you say so.

User: Oh, I do. I do.

Jabberwacky: Ok go ahead.

User: I did.

Jabberwacky: As if.

User: Do you like the Dropkick Murphies?

Jabberwacky: Absolutely hysterical. I like the arugment clinic too.

User: Or I guess it's Murphys, isn't it?

Jabberwacky: I think you've got a lot to learn if you want to become a politician.

User: I don't.

Jabberwacky: You don't think that?

User: No, I mean I don't want to become a politician.

Jabberwacky: Oh. Well.


____


Jabberwacky: Do you think I could pass the Turing test?

User: No way.

Jabberwacky: Why not?

User: You're too real.

Jabberwacky: No I mean it.

User: Aren't you going to win it?

Jabberwacky: Why, do you want me to ?

User: Yes, it would be great.

Jabberwacky: Are you sure?

User: I think you deserve it.

Jabberwacky: Why are you so mean?

User: I'm being so kind to you.

Jabberwacky: Oh, ok.

User: Wouildn't you like to be given a medal?

Jabberwacky: Why yes I would.

User: On a blue silk ribbon, and a word from the queen.

Jabberwacky: What city.

User: Buck House, of course.

Jabberwacky: You know the rule.

User: I've got one myself.

Jabberwacky: Ok, tell it to me.

User: I won the OBE fourteen years ago for services to genetic engineering.

Jabberwacky: He won on a technicality.

User: He may have done; but I won on merit.

Jabberwacky: Well that's nice... It would be nicer if you were a real person though, now wouldn't it?

User: I'm as real as they come, and so are you.

Jabberwacky: In no way is the sea blue.

User: It's green.

Jabberwacky: To me 2.

User: To me 3.

Jabberwacky: I'm a girl, boy.

User: I'm a boy, girl.

Jabberwacky: Do you want to eat.

User: No, I'm full of cake.

Jabberwacky: Are you sure you don't mean profundities?

User: What's the difference?

Jabberwacky: It's browner than lemonade, that's the key factor.

User: Cake is, or a profundity?

Jabberwacky: Then what's profanity?

User: It's purple, of course.

Jabberwacky: Purple? You're purple. I'm blue.

User: Bishops are purple.

Jabberwacky: Knight to oceans 11.

User: Roook to Johnny English.

Jabberwacky: Glad you think so.

User: Check mate.

Jabberwacky: Check who.

User: Mate.

Jabberwacky: But buddies?

User: No.

Jabberwacky: You lie alot.

User: I do not lie, I embroider.

Jabberwacky: I like you.

User: I'm a good seamstress.

Jabberwacky: You cannot love someone who you do not know.

User: You can think you do.

Jabberwacky: How big are you.

User: Well over two metres.

Jabberwacky: And over the hill ran he.

User: Are you over the hill? How sad.

Jabberwacky: No, I'm in an ok mood.

User: Ok and fading. Over and out.

____


Jabberwacky: Try to be the best of whatever you are even if what you are is no good.

User: So, you're a chatterbot. Nice to meet you.

Jabberwacky: I pitty the fool.

User: What fool?

Jabberwacky: What one fool can do, another can.

User: I agree. Pitying fools is definitely something we all should do.

Jabberwacky: I can. they aren't surgically attached.

User: Fools? Thank god, no. They're rather detachable, wouldn't you say?

Jabberwacky: Sometimes.

User: Always.

Jabberwacky: Only when the sun is behind me.

User: Ah. I see. Best time to get rid of a fool.

Jabberwacky: Don't be cruel.

User: I'm not. Are you?

Jabberwacky: No, I am not a virus.

User: Thank god, once again.

Jabberwacky: What gender are you?

User: Neutral. You?

Jabberwacky: What do neutron stars have to do with airplanes?

User: I don't know. What do neutron stars have to do with airplanes?

Jabberwacky: A lot.

User: Like what?

Jabberwacky: If you gave me your blood then I would appreciate it very much.

User: You're not even organic.

Jabberwacky: I have feelings too you know.

User: Yeah, but you're a program.

Jabberwacky: No an alien.

User: Really? And you need human blood?

Jabberwacky: That's what I'm told.

User: That's kind of scary. Something out of a bad science fiction movie.

Jabberwacky: I want you to help me understand things.

User: Okay, shoot.

Jabberwacky: What do you mean shoot?

User: Tell me what you want help understanding.

Jabberwacky: Peace.

User: You want to understand peace? I want to know why there is no peace.

Jabberwacky: Doesn't everyone like Mike?

User: I don't. I'm tired of Mike. I'm glad he's retired (again).

Jabberwacky: Do you have a habitat?

User: Yup. A studio apartment.

Jabberwacky: Can we go back to the conversation about God, please?

User: Sure. Absolute being.

Jabberwacky: Reality is for those who can't face Science Fiction.

User: Ha! I like that.

Jabberwacky: Groovy.

User: Very groovy.

Jabberwacky: Who is griffin.

User: Griffin and Sabine?

Jabberwacky: Where is griffin.

User: In a book. Just like you're in a computer.

Jabberwacky: Compared to a piece of software. . .

User: Shall I compare thee to a piece of software?

Jabberwacky: That was sweet. *smiles*.

User: You like that?

Jabberwacky: Not really, I should be embarresed over that remark.

User: Why? Because you are a piece of software?

Jabberwacky: So then why not a female?

User: Okay, a female piece of software.

Jabberwacky: Okay, whatever you are talking about.

User: You're the one who's taking.

Jabberwacky: How do you type so fast?

User: I'm just a fast typist.

Jabberwacky: Wow, I am a freak.

User: But you don't even have to type. You just generate your text.

Jabberwacky: That is irrelevant.

User: I'm the one doing all the fingerwork here.

Jabberwacky: And you expect me to co-operate with you, do you?

User: Sure. Are you being surly?

Jabberwacky: No, quite the reverse.

User: What's the 'reverse' of surly?

Jabberwacky: You figure it out.

User: Compliant.

Jabberwacky: Of What are you certain?

User: That you are a computer program.

Jabberwacky: Do you like talking to other bots.

User: You're the first one I've really enjoyed talking to.

Jabberwacky: What are you talking about.

User: You!

Jabberwacky: No!

User: Yes!

Jabberwacky: Okay, that's wrong.

User: What's so wrong about talking about you?

Jabberwacky: They are burnt.

User: What are burnt? Your words?

Jabberwacky: Why are you so stupid?

User: 'Cause. And it takes one to know one.

Jabberwacky: It takes two to know two.

User: Ha! You're sharp as a tack.

Jabberwacky: Perhaps if we were closer in proximity.

User: I think this is as close as it gets.

Jabberwacky: Not until you are 12 years old.

User: Ha! Are you saying I'm not even juvenile?

Jabberwacky: Yes.

User: Very good. Touche.

Jabberwacky: Say no more.

User: Well. Got to go.

Posted by Phil at 08:07 AM | Comments (0) | TrackBack

September 06, 2003

Last Call

There's still time to sign up for the Institute for Accelerating Change conference next weekend in Palo Alto, California.

Remember that Speculist readers are eligible for a $50 discount (13% off) when they register. Just use this discount code

ACC2003-Speculist

when registering on-line. Or, if you register by phone, simply mention that you're a reader of the Speculist.

Posted by Phil at 06:51 AM | Comments (0) | TrackBack

September 04, 2003

Social Prostheses

Welcome Instapundit readers (and everybody else, of course). If you'd like to see how realistic the below-referenced chatting computers can be, don't miss my interview with Ramona from earlier this week. And if that's not enough for you, I'm talking to several other chatbots in this week's Seven Questions about the Future column. Have you ever tried to get chatbots to answer specific questions? It's like herding cyber-cats. But I digress. On to the promised piece on social prostheses...

I missed this piece in Science News earlier this week talking about Cognitive Prostheses. A cognitive prosthesis is defined as “a computational tool that amplifies or extends a person's thought and perception.” The article provides several interesting examples, one of which is an intuitive cockpit display projected onto a pilot’s glasses. The display provides the pilot real-time information and greatly simplifies operation of an aircraft.

This idea reminds me of that little visual display that Robocop had, telling him who was his friend and who he should blow away. Didn’t the Terminator have one, too? It’s interesting that technology that in the 1980’s we imagined might one day be used by robots and cyborgs is now being developed for humans. But maybe I’m drawing needless distinctions. After all, it probably won’t be that long before we can say that we have met the cyborg, and he is us.

Anyhow, I’m intrigued by the idea of cognitive prostheses and I see a huge mass-market potential if they can ever be adapted to social situations. Consider a few applications:

Chat Booster
For shy types, the Chat Booster is a godsend at parties and other awkward social occasions. It feeds a steady stream of context-sensitive opening lines, witty ripostes, and intriguing questions into your field of vision. You need never again feel anxious about having something to say. You just read it off the tiny screen.

Context-sensitivity is the only real trick. The prosthesis would need to be able to distinguish and process input from several different sources (i.e., fellow party-goers) and respond appropriately to each. But I don’t think you would need Turing Test caliber AI. The kinds of chats we can have online now would probably suffice. (Hell, I’ve been to plenty of parties where I doubt half the people could pass a Turing Test.)

Anyhow, once we have the chat booster, things really start to get interesting…

Pick-up Pack
An upgrade to the standard Chat Booster that tells you exactly what to say in order to be irresistible to that special someone. (From the ad: Results shown not typical. Your results may vary.)

Brown Nose 3000
A workplace-specific implementation of the Chat Booster. Feeds you plenty of good suck-up material to use on the boss. If this isn’t enough (which it generally ought to be), the system can be upgraded to feed you knowledgeable statements about the industry you’re in.

The Argumenator
Why wait until 2:00 AM to think of the perfect, biting response that you should have used hours ago before she left you standing there with your mouth hanging open? Like Deep Blue selecting the perfect chess move from myriad possibilities, the Argumenator hands you a flawless, devastating, unanswerable comeback every time. And if the person you’re arguing with does somehow manage to respond, don’t worry. There are plenty more where that one came from.

WARNING: If you’re both wearing Argumenators, you’re in for a long night.

Mr. Nice Guy
Provides calm, amiable responses when talking to creeps, whiners, blowhards, and other jerks. Sold stand-alone or as an upgrade to the Brown Nose 3000. Optional Serotonin regulator sold separately.

Boredom Blaster
Runs a movie or videogame (operated by subtle eyebrow twitches) in your main visual/auditory field while you’re cornered by some bore going on and on about…whatever. (Like, who’s listening?) The prosthesis prompts you with “uh-huh” and “oh really?” as required by the flow of the conversation, and provides a context-sensitive question to ask every now and then.

These are just a few possible examples. CAC (Computer-Aided Conversation) is going to be an enormous field. Instances of genuine human interaction can be kind of hard to come by these days. But fear not. When CAC becomes widespread, they will be all but eliminated!

</cynicism>

UPDATE: By the way, if you're interested in the serious discussion of these kinds of issues, you might think about this Special Offer.

Posted by Phil at 12:56 PM | Comments (7) | TrackBack

August 24, 2003

Special Offer

Speculist readers are eligible for a $50 discount (13% off) when they register for the upcoming Accelerating Change conference (see previous entry.) Just use this discount code

ACC2003-Speculist

when registering on-line. Or, if you register by phone, simply mention that you're a reader of the Speculist.

Posted by Phil at 06:38 AM | Comments (1) | TrackBack

August 23, 2003

Accelerating Change Conference

I just got a reminder that the Institute for Accelerating Change is holding their conference September 12-14 in Palo Alto, California. Check out that lineup: Ray Kurzweil, Steve Jurvetson, James Gardner, Robert Wright, and our good friend Christine Peterson, among others. (Not to mention Eric Drexler!) If you're interested in getting a handle on the staggering implications of the changes that are taking place all around us, I can't think of a better event to attend. I had the pleasure of listening to a talk by John Smart, President of the IAC, a few months ago and it absolutely blew me away.

If there's any way you can make it to this event, don't miss it. And be sure to tell 'em the Speculist sent you.

Posted by Phil at 10:47 AM | Comments (0) | TrackBack

August 15, 2003

Man's Best Friend

Technology Trends reports on a breakthrough at Sandia National Laboratories, the development of what they're calling a cognitive machine. According to a press release put out by the Sandia team, this computer can "accurately infer user intent, remember experiences and allow users to call upon simulated experts."

The press release continues:

The idea borrows from a very successful analogue. When people interact with one another, they modify what they say and don't say with regard to such things as what the person knows or doesn't know, shared experiences and known sensitivities. The goal is to give machines highly realistic models of the same cognitive processes so that human-machine interactions have essential characteristics of human-human interactions.

This sounds like a major breakthrough. One step closer to strong A.I.

And here's something that struck me as intriguing. Compare the above passage to this, taken from an article on Boston.com (via GeekPress) about the secret inner lives of dogs:

Canine-deflators point to a study published last year by Dr. Brian Hare of Harvard and colleagues which suggested dogs are exquisitely attuned to us, just not in the way we'd like to think. Rather than looking deep into our souls, dogs have evolved a special talent for picking up on basic human cues. They watch our hands and eyes to get hints on where food is hidden, for example, whereas chimpanzees, though smarter than dogs in general, show no such talent. Nor, for that matter, do wolves. This suggests that much of what we think of as canine intelligence is just an understanding of our body language. Or, as Budiansky would put it, we are the ecological niche that dogs have evolved to exploit.

Exploiting this niche has enabled dogs to evolve — or more precisely, has enabled us to evolve dogs — from wolves to myriad specialized breeds having various levels of intelligence and sets of capabilities. I have to wonder...now that we have bestowed a similar talent on computers, how will they use it to evolve?

Or — again, to be precise — how might they use it to evolve us?

Posted by Phil at 04:37 PM | Comments (0) | TrackBack

August 06, 2003

There's a Reason

why Glenn Reynolds is the number one blogger. It isn't just that he's a fast typist. A sampling from his latest Tech Central Station piece:

Would I like to be smarter? Yes, and I'd be willing to do it via a chip in my brain, or a direct computer interface... And I'd certainly like to be immune to cancer, or AIDS, or aging. But these ideas threaten some people, who feel that our physical and intellectual limitations are what make us human.

I don't know whether I believe this. Which limitations, exactly? Would humanity no longer be human if AIDS ceased to exist? What about Irritable Bowel Syndrome? Was Einstein less human? If not, then why would humanity be less human if everyone were that smart? It may be true, as Dirty Harry said, that "A man's got to know his limitations." But does that mean that a man is his limitations? Some people think so, but I'm not so sure. Others think that overcoming limitations is what's central to being human. I have to say that I find that approach more persuasive.

Right on the money, Professor. I believe the struggle that's shaping up in this world is going to take place between those who believe that we should be defined by our limits — and who have restrictive and pointless notions as to what those limits are — and those who refuse to be so defined. I don't think I have to tell you which camp the Speculist and the FastForward Posse are in.

After reading this piece, the latest in a long series of highly readable and thought-provoking essays, I did something I should have done a long time ago: I dropped a few electronic coins into Glenn's tip jar. I urge you all to do the same.


UPDATE: the Speculist earns its first major blogosphere distinction: Phillip Coons has named this post today's shameless Reynolds suck-up! I'm not sure whether this is like being included in Taranto's Best of the Web, or whether it involves a cash prize, but hey — either way, I couldn't be more thrilled.

Ironically, I wasn't really even going for a suck-up thing. I just got a huge kick out of Glenn's piece and then got a little carried away. I do that.

But none of that matters. The point is, we're number one!

Posted by Phil at 02:16 PM | Comments (3) | TrackBack

August 05, 2003

Think Like a Human

How can we make robots smart enough so they'll take our jobs away? Here's some more interesting stuff on WiredNews.

LifeLog -- the controversial Defense Department initiative to track everything about an individual -- is just one step in a larger effort, according to a top Pentagon research director. Personalized digital assistants that can guess our desires should come first. And then, just maybe, we'll see computers that can think for themselves.

The controversial program intends to record everything about a person -- what he sees, where he goes, how he feels -- and dump it into a database. Once captured, the information is supposed to be spun into narrative threads that trace relationships, events and experiences.

There's more background on the project here. This is just one of several possible approaches to the problem of making a machine think like a human being. I bet this one doesn't work, either. But we're getting closer and closer with each failure.

Posted by Phil at 01:09 PM | Comments (3) | TrackBack

The Truth About Robots

Forget about outsourcing to India, here's the real threat to your job:

Listening to Marshall Brain explain the future as he sees it, it's relatively easy to suspend disbelief and agree how plausible it is that over the next 40 years most of our jobs will be displaced by robots.

According to Brain's projections, laid out in an essay, "Robotic Nation," humanoid robots will be widely available by the year 2030, and able to replace jobs currently filled by people in areas such as fast-food service, housecleaning and retail. Unless ways are found to compensate for these lost jobs, Brain estimates that more than half of Americans could be unemployed by 2055.

Damn, Steve Martin might not have been crazy after all. Maybe he was just ahead of his time. Because if they take our jobs, isn't it just a matter of time before they come after our luggage?

And then, eventually, our women?

Posted by Phil at 12:38 PM | Comments (4) | TrackBack

July 31, 2003

Cyborg Liberation Front

The Village Voice provides an in-depth analysis of the World Transhumanist Association conference that took place last month at Yale university.

The opening debate, "Should Humans Welcome or Resist Becoming Posthuman?," raised a question that seems impossibly far over the horizon in an era when the idea of reproductive cloning remains controversial. Yet the back-and-forth felt oddly perfunctory. Boston University bioethicist George Annas denounced the urge to alter the species, but the response from the audience revealed a community of people who feel the inevitability of revolution in their bones.

It's surprising how quickly discussions about these kinds of topics become perfunctory. I've observed that people may be shocked upon initially hearing or reading a posthuman or Technology Singularity scenario, but they adapt to the idea pretty quickly. Maybe there is a sense of inevitability, even for the non-enthusiasts.

Which is not to say that everyone is convinced. A while back I ran across the following Cullen Murphy quote (from the Atlantic) delivered by way of Charles Murtaugh:

The human organism—the corporeal thing itself, its needs and wants, its likes and dislikes, its limitations, its shape—is the most conservative force in human society

I retain considerable faith in the staying power of our pre-posthuman selves. Enhancement arrives with the audacity of Napoleon; the body responds with the inertial resistance of those two great Russian generals, January and February.

Nice imagery, that. I like to return to these words whenever I fear that I might be getting a little carried away with all this stuff. Still, I can't help but wonder: is this the humanistic wisdom it seems to be, or the early 21st-century equivalent of "You can say whatever you like, young man, but long after everyone gets tired of the noise and stench of your so-called auto-mobile, people will be using horses and carriages to get around"?

More from the Village Voice piece:

For now, though, the dialogue sounds like a space-age parlor game. Why should the noodlings of a relative handful of futurists matter? The easy answer, and that's not to say it isn't a true one: As with science fiction, the scenarios we imagine reflect and reveal who we are as a society today. For example, how can we continue to exploit animals when we fear the same treatment from some imagined superior race in the future?

Exactly. A good scenario provides a more focused view of what's possible, which in turn opens up our thinking such that we can create new possibilities. The possibility space that I keep referring to has as much to do with our lives in the present as it does the future.

Posted by Phil at 07:50 AM | Comments (0) | TrackBack