Your Thoughts Exactly: How I learned to stop worrying and love the robots

Thursday, January 06, 2005

 

How I learned to stop worrying and love the robots

The article above links to a short book, "The Artilect War". It's an interesting read, though you don't have to read it; it's a bit long-winded and more than a little self-indulgent. But the main argument is four-fold: 1) Robots will eventually come to a point where they surpass human intelligence, 2) When this happens we cannot know what they will do, 3) They may decide to kill off the human race, and 4) We must decide whether we want to proceed.

If you accept 1, then 2 and 4 are pretty easy to accept. And if you accept 2, then 3 follows. And he lays out a pretty good argument for 1, although I believe he is off by an order of magnitude in the timeframe. (I think we're still 100+ years away from robotic sentience; he leans on evidence like Moore's Law, which I think is soon going to fail.) Also, if you accept that we can build sentient robots, then it follows that they will most likely increase their intelligence much, much, much faster than humans can. Anyway, these arguments are merely details. But Marmar (scroll down for his post!) renewed my interest in the subject. What do we want to do with these robots?

de Garis (the author) pitches the debate as a Cosmist vs. Terran battle. And people must choose sides, because it's not a debate with much middle ground; either we build these robots, or we don't. Cosmists believe that we should, Terrans believe that the risks are too great. I don't want to rehash all the arguments that he made, but I will say why I believe what I believe.

I think I'm a Cosmist. The main reason I would be willing to build these robots is that I think that the human race is doomed, like every species is, like this planet is. Eventually we will die off; our bodies were not designed to live in outer space, nor were they designed to live on planets with high gravity, or without oxygen. We may be a tremendously adaptable species here on Earth, but outside of this little blue jar, we aren't shit.

I think it would be a tremendous waste for humans to pass up the chance to build something that can leave the Earth, explore the universe, and perhaps be something permanent and immortal (at least until the universe itself ends... but maybe these things could figure that out too!). If you accept that we will die out, then it is almost no jump at all to accept that we should build these artilects. Yes, there's a chance they could kill us, and there may be no reason to hasten our end, but how can we ever know when we are on the brink of extinction? By then it may be too late, and our one chance to build these things will have passed.

There is one other thing: if you have accepted that humanity is doomed, then building these robots offers some hope as well. Yes, they might wipe us out, or they might ignore us, but there is also a chance they could help us. And that chance alone could be worth it.

I also want to address Marmar's comments on the existence of a soul, irrationality, and the human existence. I really shouldn't get into any conversation that talks about 'souls', but I think the idea is bunk; it's just human hubris, much like the American-centric pride that I've criticized before. Somehow we humans think we are not just special, but divine creatures. Isn't it enough that we are self-aware, able to make decisions based on a variety of emotions and stimuli? Why does the soul have to enter into the equation? Why does it have to be god-given?

In terms of irrationality, I think there are great things about human irrationality. Of course, I'm a human, so it's not that surprising that I think that. Love, sacrifice, humor, happiness: emotions in general are incredibly powerful things. And yes, in many ways I would pity a robot that couldn't feel them. But couldn't that robot pity me for having to sleep eight hours a day, or for only being able to keep a thought in my head for minutes at a time? Or for only being able to make vague decisions based on totally incomplete data? The human experience is not the only way to look at life and existence. All I'm trying to say is that we are special; we're just not divine, perfect creatures. Different is good.

I would like to leave with one last thought to chew on. When scientists observe ants, the cells that make up the brain, and other social orders, they sometimes see something called emergent intelligence. Basically, it's the ability of an organization to have an intelligence that no single individual has (or even could have). In ants, it's the ability to solve problems (some pretty complex problems, I might add), distribute work, and impose order on a colony of rather unintelligent individuals. In human brains, it's language, sentience, and rationality, arising out of nothing but simple neurons firing electrical impulses. We look at these things and are amazed at how such simple parts can give rise to such incredible, wondrous products. Maybe, just maybe, the way humans can create emergent intelligence is to create something that transcends us.
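
If you want to see emergence in miniature, here's a toy I like (my own aside, not anything from de Garis's book): Conway's Game of Life. Every cell follows the same two dumb local rules about its neighbors, yet a five-cell "glider" crawls across the grid as if it had a purpose of its own, even though no single cell knows how to move. A quick sketch in Python:

from collections import Counter

def step(live):
    """live is a set of (x, y) cells; return the next generation."""
    neighbors = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next turn if it has exactly 3 live neighbors,
    # or exactly 2 and was already alive.
    return {cell for cell, n in neighbors.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider: after four steps it reappears one square down and right.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same shape, shifted by (1, 1)

The rules say nothing about gliders; the glider exists only at the level of the whole pattern, which is all "emergent" really means here.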


