Your Thoughts Exactly: AI and Robots

Wednesday, January 05, 2005


AI and Robots

Last night I watched “I, Robot,” best described by my friend Neel as “The Fresh Prince of Robots.” While the movie looked good, if standard, the acting and script were subpar, even for a Will Smith fan (of his acting, not his rapping). However, I will give “I, Robot” credit for doing something few films these days do: making me think. If artificial intelligence is the future, what will be the consequences? What should humans do to integrate machines into their lives?

Now I know less about A.I. capabilities than co-blogger K Lim, but I do know something about evolution, which was a central theme of “I, Robot”: the robots evolved from having no emotions or individual cognitive thought to possessing both, either on their own or through the improved programming of humans. (Either the movie wasn’t clear enough, or I was distracted.) A “soul,” or “emotions,” is the key thing that separates humanity from machines in the science fiction genre, best personified by Data from ST: TNG (and if you don’t know what I’m talking about, you are just not cool enough). Data is a superior being in strength, speed, and ability, whether lifting a great amount of weight, adding two numbers together, or analyzing a piece of classical music. Yet he is portrayed as incomplete, as searching for an understanding of emotions, for the differences between himself and the humans he encounters.

To take a step back, however, it is clear that Data is the superior being. On a scale of hundreds of thousands of years, the emotions and ups and downs of humans do not matter. If you picked ten points over the last million years and attempted to explain the expansion of Homo sapiens sapiens as a species, you would look at factors such as adaptability, increased capacity for survival, and technological innovation (fire!). In the long term, the machines, androids, whatever we create would have greater survival skills in terms of adapting to the environment around them. Thus humanity would seem to be at a large disadvantage on a Darwinian scale.

There are two other key factors, however: the drive to procreate and the potential for coexistence between man and artificial intelligence. Humans’ greatest strength over machines is an evolutionary mechanism, refined over billions of years, that forces procreation: a combination of sex drive, love, and attachment to children. No matter how many digits of pi a supercomputer can calculate, it doesn’t want to go out, find another computer, fuck it, settle down, and raise a few hand-held PCs.

Would intelligent or self-aware machines suddenly incorporate this trait? Can a machine desire companionship, control, love, and, most importantly, the continuation of beings like itself beyond its own existence? In most science fiction, it is assumed that the creators of AI would humanize their creations, whether by naming individuals or, as in “I, Robot,” by giving robots human-like faces. We may yet do that, but I think where science fiction gets it wrong is in having the intelligent machines take on important biological traits. In futuristic depictions, self-aware machines necessarily group together and attack humanity. But a machine that becomes self-aware may not have the programming (in our case, biological programming) to procreate, or even to group together as a faux species. After all, humans’ procreative programming far predated our self-awareness or language. That force has been in existence for billions of years; I cannot imagine humans effectively replicating it. Moreover, if humanity does not pose a threat to the machines’ continued existence, there is no reason for the machines to wipe us out for purely sadistic reasons. That is a fantasy scenario. However, as the creators of said machines, humans would be unlikely to accept coexistence with a machine race in any relationship other than servitude. Thus the machines’ future existence could be threatened, forcing them to compete with us.

The other issue “I, Robot” explores is whether a robot or other form of artificial intelligence can have a soul. In the movie, the soul is portrayed as bound up with emotions, and with the preeminence of certain values over rational outcomes in decision-making. In many ways, futuristic explorations of other beings with souls (even ones we create) are really ignoring the central questions: What makes humans different from any other living thing? Do we have “souls”? Do our emotions set us apart or make us special in any way? If we do have souls, when did we get them? Or are they simply the inventions of cognitive beings attempting to explain their own existence?

My central point is this: in creating artificial life forms that combine the emotive, irrational advantages of humans with the mechanical superiority of a machine, we are playing with fire. We cannot predict what will happen because we do not adequately understand ourselves, or what a “soul” is: a function of our large brains, our language, a supernatural force, or something we haven’t even thought of. Thus we must exercise extreme caution as we go down the road to intelligent mechanical life. Unfortunately, it only takes one group of scientists to take us past the point of no return. I am not saying we should forgo the benefits of technological innovation. I’m just saying we should step back and ask whether we really know what we are doing.
