For centuries, the mind-body dichotomy has perplexed philosophers and scientists alike. Two recent movies, “Ghost in the Shell” and “Get Out,” in which a brain is removed from its original host and placed in either a robot (“Ghost in the Shell”) or a much younger man’s body (“Get Out”), raise the serious question of what it means to be “us.”
Right before my mother passed away in a nursing home, she appeared as just a shell of her former self. Her vivacious and charming personality was tragically absent, replaced by a vacant stare.
I remember someone saying, “She’s not with us.”
What is it that makes us who we are? Is it our memories, personalities, thought processes, or a combination of all three? Or, is it something else?
William James, a 19th-century philosopher, psychologist and trained physician, held that we are the sum of our personality traits. According to him, those traits were “set like plaster” by the age of 30.
Psychologists originally looked at personality through the prism of five dimensions, the so-called “Big Five”: openness, conscientiousness, extroversion, agreeableness and emotional stability. More recently, they have theorized that there are additional ingredients to the biological and psychological recipe.
One of my favorite theoretical physicists, Michio Kaku, predicts that in the near future we will be able to assemble what he calls “quantum computers,” machines so advanced that they will be able to duplicate the activity of a human brain. With every neuron and synapse of your brain replicated, such a computer would in every imaginable way be “you.” Or would it?
The possibility of artificial intelligence has been debated enthusiastically by scientists and philosophers. With recent advances in MRI technology, the brain activity underlying abstract powers such as reason and emotion can now be observed and tracked. Many consider this a major step toward the creation of artificial intelligence.
One of my favorite philosophers, Hubert Dreyfus, author of the “What Computers Can’t Do” series, was a key opponent of the notion that artificial intelligence is even possible. He argued that the duplication of a human brain in a computer rests on four false assumptions: that the brain is analogous to computer hardware; that the mind is comparable to computer software; that brain activity can be formalized mathematically into a series of predictive rules; and that reality consists of a series of mutually independent (and measurable) atomic facts.
Dreyfus believed that we cannot ever truly understand our own behavior. A rock, a sponge — anything we see or touch — can be broken down into its molecular components. But our free will is a different story.
A computer can be programmed with a series of responses, but without human-like consciousness it cannot make the impulsive, irrational, “in the moment” decisions that humans are wont to make.
Reading Dreyfus, I couldn’t help but notice the overriding influence of another of my favorite philosophers, Martin Heidegger. Our modern expression of “being in the moment” can be traced to the heart of Heidegger’s most famous work, “Being and Time.”
Heidegger, like René Descartes before him, wanted to ground his philosophical ideas in lived experience. For Descartes, “cogito ergo sum” (“I think, therefore I am”) was the affirmation that by the very act of thinking, he knew that he existed. Similarly, Heidegger used the German word “Dasein,” literally “being there,” to signify the original mode of human consciousness.
Heidegger argued that our existence, unlike that of any artificial intelligence, is dynamic: “Dasein” is an open-ended project of “becoming.” For him, to be alive is to constantly recreate ourselves through our actions and choices, a state of existence beyond the scope of any computer.
Dreyfus passed away two weeks ago, but his monumental contributions to the artificial intelligence debate live on. Hollywood, no doubt, will continue to entertain us with futuristic musings of machines-turned-human. Whether these ideas are capable of becoming reality will continue to be a subject of intense debate.