We train artificial intelligence (AI) to be, well, intelligent: machines capable of critical thinking, designed to complete both simple and complex tasks. To achieve these goals, we often use an all-or-nothing logic. Can we design AI to beat humans at drone racing or to be independent behind the wheel of a car? Engineers design and test new ideas until the answer to their question is yes.
After achieving that check mark in the “yes” column, we often assume that an AI creation will complete its task in precisely the way its newly crafted software intends. But there’s more to consider beyond the feat of creation. For instance, even if a goal is accomplished flawlessly, additional complications can always arise. Let’s say we program AI to perform a very specific task, like driving a car through a busy city center. The robot will perform this task until something impedes that goal: a human passenger trying to take over control of the vehicle, or some obstacle, like a biker or pedestrian, entering the roadway.
This is a moment of crisis. The robot must decide whether to pause or stop its primary objective, or stay the course to achieve its programmed goal.
Pixar’s The Incredibles (2004) provides a perfect example of such a dilemma. Buddy Pine, the movie’s villain, unleashes a spider-like robot upon Metroville. He programs the robot to destroy the city so that he can theatrically swoop in and save its inhabitants from utter destruction, relying on a special controller on his wrist to override the robot’s behavior at the crucial moment. However, the robot quickly senses that the controller threatens its programmed goal and destroys the device. Its motive could just as well be self-preservation, but either way, the only safeguard against uncontrolled behavior is instantaneously undone, leaving the robot unrestricted in its further actions.
Realistically, we probably aren’t at a point where we’re worried about a giant robot terrorizing a city. But we do have AI-driven devices of all shapes and sizes doing work that can and will affect our daily lives. Home devices sync our personal schedules, emails, and secure information — and AI behind the wheel of a vehicle is poised to become a commonplace way to get around town.
Now imagine Amazon’s Alexa scrambling a request to purchase a Google Home device, or Siri blocking access to a third-party app, in order to maintain its primacy in the home. Or consider the famous ethical trolley dilemma reintroduced with a robot actor: an AI driver deciding what the optimal outcome should be under various circumstances. Given some hazard, will the vehicle choose an option that protects itself, the passengers in the car, or the bystanders outside the vehicle? Of course, these options may overlap, but exploring each one, and how machines will weigh them against one another, will be crucial as AI becomes more capable of interacting with humans, and with other AI, on both a physical and social level.
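To make the trade-off concrete, here is a deliberately toy sketch of how such a choice might be framed in code. Nothing here reflects any real autonomous-vehicle system: the action names, harm scores, and weights are all invented for illustration. The point is only that once outcomes are reduced to numbers, the "ethics" live entirely in the weights an engineer assigns to each party.

```python
def choose_action(actions, weights):
    """Pick the action with the lowest weighted total harm.

    actions: dict mapping action name -> dict of estimated harms
             (0.0 = none, 1.0 = severe) to each affected party.
    weights: dict mapping each party -> how heavily its harm counts.
    """
    def total_harm(harms):
        return sum(weights[party] * harm for party, harm in harms.items())

    return min(actions, key=lambda name: total_harm(actions[name]))


# Three stylized responses to a pedestrian entering the roadway
# (all harm estimates are made up for this example):
actions = {
    "brake_hard":  {"passengers": 0.2, "bystanders": 0.1, "vehicle": 0.1},
    "swerve":      {"passengers": 0.4, "bystanders": 0.0, "vehicle": 0.5},
    "stay_course": {"passengers": 0.0, "bystanders": 0.9, "vehicle": 0.0},
}

# With equal weights, braking hard minimizes total harm:
print(choose_action(actions, {"passengers": 1, "bystanders": 1, "vehicle": 1}))

# A vehicle tuned to protect its occupants above all else
# makes a very different, and far more troubling, choice:
print(choose_action(actions, {"passengers": 10, "bystanders": 1, "vehicle": 1}))
```

Notice that the same decision procedure yields opposite behavior depending on the weights, which is exactly the design question the essay raises: who decides those numbers, and on what moral grounds?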
Engineers must decide how to embed AI with the logic required to perform a task flawlessly, while at the same time encoding an awareness — an ethical sensibility — to negotiate and balance the completion of its duties with human, and perhaps machine, well-being. Considering the plethora of ethical questions that exist within human culture, one can scarcely fathom the intricacy of programming a universal sense of morality within a synthetic cortex.
We must also keep in mind that robots may think and weigh decisions in drastically different ways than humans do. An AI being has a wholly different physical makeup from a human, and as such will have a unique way of processing information and formulating its sense of self. This could lead AI agents to have a fundamentally divergent perception of biological entities, of other AI creations, and of the world they share.
This piece is part of Science Fiction Frames: a series of incisive analyses, thoughtful meditations, wild theories, close readings, and speculative leaps jumping off from a single frame of a science fiction film or television show. If you would like to contribute to the series or learn more, email us at imagination@asu.edu.