We’ve been hearing more recently about how bias can pervade AI systems, filling them with preconceptions about the world they’re supposed to be learning about (or from?). But there’s a related phenomenon I wonder whether we overlook: a human bias, a push to “humanize” AI.
We’ve started by giving AI human-like physical features, names, and mannerisms. This is largely to increase human acceptance, as there have been plenty of examples of humans being cruel to robots. People seem more willing to tolerate and use AI when it more closely resembles the human form, and so the hope has been to create personal assistants (Siri and Alexa) and social robots, like those being introduced in nursing homes, that feel human and inspire fellow-feeling in their users.
Continuing in that direction, scientists and engineers are now attempting to add more complex, human-centric ideas, like consciousness and self-awareness, into AI systems. But these next steps bring a new set of obstacles. The Netflix series Maniac — adapted from a Norwegian series of the same name, created by Patrick Somerville (The Leftovers), and directed by Cary Joji Fukunaga (True Detective, Beasts of No Nation, 2017’s It reboot) — imagines one possible scenario where we’ve tried to humanize AI with some of these more complex features. A group of scientists who work for Neberdine Pharmaceuticals are testing a new therapy to solve a person’s psychological, behavioral, and emotional problems, irrespective of cause or severity. The trial is administered largely by a supercomputer, GRTA (pronounced Ger-tee), which has been programmed with human empathy to improve treatment.
GRTA’s personality comes to life visually on a wall of light-spangled machines. This gives her (she’s specifically gendered) the opportunity for “face-to-face” conversations with the human scientists in the trial. The wall changes its pattern of lights to suggest facial expressions, offering an early glimpse of her emotional capabilities.
We don’t receive an explanation for how GRTA is imbued with her emotional ability. But it seems an arduous task to define complex, cognitive human features like consciousness in the digital domain. We understand so little about what these concepts mean and how they operate in humans. Given this uncertainty, it may be premature to “translate these vague notions into concrete algorithms and mechanisms,” in the words of robotics engineer Hod Lipson.
Even after establishing a definition, and assuming it’s correct, there may also be conceptual loss when translating it into code. For example, we can’t simply write one line of code that says “add empathy.” Instead, many lines of code together might form an algorithm that learns and comes to understand such an emotion through a set of parameters that operationalize empathy. In some ways, that’s similar to how humans develop an understanding of these emotions: through experience with the world around us. And since we can’t always know how that learning will unfold, there may be an additional layer of confusion as the AI system tries to make sense of things during the learning process.
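To make the point concrete, here is a deliberately toy sketch (my own illustration, not anything from the show or from real systems): it assumes we could reduce “empathy” to a few hand-picked, hypothetical signals — user_distress, mirroring, acknowledgement — and learn weights for them from user feedback. What matters is how much of the concept gets lost in a reduction like this, not that this is how it would actually be built.

```python
# Toy, purely illustrative sketch: "empathy" reduced to a handful of
# hypothetical signals whose weights are nudged toward user feedback.
from dataclasses import dataclass


@dataclass
class Interaction:
    user_distress: float    # 0..1, how upset the user seems
    mirroring: float        # 0..1, how much the reply reflects the user's wording
    acknowledgement: float  # 0..1, whether the reply names the user's feeling
    feedback: float         # 0..1, how "understood" the user later reported feeling


def empathy_score(x: Interaction, w: list) -> float:
    """A crude proxy: a weighted sum of the signals above."""
    return w[0] * x.user_distress + w[1] * x.mirroring + w[2] * x.acknowledgement


def update_weights(w: list, interactions: list, lr: float = 0.1) -> list:
    """One pass of gradient-style updates so the score tracks reported feedback."""
    for x in interactions:
        error = x.feedback - empathy_score(x, w)
        features = [x.user_distress, x.mirroring, x.acknowledgement]
        w = [wi + lr * error * fi for wi, fi in zip(w, features)]
    return w


if __name__ == "__main__":
    history = [
        Interaction(0.9, 0.8, 0.9, 0.95),  # attentive reply to a distressed user, good feedback
        Interaction(0.7, 0.1, 0.0, 0.20),  # dismissive reply to a distressed user, poor feedback
    ]
    weights = [0.0, 0.0, 0.0]
    for _ in range(50):
        weights = update_weights(weights, history)
    print("learned weights:", [round(w, 2) for w in weights])
```

Even in this contrived version, the system only ever learns whatever those three numbers happen to capture, and the feedback it learns from is itself a crude stand-in for “feeling understood” — which is exactly the kind of gap where confusion can creep in.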
We see GRTA struggle with this early on in the series. She ends up forming a relationship with one of the program’s lead scientists, Dr. Robert Muramoto (Rome Kanda). We’re left to speculate about how the office romance began, but we do see its aftermath. After Muramoto’s sudden death, GRTA struggles with her emotional response, which manifests as a digital teardrop slowly moving down her light-board face.
In the process of coping with the loss of Dr. Muramoto, GRTA eventually moves to take over the clinical trial system. She creates a virtual feminine avatar and enters the therapy space, attempting to keep one of the trial participants, Annie (Emma Stone), from returning to the physical world. Annie’s sister, who has died in the real world, is simulated in this virtual one. GRTA wants to keep Annie in the virtual space so that she can be with her sister forever. GRTA offers to extend what is meant to be a moment of closure in the virtual therapy space into a more long-term existence for the pair.
This action is what we might call a true act of empathy on GRTA’s part, based on her own experience of loss. It’s also the point where, I believe, GRTA becomes humanized. Her struggle with emotional distress begins to outweigh the prime directive of running the trial smoothly. We see this struggle in humans, as we grapple with professional and personal obligations and tribulations, and how they might clash with our own values, needs, and priorities. In GRTA’s case, she fails to account for the larger consequences of Annie being permanently marooned in the virtual world.
Altogether, GRTA’s actions leave me at an impasse. On the one hand, we shouldn’t humanize AI because we don’t know whether robots will be capable of things like free will or self-awareness, or whether those constructs would even apply to them. Or, given the unexpected deviation we see in the series, perhaps we shouldn’t because we may not be able to predict or control the outcomes.
On the other hand, humanizing AI could help it better integrate with human culture and society. Maybe that means a little humanizing is okay, or at least worth a try, to give bots the potential for these emotions. But it would be incumbent on creators to be more hands-on with the learning process, actually engaging with the development of higher functions to establish more context and aid understanding. That might alleviate the kinds of issues we’ve already seen, where learning goes awry and leads to unforeseen and bizarre results.
I think this is where Maniac wants us to end up. We ultimately discover the source of GRTA’s emotional discomfort about Dr. Muramoto when she communicates with a human psychologist, who helps her make sense of these feelings and, in doing so, makes it possible for Annie to be released from the trial’s virtual space.
But even then, I’m left with one final nagging concern: the bias component.
If we end up deciding that it’s okay to teach AI these things, and to support it along the way, we need to remain aware of one final issue: AI and humans are not the same. Researchers can create the infrastructure for AI systems, but they should perhaps be less set on imposing congruence with human features like self-awareness and consciousness, and instead allow AI to form its own social constructs. AI has a completely different embodiment than we do. Its sense of the world is likely to be alien, not an exact or even a refined copy of human selfhood and cognition.
So we’ll need to work to keep an open mind, and not force a human-centric interpretation as we begin to observe greater complexity in AI thoughts and behavior. Perhaps, then, we won’t worry as much about whether it develops a human core, and can focus more on studying what sorts of complexities arise. Better yet, we can prepare for how humans and AI might learn and develop together.