2017 IS POISED to be the year of the robot assistant. If you’re in the market, you’ll have plenty to choose from. Some look like descendants of Honda’s Asimo—shiny white bots with heads, eyes, arms, and legs. Ubtech’s Lynx has elbows, knees, and hands, which it can use to teach you yoga poses, of all things, while Hanson Robotics’ Sophia bot approaches Ex Machina levels of believability. Others, like Amazon’s Alexa and Google Home, have no form. They come baked into simple speakers and desktop appliances. It seems most robot helpers take one of these two shapes: humanoid or monolithic.
Yet a middle ground is emerging—one with just a hint of anthropomorphism. LG’s Alexa-powered Hub robot has a “body” with a gently nipped-in “waist,” and a screen with two blinking eyes. ElliQ, a tabletop robot assistant for the elderly that debuted last week at the Design Museum in London, features an hourglass-shaped “body” and a “head” that swivels. Kuri, a penguin-like helper from Mayfield Robotics, scoots around and looks at you but doesn’t speak. This is all deliberate. Designers and roboticists say a suggestion, rather than a declaration, of anthropomorphism could help people form closer connections with their robot assistants.
But don’t overdo it—the more like C-3PO your robot looks, the greater the risk of disappointment. “The Hollywood robot comes with a lot of baggage,” says Shyam Sundar, founding director of the Media Effects Research Lab at Penn State. Sundar and his team study people’s reception of social robots, and have found that robots that appear too human—not in a creepy, uncanny-valley sort of way, but in their apparent abilities—inevitably encourage unrealistic expectations. Most of them, for example, can’t open a door or help you unpack groceries, but arms and a sense of awareness suggest they can. “The critical thing for a robot assistant is information access, being a butler who fetches information or who places a phone call,” Sundar says. “For those things, human morphology is not only irrelevant, it’s also distracting.”
Amazon Echo’s featureless, cylindrical form factor is actually an advantage. The device is so plain it all but dissolves from view, letting Alexa—the AI of the operation—work. Indeed, Alexa doesn’t even need Echo to do its job; the voice assistant will soon inhabit a variety of third-party devices, from speakers to refrigerators. The Echo, and any other product Alexa animates, is merely a vessel.
But some human traits could be a good thing. “If you get the balance right, people will like interacting with the robot, and will stop using it as a device and start using it as a social being,” says Kate Darling, who studies human-robot interactions at MIT’s Media Lab.
ElliQ, for instance, emotes by bobbing and rotating her “head.” Israeli startup Intuition Robotics created her for older users, to help connect them to their families and engage in activities around the home. This range of motion, coupled with a female voice that CEO Dor Skuler says adapts to user preferences, constitutes her character. His team took pains to avoid giving ElliQ humanoid features, but the result was far from expressionless. “It looks like it has a face even though it doesn’t,” says usability guru Don Norman, who advised Skuler on ElliQ’s design. “That makes it feel approachable.”
Too many of these characteristics lead to anthropomorphic overload. Kuri’s designers tried to avoid this by programming him to emit tones instead of words. He has eyes, but not a mouth—that would be creepy, say Josh Morenstein and Nick Cronan, co-founders of Branch Creative. Mayfield Robotics told them that their chief objective was to make Kuri adorable. So Morenstein and Cronan worked with a Pixar animator to hone the shape and angle of Kuri’s eyes. “Just by moving things a few millimeters, it went from looking like a dumb robot to a curious robot to a mean robot,” Cronan says. “It became a discussion of, how do we make something that’s always looking optimistic and open to listen to you?” The same goes for Kuri’s drawn-on “shoulders,” which angle forward to suggest attentiveness.
In other words: A little bit goes a long way. “We seem to be biologically hardwired to anthropomorphize anything,” Darling says. Even giving an inanimate object a name can make a human feel attached to it, she says. “It’s a very easy thing to harness. The question is just, how well are you harnessing that?”
The other question: For whom are designers harnessing it? The right amount of anthropomorphism could encourage more, or different kinds of, interaction from users. Those interactions strengthen the robot’s intelligence, which benefits manufacturers, but not necessarily all users. Darling points to the military. When soldiers develop a pet-like fondness for a robot, or see the robot as an extension of themselves, it can influence split-second decisions in the field. That could prove inefficient, or even dangerous. At the same time, those same feelings of attachment could improve the quality of life for someone with a failing memory.
In the months and years ahead, social robots will only become more commonplace. But what they’ll look like, and how they’ll behave, has yet to crystallize. “What do we think a robot is?” Norman says. “Some people think it should look like an animal or a person, and it should move around. Or it just has to be smart, sense the environment, and have motors and controllers.” The answer might lie somewhere in between.
Article and photos by Margaret Rhodes | Wired