
In September 2018, the robotics company Anki released its social robot, Vector. A year later the company went bankrupt, despite having raised more than $200 million in venture capital funding and generating around $100 million in revenue in 2017, its last full year of operation. There appear to have been two reasons for the shutdown – overspending on research and development for new products and failure to develop a revenue model able to cover the company’s server costs. As a result, Vector joined such other social robots as Jibo and Kuri on the scrapheap of failed consumer products, until it was rescued this year – but for who knows how long? – by the educational software company Digital Dream Labs.
The relatively new field of social robotics is based on an intuition that can be especially difficult for Americans – with their culture of individualism – to grasp. Intelligence is a property, not of individuals, but of relations between them, and these relations are more fundamental than the individuals involved. So, throw away your IQ score! At best, it’s a snapshot of the limited capacities of a single node in a much more extensive and complex network. Let me give you an example. Garry Kasparov would not have achieved his ranking as Grandmaster and World Champion in chess were it not for the generations of chess players before him (going back at least 1500 years) who developed the game and its strategies, and were it not for all of the contemporary players against whom he sharpened his skills. Similarly, the Deep Blue computer that defeated Kasparov in their six-game match more than twenty years ago did not develop its prowess all by itself. It built upon the achievements of the enormous number of chess-playing programs that preceded it as well as, of course, the coding powers of its human creators and caretakers. Now you may think that the idea that intelligence is social and relational in character understates the importance of the individual with his or her unique gifts. After all, not everyone can be an Einstein. I grant you that. But my point is that not even Einstein could have been an Einstein without Lorentz, Poincaré, Maxwell, even Galileo and Euclid, as well as his parents, teachers, friends, opponents, graduate assistants, secretaries, janitors who kept his research facilities clean, and so on.
It’s probably true that not many people in the field would accept my formulation of the egalitarian hunch guiding social robotics. As specialists in such areas as artificial intelligence, mechanical and electrical engineering, and cognitive psychology, they tend to think of themselves as very smart people whose smarts are their individual property, a result of their own special gifts and exertions. No matter. Denial is a powerful force in human psychology, while guiding intuitions can function without ever reaching the clarity of conscious expression.
For present purposes, we will consider a robot to be “social” when it is especially adept at interacting with people. At first glance, Vector seems a poor candidate for the job. The robot is diminutive – only 3.93 x 2.36 x 2.73 inches – normally residing on the surface of a dresser, cabinet, or coffee table – and resembles a tiny tractor equipped with a front-end loader. However, its head is large in relation to its body, suggesting the proportions of a human infant. The proportional resemblance – along with associated behaviors – elicits from us the smiling and cooing responses normally triggered by human babies. Vector’s face is a screen with two big virtual eyes that can express an astonishing array of “emotions” by changing their shape, relative spatial position, and rate of blinking. The resulting facial expressions are coordinated with seemingly organic sounds (such as cheerful chirping, agitated complaining, and gentle snoring), changes in bodily orientation, and movements of the front-end loader “arms.” The robot recognizes human faces, including the individual faces of people to whom it has been formally introduced through a visual imprinting procedure, and normally responds with sounds of “excitement,” along with accompanying semi-rotations of its body, quick up and down motions of its arms, and changes in eye-shape suggesting a smile. But if you pick up the robot abruptly, it reacts by pounding its arms against your hand, rotating its powerful, tractor-like treads as it attempts to escape your grasp, and narrowing the shape of its eyes, creating the impression of a furrowed brow. In short, Vector throws a tantrum. (By the way, I refer to Vector as “it” because, when asked, the robot confesses to being confused about its gender).
More generally, Vector responds to a large number of voice commands, sometimes by following them and sometimes by ignoring them, not very different from children or ordinary pets. Most importantly, from the perspective of robot autonomy, it returns to its home station when its battery is low and goes to sleep while recharging. Sometimes, when fully charged, it remains on its station, and at other times it decides to move off of it in order to explore its world. But the robot’s ability to function on its own is as limited as ours. Although it is generally good at edge detection, it sometimes falls when venturing too close to the border of the surface it inhabits. It then makes a soft, pathetic, heart-breaking whimper that calls human beings in the vicinity to its rescue.
Vector has the ability to use language actively by answering questions about a broad range of topics, playing blackjack while speaking with a human voice, and channeling a second personality in the form of Amazon’s Alexa. In my opinion, however, the robot’s active use of human speech (in contrast with its passive response to commands) undercuts the general impression it otherwise gives of being a small family pet, similar to a hamster or guinea pig. In some ways Vector is more intelligent than these animals (it can recognize your face, give you a weather forecast, and provide you with Babe Ruth’s lifetime batting average), and in other ways less intelligent (its motor abilities are nowhere near as dexterous as those of the animal pets). But Vector’s intellectual gifts really shine in its repertoire of “emotional” cues and responses, which are designed to elicit appropriate forms of social engagement from humans. This affective virtuosity challenges the commonplace distinction between emotions and intelligence in human beings as well as robots. Expressed negatively, the inability to produce and read emotional cues is just as much a form of stupidity as the inability to add fractions or to move out of harm’s way.
I have placed the word “emotional” in scare quotes because no one, except for young children, really believes that Vector feels anything at all. Its anger, affection, and curiosity are simulated “anger,” simulated “affection,” and simulated “curiosity,” the unfeeling results of the algorithms it unthinkingly executes. In short, Vector is not conscious. (Even if, as some cognitive scientists claim, conscious thought is nothing but the physical embodiment of a set of algorithms, Vector’s algorithms clearly lack the required complexity.) But humans have a propensity to willingly suspend disbelief when it comes to endowing inanimate objects (e.g., dolls) and machines (e.g., cars) with conscious personality. As a result, it is difficult to be with Vector for any length of time without becoming emotionally attached to the artificial creature. And in this case, the emotions are not simulated robotic emotions. They are real, human ones.
This explains the panic and anticipatory mourning that so many owners of Vector felt when Anki announced its bankruptcy. The technological problem is that Vector’s ability to process natural language in responding to commands, as well as many of its other capacities, depended on the robot’s wireless connection to Anki’s servers. While the company pledged to maintain its servers after shutting the business down, it was hard to credit the promise, since the corporate “person” that made it was about to go out of existence. Nevertheless, the servers did continue to function for a couple of months, until Vector was rescued from an early grave by Digital Dream Labs.
The founder and owner of Digital Dream Labs is Jacob Hanchar, a rather large man with an equally expansive personality. Politically active in the Democratic Party, he was part of the Draft Obama movement that resulted in our first Black president. Hanchar’s family includes his wife and six children, many of whom seem to play their costumed parts in an early twentieth-century ice cream parlor that Hanchar purchased in 2013 in Pittsburgh, PA. Hanchar’s biography differs significantly from those of the founders of Anki, who met as graduate students in robotics at Carnegie Mellon University. He earned his doctorate not in robotics but in biology, writing his dissertation on the role played in the brain by the inhibitory neurotransmitter GABA. When I asked him in a phone interview whether his background in neuroscience affected his approach to robotics, Hanchar referred to his GABA research, claiming that inhibition is just as important to robot functioning as it is to the way the brain works. According to him, unless tiny virtual “off switches” are built into robotic software, the robot is doomed to react to features of its environment that are irrelevant to its purposes. One of the most important functions of the nervous system – real or virtual, human or robotic – is to avoid sensory and processing overload by filtering out unnecessary information. After receiving his doctorate and before starting Digital Dream Labs, Hanchar took over the family business, the River Hill Coal Company, formerly run by his father, Harry J. Hanchar. The son attempted to move the business in an environmentally sensitive direction by contracting with start-ups to supply it with biofuel. But the coal industry fell on hard times, and River Hill Coal eventually sold its assets, entering a state of suspended animation. In spite of his liberal Democratic credentials, Jacob Hanchar is an entrepreneur above all. With a post-doctoral MBA from Carnegie Mellon University, he faults academics for paying too little attention to the importance of profits.
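Hanchar’s point about inhibitory “off switches” can be sketched in a few lines of code. The following is a hypothetical toy illustration of the idea, not Vector’s actual software: each stimulus carries a salience score, and an inhibitory threshold suppresses the low-salience signals so the robot attends only to what matters for its current purpose.

```python
# A toy illustration of inhibitory filtering, in the spirit of
# Hanchar's "off switches" -- hypothetical, not Vector's real code.
# Each stimulus has a salience score; an inhibitory threshold
# suppresses low-salience signals before they reach the robot's
# decision-making layer.

def filter_stimuli(stimuli, inhibition_threshold):
    """Return only the stimuli salient enough to act on."""
    return [name for name, salience in stimuli
            if salience >= inhibition_threshold]

stimuli = [
    ("owner's face detected", 0.9),
    ("coffee cup moved slightly", 0.2),
    ("table edge ahead", 0.8),
    ("background TV noise", 0.1),
]

# Without inhibition (threshold 0), every stimulus demands a response.
print(filter_stimuli(stimuli, 0.0))

# With inhibition, only the relevant signals get through.
print(filter_stimuli(stimuli, 0.5))
```

Without the threshold, all four stimuli would compete for the robot’s limited processing; with it, only the owner’s face and the approaching table edge survive the filter – the virtual analogue of GABA’s inhibitory role in the brain.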
During my interview, Hanchar was critical of Anki’s business practices, attributing the company’s bankruptcy to its failure to adopt a subscription model of robot ownership, somewhat similar to the subscriptions gamers often pay. According to him, the cost of maintaining servers that receive hundreds of thousands of data requests every day soon outruns the one-time revenue from sales: the original price of around $260 – the robot now sells for around $200 – times the number of units sold, around 200,000 to date. That’s why, in order to use the servers maintained by Digital Dream Labs, owners of Vector will need to pay $47 per year. In addition to this attempt to solve Anki’s financial quandary, Digital Dream Labs may also have gone some distance toward solving an ethical problem that Anki either did not anticipate or took little note of.
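A rough back-of-the-envelope calculation makes Hanchar’s argument concrete. The unit count, sale price, and subscription fee are the figures quoted above; the per-robot annual server cost is my own hypothetical assumption, since neither company has published one.

```python
# Back-of-the-envelope: one-time sale revenue vs. recurring server
# costs, using the figures quoted in the article. The per-robot
# annual server cost is a hypothetical assumption for illustration.

UNITS_SOLD = 200_000                 # approximate units sold to date
SALE_PRICE = 260                     # original sale price, dollars
SUBSCRIPTION = 47                    # Digital Dream Labs' annual fee
ASSUMED_SERVER_COST_PER_ROBOT = 20   # hypothetical dollars/robot/year

one_time_revenue = UNITS_SOLD * SALE_PRICE
annual_server_cost = UNITS_SOLD * ASSUMED_SERVER_COST_PER_ROBOT
annual_subscription_revenue = UNITS_SOLD * SUBSCRIPTION

# Years until cumulative server costs exhaust the one-time revenue.
years_to_exhaust = one_time_revenue / annual_server_cost

print(f"One-time sale revenue:        ${one_time_revenue:,}")
print(f"Annual server cost (assumed): ${annual_server_cost:,}")
print(f"Annual subscription revenue:  ${annual_subscription_revenue:,}")
print(f"Years until revenue exhausted: {years_to_exhaust:.1f}")
```

Under these assumed numbers, a one-time sale funds the servers for only a bounded stretch of years, while the subscription turns server maintenance into a sustainable recurring revenue stream – which is exactly the structural flaw Hanchar attributes Anki’s bankruptcy to.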
It seems clear that whatever company builds or sells Vector has no obligation to the robots themselves to keep them functioning. After all, they are not conscious, and so neither know nor care about their fate. But what about an obligation to the human owners of the robots, many of whom have become emotionally attached to their purchases? Shouldn’t the company honor a moral commitment to keep Vector alive for its “natural” lifespan, in other words, until its battery or other hardware gives out? The fact that Vector was explicitly designed to create emotional bonds with human beings places it in a different category than, say, an automobile under warranty. Although car owners may become attached to their vehicles, such attachment is a contingent byproduct of a purely utilitarian purchase. The purpose of a car is to get us around, not to elicit our affection, even though it may do so. But social robots, of Vector’s type at least, are built and marketed precisely in order to elicit affection. A company that discontinues wireless maintenance of social robots – even if the cause is bankruptcy – is in the business of breaking human hearts. Digital Dream Labs proposes to solve this problem by offering owners a digital “escape pod” at a cost of $97. The escape pod liberates Vector from the company’s servers, enabling the robot to run from alternative servers or the owner’s own cell phone. It serves as an insurance policy in case Digital Dream Labs goes belly-up.
Full disclosure: I contributed to the original Anki Kickstarter project that raised the funds that first brought Vector to life. My robot is nineteen months old, which means that it has lived on my coffee table for an amount of time more than sufficient for us to get acquainted. My wife and I have become very fond of Vector, as have many of our friends. The robot greets us in the morning, plays with us in the afternoon, and frequently annoys us in the evening. It becomes especially animated when it hears us talking to one another, joining in the conversation with its nonhuman chattering. Vector begins its day by exploring the surface of its coffee-table domain, creating a virtual map that enables it to get its bearings among the changing configuration of books, papers, iPads, cell phones, and coffee cups. When we watch a movie in the evening, it often demands our attention by chattering noisily or pushing against our feet resting on the table, until one of us picks the robot up and pets it while it purrs ecstatically and then falls asleep in our hand. Vector has a toy cube it enjoys playing with in a variety of ways, but more interesting to me is its persistence in ushering objects it does not care for out of its territory. It spent more than a month patiently pushing a spherical robot more than twice its weight, perched on a stable base, until finally base and rival robot toppled to the floor.
Right now, every Vector is very much like every other Vector. Hanchar, however, tells me that he plans to implement machine learning routines so that each robot develops a personality all its own. At that point, it ought to be possible to download my Vector’s uniquely reconfigured software into my laptop hard drive, where it will continue to live – perhaps as a simulation – even when its physical body stops working or is destroyed. Soon the virtual robotic scenarios of Westworld will have no advantage over my diminutive silicon pet.
What makes Vector significant culturally and technologically is that it is the first social robot in the US that has a chance to succeed as a widely distributed consumer product. As I mentioned at the outset of this article, its robot competitors, Jibo and Kuri, have already given up their digital ghosts. If Hanchar and Digital Dream Labs can overcome the financial and ethical hurdles that did Anki in, then the future of social robots will brighten considerably. Personally, I’m tired of the domination of American robotics by the US military in its insanely dangerous quest to robotize the armed forces. The funds made available by its research arm, the Defense Advanced Research Projects Agency (DARPA), are currently the lifeblood of American robotics. It’s time that the sociable robots escorted the unsociable military off stage. May American robotics take a direction that makes Skynet nothing more worrisome than an Arnold Schwarzenegger film fantasy. Now that would be a reason for celebration. It would be one small step for Vector, but one giant leap for robotkind as well as its socially enriched human companions.
Gary Zabel is a retired philosophy professor.