
The Uncanny Valley

  • Writer: Laurent Bouvier
  • Sep 28
  • 2 min read

In 1970, Masahiro Mori, a professor of engineering in Tokyo, published a seminal essay about human responses to robotlike forms.


Mr. Mori proposed that the increasing humanlikeness of a robotic agent initially elicits positive responses from humans. Beyond a certain point, however, when the robot becomes almost human, further humanlikeness leads to a decline in positive responses and, eventually, to negative responses to the non-human agent. He named that dip in affinity the ‘uncanny valley’ (‘UV’) (explanatory graph here).


The UV has obvious implications for artificial intelligence and robotics: a deliberately less humanlike design, such as a stylized avatar or a synthetic voice, is more likely to produce a quality human-robot interaction than a closely humanlike, yet imperfect, agent.


An explanation comes from predictive processing in neuroscience. As explained in Be More Bayesian (2023), the human mind assesses reality by assigning probabilities to hypotheses based on sensory cues and by continually updating those hypotheses as incremental sensory data arrive. The brain’s objective is to minimize the prediction error between what it expects and what its senses report.
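
To make that mechanism concrete, here is a minimal sketch in Python of the kind of Bayesian updating described above; the hypothesis, the cues, and the likelihood numbers are illustrative assumptions on my part, not figures from Mr. Mori's essay or the cited research.

# Minimal sketch of Bayesian belief updating (illustrative numbers only).
# Hypothesis H: 'the agent is human'. Each sensory cue updates P(H) via Bayes' rule.

def update(prior, p_cue_given_human, p_cue_given_robot):
    # Posterior P(human | cue) from the prior and the two likelihoods.
    numerator = p_cue_given_human * prior
    evidence = numerator + p_cue_given_robot * (1 - prior)
    return numerator / evidence

belief = 0.95  # strong initial prior: the agent looks human

# A jerky movement is far more likely from a robot than from a human.
belief = update(belief, p_cue_given_human=0.05, p_cue_given_robot=0.80)
print(round(belief, 2))  # ~0.54: confidence halved by a single cue

# A glitchy phrase compounds the evidence against 'human'.
belief = update(belief, p_cue_given_human=0.10, p_cue_given_robot=0.70)
print(round(belief, 2))  # ~0.15: the belief has collapsed

Two mildly ‘robotic’ cues are enough to drive a 95% belief down to roughly 15%; that arithmetic is the abrupt dip in affinity.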


Following that process, when interacting with a near-human agent presumed to be human on the basis of initial sensory cues, individuals anticipate what a human would or should do next. When that agent, such as an android, violates those sensory predictions, e.g., through imperfect movements or speech, the prediction error signal spikes and individuals' sentiment turns abruptly negative (see research here).


The best illustration I can think of is the following: someone is offered a glass of the sublime 1970 Château Haut-Brion, but, without the taster's knowledge, the wine has been visually altered to look like water. Notwithstanding the quality of the wine, most tasters would recoil at the first sip because the experienced taste would dramatically diverge from the anticipated one.


When a high-confidence assumption (it is a human, it is water) collides with disconfirming evidence (it is a robot, it is wine), the brain biologically produces an ‘off’ reaction because it has been wrong-footed.


An analogy can be drawn with the psychological process underlying customer satisfaction, as described by expectancy-disconfirmation theory. Its central idea is simple: satisfaction is determined by the degree to which a product's performance, as perceived by the consumer, meets that consumer's expectations. A customer who has been wrong-footed is unlikely to be a happy customer.
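
In a stylized form (the linear gap and the ten-point scale are my simplifications, not the theory's canonical specification), the idea reduces to a one-line function:

def satisfaction_signal(perceived, expected):
    # Expectancy-disconfirmation: satisfaction tracks the gap, not absolute quality.
    return perceived - expected

print(satisfaction_signal(perceived=6, expected=9))  # -3: decent product, disappointed customer
print(satisfaction_signal(perceived=6, expected=4))  # +3: same product, delighted customer

The same perceived performance produces opposite reactions depending purely on what was expected, just as the same wine delights or repels depending on whether the taster anticipated wine or water.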


Communication can also encounter an uncanny valley. A management team is expected to function in a predictable (human) way. When bad news is delivered on an earnings call, investors anticipate somberness. If the news is instead presented with incongruent positivity, executives violate those predictions and wrong-foot investors, which can trigger a UV-like reflex and a poor share-price reaction.


There is also a case of ‘reverse’ UV. When humans ‘surface act’, e.g., by robotically smiling at a customer when they do not feel like it, they behave in an uncanny way and trigger feelings of distrust.


Whether for an android or a human, ensuring coherence between appearance and reality is the foundation upon which trust is built. It is not just psychological. It is also instantaneously biological.
