Despite the vertiginous advances in cognitive technologies, their commercial success will be determined by the willingness of humans to embrace them. In practice, acceptance requires overcoming, at some point, a strong aversion from users. These dynamics need to be understood in business cases before rushing to a decision.
The promise of intelligent technologies is fascinating: There is a wave of predictions presenting a future with robots taking over most human tasks, creating a much higher standard of living, where humans will concentrate on value-added tasks (there is a pessimistic current of thinking predicting we will end up as servants to robots, but we are not going to deal with that in this post). The question that is often overlooked is how to make the journey to that promising future.
A technology's lifecycle typically follows an upward path, whether as an S-curve, a linear progression, or a disruptive leap. In the case of human-machine interaction, however, this path does not describe the likelihood of acceptance on the user side: the shape of the curve and the process dynamics are quite different, and understanding that difference can determine the success or failure of a given project. This has been known for about 40 years, and the acceptance curve is called the uncanny valley. The concept was coined by the Japanese robotics researcher Masahiro Mori, and it describes our emotional response when interacting with a robot. It is shown in the graph below:
Source: ACM, 2016
Initially, our response to a robot grows more positive as its appearance becomes increasingly human. However, at some point, when the robot appears almost, but not quite, human, it triggers a strong revulsion. That is the uncanny valley. Only when the robot's appearance becomes practically indistinguishable from a human being's does the emotional response turn positive again.
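The qualitative shape of that curve can be sketched with a toy function. The thresholds and slopes below are illustrative assumptions, not Mori's data: affinity rises with human likeness, drops sharply into the valley as the robot gets close to human, and recovers only near full likeness.

```python
def affinity(likeness: float) -> float:
    """Hypothetical affinity as a function of human likeness in [0, 1].

    Piecewise-linear sketch of the uncanny valley: the breakpoints
    (0.7 and 0.9) are arbitrary illustrative choices.
    """
    if likeness < 0.7:
        # Rising acceptance: more human-like, more liked
        return likeness
    elif likeness < 0.9:
        # Sharp drop into the valley: almost, but not quite, human
        return 0.7 - 5 * (likeness - 0.7)
    else:
        # Recovery: practically indistinguishable from a human
        return -0.3 + 13 * (likeness - 0.9)

for x in (0.0, 0.5, 0.8, 0.9, 1.0):
    print(f"likeness={x:.1f} -> affinity={affinity(x):+.2f}")
```

The function is continuous at both breakpoints, which mirrors the graph: a gradual climb, a steep plunge, and a steep recovery once the "not quite human" region is behind.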
The point here is that the uncanny valley does not apply only to robotics, but to any technology that involves interaction between humans and intelligent machines. In this case, the key is to identify the factors defining the human-machine experience, beyond physical appearance.
Different technologies are at different points in the uncanny valley. For instance, digital representations of humans crossed the valley years ago. Other technologies are still making their way up from the bottom of the uncanny valley. We are going to explore two of them, how the uncanny valley applies, and the main implications:
Customer-facing Artificial Intelligence: This is a clear example of the illusion of upward evolution when the uncanny valley is overlooked. The benefits that AI can bring are great: an always-on, self-service offering, which can create top-quality experiences when combined with big data and analytics. It has been proposed as the solution to chronic BPO challenges, as organizations struggle to find contact center staff who combine excellent skills, job loyalty and competitive salaries.
Basic IVR systems provided a cost-effective solution to simple problems. With the evolution of artificial intelligence, the range of problems that can be solved is potentially much broader. However, when dealing with more sophisticated systems, the customer experience can be summarized in one word: frustrating. There is an overall impression that the machine does not understand customer needs beyond a set of predefined scenarios. This, along with the need to repeat questions several times, leads to an urge to talk to a human being. In the case of chatbots or personal assistants, it takes just a few interactions to reach the limits of that intelligence. Children seem to be particularly talented at that. Both cases suggest these technologies sit in the low region of the uncanny valley.
These systems cannot yet recognize or assess their own limitations. Furthermore, they are unable to deal with uncertainty and ambiguity, both present to a high degree in human communication. The expected level of interaction needs to be richer than simply providing an answer.
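Why the limits appear after just a few interactions becomes obvious once you look at how a rule-based assistant works. The sketch below is a deliberately minimal, hypothetical example (the intents and answers are invented): anything outside the predefined scenarios falls straight into the fallback, which is exactly the "please repeat that" loop customers find so frustrating.

```python
# Hypothetical intent table: the assistant only "understands" these topics.
INTENTS = {
    "balance": "Your current balance is available in the app.",
    "hours": "We are open 9am-6pm, Monday to Friday.",
}

FALLBACK = "Sorry, I did not understand. Could you repeat that?"

def reply(utterance: str) -> str:
    """Return a canned answer if a known keyword appears, else the fallback."""
    text = utterance.lower()
    for keyword, answer in INTENTS.items():
        if keyword in text:
            return answer
    # Anything off-script lands here, no matter how clearly it is phrased
    return FALLBACK

print(reply("What are your opening hours?"))            # scripted: works
print(reply("My card was eaten by the ATM, help!"))     # off-script: fallback
```

Real systems use statistical intent classifiers rather than keyword tables, but the failure mode is the same in kind: outside the trained scenarios, the system has no way to recognize its own incomprehension.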
Anthropomorphic robots: After creating outstanding value in several sectors (e.g. manufacturing), the robot industry is turning to fulfilling human and social needs, in areas such as elderly care, domestic services, or even children with special needs. In principle, robots can potentially make a substantial improvement in the quality of life of many people.
This case combines both physical appearance and artificial intelligence. One of the companies actively researching this area is Hanson Robotics, which aims to bring to the world "humanlike robots with greater-than-human wisdom." The company believes that, by combining humanlike facial expressiveness with language technology, it can build strong emotional connections, paving the way for the services mentioned above.
I invite the reader to check the link below and assess where in the uncanny valley this robot sits. In order to create an interaction resembling the human experience, and to be accepted by people (particularly those with special needs), it seems obvious that they still have a long way to go.
There is an element missing from cognitive systems and artificial intelligence when it comes to complex interaction with humans: emotional intelligence. These machines lack even a basic consciousness. It is all too obvious that the user is dealing with just an algorithm or engine, however sophisticated, and no empathy emerges from those interactions. There is a clear contrast between the enthusiasm of developers and the reaction of users (or viewers).
This raises the question of whether we are willing to trust a robot, knowing that in the end it is a robot. On the other hand, if a given robot passes the Turing test (designed to determine whether a machine's behavior is indistinguishable from a human's), a new breed of problems may appear. A clear example is the computer-generated Japanese pop idol Aimi Eguchi, which was a clever digital composite of the features of six existing members of the idol group AKB48, able to fool, and later shock, millions of fans in 2011.
It is not clear whether society will allow robots to enter areas considered genuinely human. We may end up setting limits, establishing which activities can be performed by robots and which ones will remain in the human domain. This is particularly clear in the case of ethical dilemmas (see the link below). It will be necessary to create a multidisciplinary dialogue to address these questions.
For the time being, the most immediate step for any organization developing these solutions is to assess where they stand in the uncanny valley, and what it will take to cross it. That assessment should be made not by them, but by their customers, the ultimate decision makers.
Anthropomorphic robot: http://www.cnet.com/news/crazy-eyed-robot-wants-a-family-and-to-destroy-all-humans/
AI and ethical dilemmas: https://www.linkedin.com/pulse/conciencia-artificial-vs-inteligencia-antonio-j-ramirez (in Spanish)