
Designing Non-Creepy AI Experiences


This summer, I sat down with Christoph Koch to talk about the brave new world of consumer-facing artificial intelligence, some of the core anxieties that consumers experience, and how companies can address them. The resulting interview appeared in the October issue of the German business magazine Brand Eins. The very rough translation from German to English below was powered - oh, the irony - by the AI-enabled machine translator https://www.deepl.com. I am deeply grateful to Brand Eins and Christoph Koch for their interest in research on this important subject. Enjoy the read!

More blog stories on designing AI experiences can be found here.

———

Brand Eins: Professor Giesler, a video recently circulated online in which two people throw a box back and forth and a robot in the middle unsuccessfully tries to intercept it. After a while, the robot attacks the people and grabs the box. The video is a fake; the robot is a CGI creation. Does the fact that it was nevertheless shared thousands of times show how scared people are of AI and a possible rebellion of the machines?

Markus Giesler: Yes, this is a good example. This basic theme and these anxieties are deeply anchored in our culture, and much of it is not rational but emotional and mythological. The fear that technology will turn against us is a classic narrative that all of us have internalized - and it is particularly evident in the field of consumer-facing AI at the moment. Technology companies are very optimistic about this area and are investing billions. Consumers, on the other hand, often react with fear, incomprehension, and even anger. This contradiction - where it comes from and how companies can overcome it - became a starting point for my colleagues Stefano Puntoni, Rebecca Reczek Walker, Simona Botti, and me. And it led us to dig deeper.

From "The Sorcerer's Apprentice" to "The Matrix", from "Metropolis" to "Frankenstein" - dystopias of a world in which man's invention turns against him are omnipresent in fiction. Why do they also influence our thinking so strongly in reality? Nobody is afraid of the dragons in "Game of Thrones" in real life.

This has to do with the fact that the relationship between man and technology is such an existential one. We are constantly renegotiating what it means to be human. Technology continually calls our humanity into question and restarts this debate. In his very interesting book "TechGnosis", Erik Davis, a technology philosopher from California, looked at how far back these narratives can be traced. Even in ancient times, people told each other stories about the relationship between man and technology. And such narratives - such as technology that "gets out of hand" - play a major role in almost all the technology debates we are currently engaged in, whether about algorithms or privacy, and modulate these discussions.

How do these stories spread in society?

Until now, it has been common to assume a classical diffusion model. Technology was imagined as a virus that gradually conquers a society: first the well-educated, younger early adopters, those with the least fear of contact; then the broad masses; and at some point the few older stragglers, who were most sceptical. And finally everyone has adopted it. Empirically, however, this assumption cannot be supported; it is not correct.

But?

Among early adopters in particular, there are often very great doubts as to the trustworthiness, credibility, and usefulness of new technologies. With AI, for example, much of the concern is about privacy, about how knowledge relates to power, about the extent to which I disempower myself through AI, in the workplace for instance. This scepticism on the part of early adopters does not only concern AI, by the way; it can also be seen with new medical devices and similar innovations.

If the old theories about the spread of technology are no longer any good, which ones do you advocate?

I personally advocate a more complex, behavioral approach: How do we develop an intimate relationship with technologies like an AI language assistant? What does it take to let them into our lives? Many factors play a role here: Which social system do I belong to? Do I live in a family or alone? Who are my role models, whom do I believe, how do I consume knowledge? All of this matters. And how we answer these questions is in turn influenced by the myths and stories we know.

Over the past few months, it has become apparent that providers of voice assistants have made unintentional recordings, some of which were evaluated by human employees. Such revelations always lead to indignation - but does usage actually change?

No, these are purely emotional crises of trust, and they are just as structured and coded as human relationship crises. If I quarrel with my girlfriend, there is stress for a moment, but then it is forgotten again. The human tendency to keep such crises of trust in mind for long is weaker than you might think. For a real separation, something extremely serious has to happen.

What would that be in the technology sector?

The crash of the Concorde was a single incident that discredited the entire category to such an extent that supersonic passenger jets were given up on completely. So there are cases where things escalate so far that a rational debate arises from an emotional reaction, and suddenly everything is called into question. But it rarely comes to that; it's the absolute exception. It's more likely that some people spend a week or so less on Facebook when they get upset about the latest privacy scandal. But they don't sign off permanently.

You are currently analyzing the anxieties consumers experience around AI offerings. What have you found out so far?

So far, we've seen that there are four different negative discourses, core anxieties around AI, if you will. The fear of surveillance, for example, which we label "Big Brother". Or the fear of being enslaved by intelligent machines, as in "Metropolis". The third great fear is that of the end of free will, which is beautifully portrayed in the film "Minority Report"...

 ... People are arrested for crimes they haven't committed yet, just because the AI has calculated that they will commit them soon, right?

Exactly. And the fourth central fear, finally, is that we lose everything that makes us human; here the film "Ex Machina" is a good example. The interesting thing is that these four fears map very nicely onto the four basic abilities that AI devices possess. The ability to make better and better predictions corresponds to the human fear of becoming completely predictable, for instance. The ability to understand and process language ever more precisely is mirrored by the fear of being overheard and monitored.

What ability corresponds to the fear of being enslaved by machines? 

The ability of AI to produce things and become more and more creative. The entire discussion about who becomes superfluous in working life, and when, falls into this domain. And the fear of losing the uniqueness of humanity is connected, finally, with the ability of learning systems to imitate humans ever more effectively, so that it becomes harder and harder to tell man and machine apart.

Can you give us examples from everyday life where these different fields of conflict emerge?

An example from the productivity domain would be the robot vacuum cleaner. When I gave one to my mother, she decided against it after a while because she didn't want her role as the person in charge of the house, responsible for its cleanliness, to be challenged. People often start to denigrate or discredit the capabilities of the technology: the robot doesn't vacuum as thoroughly as I do. Or: the GPS doesn't know the best route, I still do. As a result, we considerably limit when and how we use the machine. Behavioral researchers call this "fencing", i.e. demarcation.

What do you conclude from this division into the four different fields?

The new and exciting thing is that these often emotional fears around AI are not diffuse but rather specific. There is a relatively robust narrative pattern with distinct genres, and each of these genres ties back to a core AI capability.

Have you also been able to observe a change over time? 

This is very important. At an earlier stage of the project, we evaluated media coverage between 2013 and 2018, and across this period and thousands of articles we could see that the negative tonality has continued to increase over time. From this perspective, the fear of AI is growing rather than shrinking, and the social discourse is becoming increasingly negative and anxious.

Which of the four fears is strongest, which least pronounced? 

Such a hierarchy is hard to determine. The four pillars vary in their strength and tend to come to the fore at different times, triggered by various events. Something like the Cambridge Analytica scandal ensures that the first and fourth pillars - fears about privacy and manipulation - are emphasized. When large companies announce mass layoffs, this fuels the fears of the third pillar, i.e. the fear of being replaced.

Do these fears also have something to do with the fact that AI is increasingly becoming a black box, where developers themselves often no longer know how a learning system arrived at its result?

This black box claim is part of the marketing rhetoric. The science fiction author Arthur C. Clarke is famous for his statement that "any sufficiently advanced technology is indistinguishable from magic". Imbuing their solutions with this magical, mystical quality is something companies like IBM routinely do, for instance with their "Watson" AI. In the entire advertising language around Watson, the claim is repeatedly made that Watson's powers are essentially inexplicable, almost magical, and that the machine is so "ingenious" that even IBM no longer understands exactly what is going on. This is marketing storytelling, and it heightens the fascination of such systems - but it can of course also fuel additional fears.

What role do the media play? Der Spiegel's cover "You're fired" - a robot hand discarding a human worker - appeared in 2016, and an almost identical one in 1978. So are fears of AI a lucrative topic?

The media have a very important role to play because they moderate this meaning-making process on the one hand and shape it on the other. These myths or narratives like "Big Brother", "Metropolis" and so on are meant to provide answers to the fundamental questions of our human existence - in other words, easy-to-understand answers about how things work. The media cannot completely escape these narratives either, because even as a journalist one inevitably falls back on explanatory building blocks and knows intuitively which ones work better and which ones work less well.

What do you recommend to companies that offer AI products to avoid negative reactions?

We advise, for example, not to talk about AI in terms of which human abilities it can replace, but which ones it can improve or complement. So instead of advertising how great certain algorithms have become over time, it is better to focus on people. Google, for example, had this problem with its Google Home smart speaker. People tried it out but eventually placed it in unimportant rooms such as the bathroom. Only when the company developed a different advertising rhetoric and concentrated on meaningful human practices did this change. The message became: you have to be a good parent, run the house, succeed in a stressful job - Google supports you in all of these roles.

What else do companies need to keep in mind?

One very important rule: technology, including AI, never exists just in the laboratory, but always in people's lives. This adds complexity; there will always be things, consequences, effects that could not have been foreseen during R&D. As a result, managers should look at technological innovations not only from a purely economic or engineering point of view, but also from the perspective of the human user and decision-maker, as sociologists and psychologists do. And from this perspective, it is quite clear that the acceptance of technology can also be designed.

What do you mean by that?

How we experience a product or service as consumers can be designed. Technology will only prevail if people's behavior and ways of thinking adapt to it. Technology itself is an innovation, but people must also be socialized into using and accepting this innovation.

An example?

The fact that, in the morning, the first practice most of us engage in is picking up our cell phones is not something natural; it has been created over time and shaped by the technology behind it. We shape technology, but technology also shapes us in return. We personalize our smartphone, but our smartphone also personalizes us. In research, we call this "object agency": the object itself has a certain power, an ability to change us. Managers often forget that. They are often terribly enthusiastic about their products but completely overlook the fact that this enthusiasm is also the result of social change processes. For the people out there to be just as enchanted, it takes more than a great advertising campaign. You have to be able to engage with markets behaviorally and design not only the product but also the desired practices and language around it.

When was the last time an AI product creeped you out?

This morning at seven, when Alexa woke me up with an audio commercial for Adam Sandler's new movie. Only later, by digging deep into the app settings, did I find out that Amazon had taken the liberty of switching my alarm tone to advertising.

Markus Giesler

Markus Giesler draws on concepts from economics, technology studies, and sociology to inform his research in marketing. He examines how ideas and things (products, services, experiences, technological innovations, intellectual property, brands, etc.) are made valuable over time, with research focused on improving marketing strategy through an understanding of markets as evolving social systems. Giesler's research has been supported by the Social Sciences and Humanities Research Council of Canada (SSHRC) and the European Research Council (ERC) and published in top-tier academic journals such as the Journal of Consumer Research and the Journal of Marketing. Giesler has an extensive entertainment industry background: he founded his own record label at age 17 and worked in various production and marketing roles for over a decade. He lives in Toronto, Canada.