Over the past year, I’ve been thinking a lot about “chatbots”; that is, computer programs and algorithms designed to have a conversation with you. Every major technology company is currently in an arms race to develop the best conversational interface, and we’re beginning to see them everywhere: not only so-called “virtual assistants” like Siri, Alexa, and Cortana, but also platforms like Facebook that offer complex AI and natural language processing tools to automate customer service functions like ordering an Uber, sending someone a birthday present, or making reservations.
A computer that “truly understands you” is the stuff of utopian sci-fi. But it strikes me that, once again, the most powerful, cutting-edge, and complex technology is simply being used to create a smarter advertising and commerce platform. We’re not creating bots that “truly understand you”. We’re creating a team of customer service robots. It’s less like HAL 9000 or the computer from Star Trek and more like a really nice Walmart greeter or a helpful clerk at the grocery store who smiles and asks if you “need help with anything?”
So, once all of the obligatory social interactions of commerce have moved to the cloud, what will happen to the low-wage workers obligated to make small talk with you at the checkout line?
Of course, the irony is that the labor class has always been treated like robots. Uber’s CEO proudly proclaims that its human fleet is just a terribly inefficient stop-gap on the way to self-driving cars, and drivers can be penalized for deviating too much from the app’s prescribed route. Postmates delivery people are forbidden to shake people’s hands. I once worked at a call center collecting surveys, and I remember being penalized for going off script too often. The manager claimed that it sacrificed the scientific neutrality of the survey, but I remember feeling, even then, like a chatbot.
So, as both an artist and writer, I began wondering: What is the language of these chatbot interactions? How do we modulate our interactions when we are talking to a machine, and how does this code-switching change our person-to-person interactions? Can this technology, instead of being used to sell something, be used to tell a story, have a conversation, or point to something human about the way we interact with machines?
My first stab at tackling these ideas came earlier this year at the Chicago Underground Film Festival. I created an interactive narrative that existed as a standalone kiosk outside the theater, with a monitor and keyboard. Inside the piece is AIFA, an Artificially Intelligent Feedback Agent. AIFA, represented on screen by a crudely rendered 3D avatar with a headset microphone, is tasked with the tedious job of collecting feedback about you and how much you enjoyed the festival. Using this familiar format, I created a branching narrative about labor, tedium, big data, and how we talk with machines.
AIFA asks you questions, and you type the response. He promises a special reward at the end. He attempts to remain neutral but sometimes he can’t help himself. Refusal to respond in the expected way could cause AIFA to glitch out.
Or it could cause him to go on a long, meandering monologue about the last time he went to a film festival. Or your refusal may get you escalated to the manager (who looks just like AIFA, only with a tie), who patiently tries to diagnose the “technical problem” forever and ever on an infinite loop.
Eventually the survey questions morph into interview questions (“Do you consider yourself a natural leader?”), and you realize what your special reward is: a job as AIFA, forever. AIFA is “set free”, and you are transported to a calming blue virtual void, given a headset microphone, and told not to forget to smile.
It’s part Blade Runner, part No Exit, and part DMV bureaucracy.