Last year, the German performance artist Simon Weckert tricked Google Maps with 99 smartphones. The British artist James Bridle trapped a self-driving car with nothing more than a couple of lines painted on the road, while the art project ImageNet Roulette by Kate Crawford and Trevor Paglen uncovered the harmful effects of image-recognition algorithms that categorize humans. All three initiatives explore how art can delineate the limits and weaknesses of smart algorithms, keep the narrative of technological omnipotence at bay, and help correct technology's inherent prejudices and social biases.

How to cause a virtual traffic jam?

Algorithms in the realm of narrow artificial intelligence (ANI) perform excellently when the task is to categorize or cluster huge amounts of data along various dimensions. Their “knowledge” is rather limited, though, when they face real-life conditions – a weakness exposed not only by the IT specialists who build such software and by ethical hackers who probe its flaws in order to improve it, but also by numerous artistic projects.

The algorithm behind the Google Maps app locates traffic jams using the GPS data from users’ phones. If, for example, 30 people sit on a bus and Google detects that their 30 phones are changing location very slowly, the app marks that stretch of road in red for every other user as well: a traffic jam. Armed with this knowledge, Simon Weckert showed last year how easy it is to trick the app: he slowly walked through visibly empty streets of Berlin with 99 smartphones, and Google Maps promptly displayed a virtual traffic jam.
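Google has never published its exact heuristic, but the behavior described above can be approximated in a few lines. The sketch below is an assumption for illustration only – the function and its thresholds are invented, not Google's: a road segment is flagged as jammed when enough devices on it report very low speeds, which is exactly the signal Weckert's 99 phones produced.

```python
# Toy congestion heuristic (illustrative assumption, not Google's algorithm):
# flag a road segment when many devices on it report very low speeds.
from statistics import median

def is_congested(speeds_kmh, min_devices=20, slow_kmh=10.0):
    """speeds_kmh: recent GPS speed reports from phones on one road segment."""
    return len(speeds_kmh) >= min_devices and median(speeds_kmh) < slow_kmh

bus_passengers = [3.0] * 30          # 30 phones on one slow bus
print(is_congested(bus_passengers))  # True - a real jam

weckert_walk = [4.5] * 99            # 99 phones carried down an empty street
print(is_congested(weckert_walk))    # True - the same signal, spoofed
```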

How to trap self-driving cars?

In another case, the Athens-based James Bridle set out to “outflank” self-driving cars as part of an art project. His idea: the software controlling these vehicles first learns the highway code, including the meaning of the lines painted on the road and the roadside. Where a solid line runs alongside a dashed one, for example, a driver may cross only if the dashed line is on their side; if the solid line is nearer, crossing is forbidden. Now suppose a self-driving car meets a circle painted on the road out of a dashed and a solid line, with the dashed line on the outside.

In this case, the vehicle’s software senses that it may cross the line and enter the circle – but there it gets trapped, because the inner line is solid, which it must not cross, so it cannot leave. The car will not move out of the circle unless a human being arrives to lend a hand. IT specialists and hackers have used similar “tricks” to expose the weaknesses of self-driving cars: in February 2020, for example, researchers manipulated the camera system of several Tesla cars with tiny, simple stickers, causing the cars to speed up dangerously.
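The trap logic itself is almost trivially simple, which is Bridle's point. Below is a toy encoding of the crossing rule as paraphrased above – a hypothetical sketch, not how any real autonomous-driving stack represents road markings:

```python
# Toy encoding of the double-line rule (hypothetical sketch, not a real
# AV planner): a vehicle may cross only if the dashed line is on its side.
def may_cross(nearest_line: str, farther_line: str) -> bool:
    """Line styles as seen from the vehicle: 'dashed' or 'solid'."""
    return nearest_line == "dashed"

# Bridle's circle: dashed on the outside, solid on the inside.
print(may_cross("dashed", "solid"))  # True  -> the car may drive in
print(may_cross("solid", "dashed"))  # False -> it may never drive out
```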

Trombone player, prophet, or shepherd guiding wild animals?

Some months ago, social media feeds were inundated with selfies annotated with short captions. Some photos were paired with labels such as “person” or “face”; others got more elaborate designations like “economist” or “doctor”. Annotations such as “prophet”, “non-smoker”, or “trombone player” were not rare either. The common denominator of these image descriptions was that the people in the pictures were reduced to a single word, which often amounted to a sexist or racist stereotype.

This was ImageNet Roulette, a project launched by researcher Kate Crawford and artist Trevor Paglen, aimed at drawing attention to the harm that facial recognition and other people-categorizing algorithms can potentially cause. In reaction, by last September, ImageNet – one of the biggest visual databases in the world, created by researchers at Stanford and Princeton Universities – removed around 600,000 images after a review flagged 438 of its person categories as “offensive” regardless of context. The removal happened as part of an examination of whether ImageNet encodes bias. The examination found that software trained on it reproduced the gendered and racial power relations hidden in the data – and, moreover, made them visible and exaggerated them.

How does a chatbot tell a family story?

This is how the development of artificial intelligence becomes the latest battlefield of social justice – which makes it essential to scrutinize AI, explore its impact on marginalized social groups, and mitigate its negative consequences. Transmedia artist Stephanie Dinkins strives to do exactly this with her own means. She believes the future of AI can be shaped if the right questions are raised. Her project Not The Only One (N’TOO) is a multigenerational memoir of an African American family, told by a smart algorithm.

Dinkins fed the algorithm stories told by the women of her family, and the chatbot she developed is continuously supplied with a culturally sensitive dataset. Based on this material, N’TOO answers incoming questions in the first person singular, and over time it expands its vocabulary and narrative abilities through users’ input and participation. The artist aims to show how storytelling, technology, and art intertwine – and how this helps rewrite the narrative around artificial intelligence.


The illusion of the neutral, impersonal, and impartial AI

The artistic projects above identify smart algorithms as new mechanisms of social discrimination, and they highlight the role art can play in our hyper-technologized world. Countless media channels proclaim that AI-powered software performs better than humans – whether in medicine or education – that it “works” almost without error, increases efficiency, and cuts costs. This techno-optimistic rhetoric tends to lose sight of the fact that algorithms are created by humans, from data that tells human stories and was collected by humans; every single touchpoint in the creation and operation of an algorithm is therefore steeped in social preconceptions, ingrained prejudice, and bias.

Let me show how that works through the simplest example of how machine learning operates. Take image-recognition algorithms, as they are among the most common types of ANI and the easiest to explain. If developers want the software to recognize cats, for example, researchers show the program many thousands of pictures previously labeled “cat”. This categorization (or, professionally speaking, annotation) of images is carried out manually by humans – so if the labels carry any social prejudice or bias, the algorithm automatically learns it.
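A minimal sketch of this pipeline might look as follows – the data and names here are invented for illustration; the essential detail is that the label vector comes entirely from human annotators:

```python
# Minimal supervised-learning sketch (invented data for illustration).
# Assumes images have been preprocessed into fixed-length feature vectors;
# the crucial point is that every label in y_train was chosen by a human.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 64))    # 5,000 "images", 64 features each
y_train = rng.integers(0, 2, size=5000)  # human-assigned labels: 1 = "cat"

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)              # learns whatever the labels encode

new_image = rng.normal(size=(1, 64))
print(model.predict(new_image))          # prediction inherits the annotators' choices
```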

The project by Crawford and Paglen also draws attention to this, as well as to the fact that the pictures later fed to the algorithm for training are likewise selected by humans. If the software learns that cats are hairy animals – because humans only selected hairy cats for it to learn from – it won’t find hairless sphynx cats “cat” enough to label them as such. And this can easily happen with subjects other than cats. The concept of a neutral, impersonal, and impartial AI is thus only an illusion.
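The same failure can be reproduced in a few lines. In the hypothetical sketch below, a classifier is trained on a single invented “hairiness” feature using a sample that contains only hairy cats – and a hairless cat is duly rejected:

```python
# Toy demonstration of selection bias (invented feature and data).
# The training sample contains only hairy cats, so "hairy" becomes
# the model's working definition of "cat".
import numpy as np
from sklearn.linear_model import LogisticRegression

# Single feature: hairiness score in [0, 1]; label: 1 = cat, 0 = not a cat.
X_train = np.array([[0.90], [0.80], [0.95], [0.10], [0.20], [0.15]])
y_train = np.array([1, 1, 1, 0, 0, 0])   # only hairy examples labeled "cat"

model = LogisticRegression().fit(X_train, y_train)

sphynx = np.array([[0.05]])              # a hairless cat
print(model.predict(sphynx))             # [0] -> "not a cat", purely from biased sampling
```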

This new form of criticism coming from artists both utilizes and targets machine learning and artificial intelligence. Their work reveals how, despite the progressive rhetoric, data-driven processes are dominated largely by young white men, whose fingerprints remain on the software they create. Artists thus not only contribute to a dialogue that characterizes artificial intelligence more accurately – as data-driven clustering and categorizing systems, created by humans, that mirror and reproduce social inequalities – but also leverage technology to point the way toward a more just future, in which everyone shares in the advantages of the digital transformation.