Refik Anadol Trains AI to Dream of New York City

Art in America


By Sophie Haigney

NEW YORK — Watching Refik Anadol’s Machine Hallucination (2019) is a dizzying experience, like taking a ride at a carnival. Created with algorithms that found and processed hundreds of millions of images of New York City, Machine Hallucination is a buzzing, immersive audiovisual piece, on view at Artechouse, a new digital art space in New York’s Chelsea Market, through November 17. The gallery is a 6,000-square-foot converted boiler room, sporting the kind of scrubbed ex-industrial chic—exposed brick, soaring ceilings—that has become a cliché in New York. With Anadol’s piece projected on its walls and floor, the space conjures both nightclubs and nightmares.

Machine Hallucination whirs and throbs for thirty minutes, opening what Anadol described in an interview as a window into the “mind of a machine” as it processes images and then responds to them. There are moments when glimmers of New York are completely clear: you feel as if you’re moving over the city’s grid at a great height, or glimpse images of buildings right before they start to morph beyond recognition. At other times, you are looking at the data architecture itself: graphical plots, or the metadata tags of the original photos—keywords like COLOR and URBAN and NEWYORK.

Anadol, a media artist originally from Istanbul who now lives in Los Angeles, has been working with data for a decade to make large-scale installations of sound and light—often visualizing open-source data about cities—displayed in public spaces. In 2016, he was a resident at Google’s Artists and Machine Intelligence Program, where he learned how to use artificial intelligence as an artistic tool. He has previously created installations for Artechouse’s locations in Washington, D.C., and Miami, and was invited to inaugurate its New York venue, which opened September 6.

Anadol made Machine Hallucination with the aid of twelve studio assistants. “Data is my medium, and as a team we’ve been working with data and algorithms and trying to explore this hidden emotional experience inside this invisible world of data,” he said. His goal in this work was to turn machine learning into a narrative of sorts: to make visible the actual process of an algorithm taking in and responding to images. Anadol used several algorithms for this project. The main one, called StyleGAN, was developed by researchers at NVIDIA, a tech company that designs high-end graphics processing units (used, among other things, for video games and self-driving cars). Anadol and his studio used the neural network and various modifications to process a gargantuan dataset of publicly available images of New York City: 300 million photos, and 113 million other raw data points….
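The dreamlike morphing the piece displays comes from a basic property of GANs like StyleGAN: the trained generator maps points in a “latent space” to images, so walking a smooth path between two latent points yields a smooth morph between two scenes. The sketch below illustrates that mechanism only; it uses a toy random linear map as a stand-in generator (the real StyleGAN is a deep convolutional network trained on the photo dataset), and all names and dimensions here are illustrative assumptions, not Anadol’s actual pipeline.

```python
import numpy as np

# Toy stand-in for a GAN generator: a fixed random linear map from a
# latent vector to a small flat "image". Illustrative only -- a real
# StyleGAN generator is a deep network trained on millions of photos.
rng = np.random.default_rng(0)
LATENT_DIM, IMG_PIXELS = 64, 16 * 16
W = rng.normal(size=(IMG_PIXELS, LATENT_DIM))

def generate(z):
    """Map a latent code z to a flat grayscale 'image' with values in (0, 1)."""
    return 1 / (1 + np.exp(-W @ z))  # sigmoid squashes to a valid pixel range

# Morphing between two "scenes" = walking a straight line in latent space.
z_a = rng.normal(size=LATENT_DIM)
z_b = rng.normal(size=LATENT_DIM)
frames = [generate((1 - t) * z_a + t * z_b) for t in np.linspace(0.0, 1.0, 30)]

# Every frame decodes to a plausible image, and neighboring frames differ
# only slightly -- which is what reads on screen as buildings "melting"
# into one another rather than cutting between photographs.
```

Because every intermediate latent point still decodes to a coherent image, the transition never passes through noise; that continuity is what gives the projections their hallucinatory, in-between quality.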


Continued…