
Do Alternate Realities Exist? This Artist's Machines Are Ready to Find Out

Visual artist Refik Anadol is working with AI to see if machines can visualize alternate realities. He spoke with us ahead of a presentation at next month's Nvidia GTC.

March 25, 2021
(Machine Hallucination: New York)


Authors and filmmakers have long speculated on the existence of alternate realities, but L.A.-based visual artist Refik Anadol is working with an artificial intelligence to see if machines can do the same—via spectacular art installations.

The AI in question is tapping into quantum mechanics. "We in our daily lives are not able to see alternative dimensions, but in quantum mechanics [and] quantum computation, there is still a theory of many worlds. And in [the] subatomic world of quantum mechanics, you can see things in superposition, and we are speculating in this project that, perhaps, if AI can look at this complexity...it can see an alternative reality.

"So simply, we are watching an AI dreaming," Anadol says.

Anadol’s projects have appeared all over the world (Walt Disney Concert Hall, Centre Pompidou, Daejeon Museum of Art), and he served as an artist-in-residence for Google’s Artists and Machine Intelligence program. Is this the future of art and architecture, and does Anadol’s AI know something about other worlds that we don’t? We talked to him to find out.


How did you first deploy Generative Adversarial Networks (GANs) in order to teach your AIs to 'dream'?
[RA]
In 2016, I was an artist-in-residence at Google AMI (Artists and Machine Intelligence), which is where my team and I learned how to use AI algorithms for a project called Archive Dreaming—a purposeful speculation about the future of libraries. This was the first time I was able to work with a Deep Convolutional Generative Adversarial Network, thanks to [artist, researcher, and Google engineer] Dr. Mike Tyka, who became my mentor, and a true supporter of my very first AI journey. Since then, I’ve never stopped using GANs.

What was your source material for the AI to learn from on that project? What did you ask the AI to 'dream'?
[RA]
We were fortunate to have access to 1.7 million documents from a publicly open cultural archive, and we used this to create an installation that, as far as I know, is the first of its kind in the world to truly use an AI in this way: to speculate on an architectural future of a library. We asked ourselves: "Can a building dream? Can it hallucinate its own future?" A near-future library that can learn its own content, whose information turns into knowledge, then wisdom, and eventually a dream: this was the concept behind the project.

Refik Anadol (Image Credit: Efsun Erkılıç)

When did you first become aware of AIs?
[RA]
I was 8 years old when I saw Blade Runner, and I clearly remember my cousin saying: "These are not human. These are two androids, and one is criticizing that the other's memories are not real." I was totally inspired by this moment, thinking about what a machine can do with someone else's memories. In the same year, I got my first computer, and even though my computer was not an AI, I always remembered that there was a space inside it that was the mind of a machine. Then, of course, I eventually read Philip K. Dick, William Gibson, and many others, and they all opened up my mind from a science-fiction perspective.

You’ve partnered extensively with Nvidia to use its StyleGAN algorithm. What will you be exhibiting at Nvidia GTC in April?
[RA]
Firstly, I’m deeply appreciative of the support that Nvidia gave me during my journey, not only for this particular project. We’ve done many pioneering collaborations in the field of computer graphics and AI; I wouldn’t be where I am without this specific support. At Nvidia GTC, we will be unveiling an exciting new project, inspired by the combination of AI and neuroscience.

We are exploring the world’s largest neuroscientific data set from the Human Connectome Project, in collaboration with UCLA's Dr. Taylor Kuhn, and with incredible support from Siemens, whose sensors recorded all of the participants’ magnetic resonance imaging (MRI), electroencephalogram (EEG), and diffusion tensor imaging (DTI) data. We will be generating machine hallucinations from this enormous amount of information. It will be the world’s first iteration of letting AI speculate on the architecture of the human mind and its unseen connections, in the form of 3D-printed sculpture.

That sounds incredible. So you’re taking your AI inside the human mind, in the same way you let your AI 'dream' about New York City in the Machine Hallucination: New York project, where you 'fed' StyleGAN a 200 million-plus image dataset of NYC?
[RA]
Yes. And, as far as I know, that was the largest GAN ever trained on a specific concept such as the city of New York. It enabled the audience to use an interactive browser to virtually fly around the latent space and record their own journeys. It was truly inspiring to see the AI reconstruct the city in any season, at any time of day; just fascinating. As an artist, the AI was a perfect team member, a 'thinking brush,' delivering these moments to me, giving me forms and visuals and colors that I could never dream of on my own.

Talk us through the tech behind this.
[RA]
For Machine Hallucination: New York, we used StyleGAN2 running on an Nvidia DGX Station, the world’s fastest workstation for leading-edge AI research and development, with 500 TFLOPS of AI power. StyleGAN2 generated a model for the machine to process the archive, and the model was trained on subsets of the sorted images, creating embeddings in 4,096 dimensions. To understand this complex spatial structure visually, we utilized dimensionality-reduction algorithms, such as cuML UMAP, to project it into a navigable three-dimensional universe.
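That dimensionality-reduction step can be sketched in a few lines. This is a hypothetical illustration only: the team used GPU-accelerated cuML UMAP on real image embeddings, while here scikit-learn's PCA stands in as a simpler CPU substitute, and the 4,096-dimensional embeddings are random placeholders.

```python
# Sketch (not the artist's actual pipeline): reduce high-dimensional GAN
# embeddings to 3D coordinates that an audience could "fly" through.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Placeholder for image embeddings: 1,000 images, 4,096 dimensions each.
embeddings = rng.normal(size=(1000, 4096))

# cuML UMAP would go here in the GPU pipeline; PCA is the stand-in.
reducer = PCA(n_components=3)
points_3d = reducer.fit_transform(embeddings)  # one (x, y, z) per image

print(points_3d.shape)  # (1000, 3)
```

Each image then has a position in a navigable 3D "universe," which is what allows an interactive browser to record fly-through journeys.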

Working alongside an AI must be a compelling, and very different, experience compared to working with a human collaborator.
[RA]
I’ve been creating data universes since 2016, and the reason I enjoy working with AI is that I’m heavily inspired by latent space, an n-dimensional mathematical space, and by transforming that space into one we can perceive, fly through, or even step inside to follow a specific story. AI data sculptures and AI data paintings come from latent space, and the core concepts of our work explore the ideas and narratives around this. Thanks to these algorithms and the computational power, we can constantly research, develop, understand, and repeat the same process over and over until we get the perfect result that feels artistically compelling.

More recently, you've been getting the AI to ingest multiple instances of quantum mechanics theory in a mission to explore the nature of possible new worlds. How did this come about?
[RA]
I've been deeply interested in quantum mechanics for a while, and then saw the Alex Garland show Devs, which inspired me to consider Hugh Everett’s Many-Worlds Interpretation. Thanks to the Google AI Quantum team, we were able to examine the patterns of a quantum supremacy data set, try to bridge the gap, and ask: "If we are living in a world where machines are needed to understand many things, why not also use AI to navigate alternative dimensions?" To achieve this, we spent a significant amount of time and ultimately were able to modify StyleGAN with the adaptive discriminator augmentation (ADA) algorithm and feed it a noise distribution generated from the quantum supremacy data.
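The idea of feeding a GAN noise derived from quantum measurements, rather than the usual Gaussian samples, can be illustrated with a small sketch. Everything here is hypothetical: `latents_from_bitstrings` is an invented helper, and random bits stand in for real quantum-circuit measurement outcomes.

```python
# Sketch (assumptions, not the project's actual code): turn measured
# bitstrings into latent vectors that could replace a GAN's standard
# Gaussian noise input.
import numpy as np

rng = np.random.default_rng(42)

def latents_from_bitstrings(bitstrings, dim=512):
    # Map each bitstring of 0s and 1s to a centered latent vector:
    # 0 -> -1.0 and 1 -> +1.0, so the values straddle zero like the
    # N(0, 1) samples a GAN normally consumes.
    bits = np.asarray(bitstrings, dtype=np.float32)
    return (bits[:, :dim] - 0.5) * 2.0

# Placeholder "measurement" data: 8 bitstrings of 512 bits each.
samples = rng.integers(0, 2, size=(8, 512))
z = latents_from_bitstrings(samples)
print(z.shape)  # (8, 512), values in {-1.0, 1.0}
```

The point of the swap is that the generator's outputs then inherit structure from the quantum data's distribution instead of from pure Gaussian randomness.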

Quantum Memories, the output for this collaboration, was displayed at the National Gallery of Victoria, in Melbourne. Explain what it entailed.
[RA]
Quantum Memories utilizes Google AI’s most cutting-edge, publicly available quantum computation research data and algorithms to explore the possibility of a parallel world. These algorithms allow us to speculate alternative modalities inside the most sophisticated computer available, and create new quantum noise-generated datasets as building blocks of these modalities. The 3D visual piece is accompanied by an audio experience that is also based on quantum noise-generated data, offering an immersive experience that further challenges the notion of mutual exclusivity. It was an amazing journey to tap into the random fluctuations of quantum noise as a unique realm of possibilities and predictions. 

Some background on you. You were born in Istanbul, Turkey. What brought you to the US, and to Los Angeles, specifically?
[RA]
L.A. is the place of Blade Runner from my childhood. Later on, I was inspired by technology in my work, and I knew that Los Angeles is the home of very creative minds. The University of California, Los Angeles (UCLA), where I got my degree, is full of pioneers, so I was very fortunate to be able to train with the best in my field. Being in L.A. also means being close to the creative community and fairly close to the tech giants of Silicon Valley. So I find it a very fruitful space where art, science, and technology can naturally combine. It’s the home of cinema, the home of entertainment. I find it a city that can hold many dreams in one location.

Talking of L.A., I first came across your work not in a gallery or museum, but in the Beverly Center mall, and was enthralled by the viscous textures seemingly spilling out of the frame.
[RA]
The Beverly Center curatorial team specifically asked for a site-specific piece, so I was able to generate a whole new concept about the future of fashion. Using GANs, I imagined ever-changing patterns, forms, and structures that cannot exist in the physical world, pushing the boundaries of the imagination and transforming an existing artificial space with an extremely unconventional way of looking at fabric produced by generative algorithms.

You've referred to your public art as 'post-digital architecture.' Do you consider your work as part of futuristic responsive and/or sentient environments? 
[RA]
This is a speculation that’s been going on in my work since Archive Dreaming. When I augment a library or Frank Gehry’s Walt Disney Concert Hall, home of the Los Angeles Philharmonic, I have the same intention. I do believe that near-future architecture is beyond glass, steel, or concrete, and I do believe that machines will merge with spaces. But the big questions are: What will they remember? What will they learn? And what will they dream?

As we’re currently living under COVID-19, can you see a role for your in situ work in keeping us all somewhat sane?
[RA]
Yes, this really inspires me. The pure AI, neuroscience, and architecture speculation that we have been doing for the last five years has been leading us here, especially during COVID. I would be extremely delighted if the room I live in every single day had an emotional sense and could give me an intelligent response when the world around us is collapsing. Eventually, this will happen. The spaces themselves will become creative.

Finally, if this isn’t too out there, do you think there’s a way to map our own individual subconsciousness, or a collective consciousness, merged with multiple AIs, through your work?
[RA]
Incredible question. First of all, as humans, we still don’t know how our consciousness works. It’s still a big debate. [University of Oxford] Professor Sir Roger Penrose is thinking about consciousness in the form of quantum physics, while others, such as [University of Sussex] Professor Anil Seth, think reality is a controlled hallucination. I think if we can ever really understand what consciousness is, it will allow us to go beyond what we can do at this moment of humanity. 

If it were possible, where would you start in gathering that dataset? And what would you see as the final piece's purpose in existence?
[RA]
The data set will most likely come from an AI in a neuroscience project that is also completely engaged with the arts, because if you talk about consciousness, I think imagination has to be in the game. In fact, for consciousness, every single discipline in the world has to converge. To understand consciousness, we have to understand everything, and AI is the only way to achieve it; that’s for sure. But, before that, we have to understand what consciousness is, and that may be one of the most exciting challenges of AI’s journey in the next decade.

Refik Anadol will be speaking at Nvidia GTC on April 12, 2021. 


About S.C. Stuart

Contributing Writer


S. C. Stuart is an award-winning digital strategist and technology commentator for ELLE China, Esquire Latino, Singularity Hub, and PCMag, covering: artificial intelligence; augmented, virtual, and mixed reality; DARPA; NASA; US Army Cyber Command; sci-fi in Hollywood (including interviews with Spike Jonze and Ridley Scott); and robotics (real-life encounters with over 27 robots and counting).
