The series, titled “Portraits of Imaginary People,” explores the latent space of human faces by training a neural network to imagine, and then depict, portraits of people who don’t exist. To do so, many thousands of photographs of faces taken from Flickr are fed to a type of machine-learning program called a Generative Adversarial Network (GAN). GANs pit two neural networks against each other in an adversarial game: one (the “Generator”) tries to produce increasingly convincing output, while the other (the “Discriminator”) tries to learn to distinguish real photographs from the artificially generated ones. At first, both networks are poor at their respective tasks. But as the Discriminator learns to tell fake from real, it keeps the Generator on its toes, pushing it to produce ever more convincing examples; the Discriminator in turn must sharpen its judgment to keep up. With time, as the two adversaries try to outwit each other, the generated images become increasingly realistic. The images you see here are thus a result of the rules and internal correlations the neural networks learned from the training images.
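The adversarial loop described above can be sketched in miniature. The toy example below is purely illustrative and is not the portrait model itself: the “real” data are 1-D samples from a Gaussian, the Generator is a linear map from noise, and the Discriminator is a single logistic unit; all hyper-parameters are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: g(z) = w_g * z + b_g  (maps noise to "samples")
w_g, b_g = 1.0, 0.0
# Discriminator: d(x) = sigmoid(w_d * x + b_d)  (probability x is real)
w_d, b_d = 0.1, 0.0

lr = 0.01
for step in range(2000):
    # --- Discriminator update: push d(real) toward 1, d(fake) toward 0 ---
    real = rng.normal(4.0, 1.25, size=32)   # "real" data: N(4, 1.25)
    z = rng.normal(size=32)
    fake = w_g * z + b_g
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    # gradients of the binary cross-entropy loss w.r.t. w_d, b_d
    grad_w_d = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_b_d = np.mean(d_real - 1) + np.mean(d_fake)
    w_d -= lr * grad_w_d
    b_d -= lr * grad_b_d

    # --- Generator update: push d(fake) toward 1, i.e. fool the Discriminator ---
    z = rng.normal(size=32)
    fake = w_g * z + b_g
    d_fake = sigmoid(w_d * fake + b_d)
    # chain rule through the Discriminator into the Generator's parameters
    grad_w_g = np.mean((d_fake - 1) * w_d * z)
    grad_b_g = np.mean((d_fake - 1) * w_d)
    w_g -= lr * grad_w_g
    b_g -= lr * grad_b_g

# After training, generated samples drift toward the real distribution's mean.
samples = w_g * rng.normal(size=1000) + b_g
print(float(np.mean(samples)))
```

The same tug-of-war drives the portrait series: each Discriminator improvement changes the gradient the Generator receives, so neither network can stand still.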
The work here was produced using a custom two- to three-stage GAN pipeline that reaches 4K × 4K pixels, suitable for printing at 2 ft × 2 ft. This is, to date, among the highest-resolution GAN output achieved. The method, described in a blog article below, uses semantic guidance of the high-resolution stages to feed context to the GAN generator network.
The work has been exhibited at Ars Electronica 2017 in Linz and at Out Of Sight 07 in Seattle, and is currently on display at the New Museum in Karuizawa, Japan.
The Dreams of Imaginary People animation can be found here.