4104 frames (2020) is an experimental video exploring nature, architecture and life through the lens of a machine learning model trained only to understand faces. It is an attempt to reveal the inner workings of a traditionally ‘black-box’ machine learning model, producing scenes of human faces deteriorating into environments as part of a delirious exploration of the farthest edges of the model’s latent space. It also demonstrates a novel use of generative adversarial networks (GANs) for video manipulation.
Video frames are fed through an inverted generative adversarial network (StyleGAN2) pre-trained on a dataset of faces, yielding latent representations of the input frames. These latent representations are then passed back through the GAN’s generator, producing heavily manipulated versions of the original frames that contain elements of the original footage alongside facial features and textures that remind the viewer of the model’s intended purpose. This approximate identity operation can be distorted by adjusting the loss function and the reference images used when mapping frames to the latent space, which directly controls the level of ‘obscurity’ in the video. The result is greater creative freedom, transforming a ‘black-box’ generative model into a creative video manipulation tool.
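The pipeline above — projecting a frame into latent space by minimising a reconstruction loss, then optionally biasing that loss toward a reference — can be sketched in miniature. This is not the actual StyleGAN2 projector; a tiny linear map stands in for the pre-trained generator, and the names `G`, `invert_frame`, `ref_latent` and `ref_weight` are illustrative assumptions, but the optimisation loop mirrors how adjusting the loss term controls how far the inversion drifts from a faithful reconstruction.

```python
import numpy as np

# Toy stand-in "generator": a fixed random linear map from latent space to
# pixel space. A real pipeline would use a pre-trained StyleGAN2 generator.
rng = np.random.default_rng(0)
LATENT_DIM, PIXELS = 8, 32
W = rng.normal(size=(PIXELS, LATENT_DIM)) / np.sqrt(PIXELS)

def G(z):
    """Map a latent code to a (flattened) image."""
    return W @ z

def invert_frame(target, ref_latent=None, ref_weight=0.0,
                 steps=500, lr=0.1):
    """Project a target frame into latent space by gradient descent.

    Minimises ||G(z) - target||^2, optionally adding a pull toward a
    reference latent -- the knob the text describes as adjusting the
    loss function to control 'obscurity'.
    """
    z = np.zeros(LATENT_DIM)
    for _ in range(steps):
        residual = G(z) - target
        grad = 2 * W.T @ residual            # gradient of the L2 pixel loss
        if ref_latent is not None:
            grad += 2 * ref_weight * (z - ref_latent)
        z -= lr * grad
    return z

# Faithful inversion: recover a frame that lies in the generator's range.
frame = G(rng.normal(size=LATENT_DIM))
z_hat = invert_frame(frame)
recon_error = np.linalg.norm(G(z_hat) - frame)

# Distorted inversion: bias the latent toward a hypothetical "face" latent,
# trading reconstruction fidelity for the model's own facial imagery.
face_latent = rng.normal(size=LATENT_DIM)
z_biased = invert_frame(frame, ref_latent=face_latent, ref_weight=0.5)
```

Raising `ref_weight` pulls the recovered latent further from a faithful reconstruction and deeper into face-like regions of the latent space, which is the creative dial the work exploits.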