The Third AI (2019) is an exploration of a machine’s interpretation of the TV series Killing Eve (AMC Networks), looking into the potential of machine-learning tools to discover new ideas and approaches in filmmaking.
Human creativity is often the result of making random connections from unexpected sources of inspiration. These machine-generated outputs, however, all derive from a finite, defined dataset. The intention of this project is not to automate creativity, or to replace humans — rather, it is to investigate machine-generated space as a source of inspiration, and a method of exploration, for creating narratives. A machine offers a fundamentally different way of seeing — one where everything is represented numerically. Can we use this way of seeing as a tool in our creative process?
I used StyleGAN to generate visuals, WaveNet to generate music based on the show’s soundtrack, and GPT-2 to generate dialogue based on the subtitles. The results are a curated amalgamation of these outputs: artificial vignettes created by exploring numeric, machine-generated space.
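To give a concrete sense of what "exploring numeric, machine-generated space" can mean in practice: StyleGAN maps random latent vectors to images, so walking between two latents yields a smooth visual transition. The sketch below is illustrative only (not the project's actual code) and uses spherical interpolation between two 512-dimensional latents, StyleGAN's default latent size; each intermediate vector would be fed to a trained generator to render a frame.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors —
    a common way to walk smoothly through a GAN's latent space."""
    z0n = z0 / np.linalg.norm(z0)
    z1n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return z0  # vectors are (nearly) parallel; nothing to interpolate
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Two random points in a 512-dimensional latent space
rng = np.random.default_rng(0)
z_a = rng.standard_normal(512)
z_b = rng.standard_normal(512)

# Ten latents along the path from z_a to z_b; each one would
# be passed to the generator to produce one image in a sequence
frames = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 10)]
```

Curating outputs then becomes a matter of wandering this numeric space and keeping the vectors whose images resonate.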
It is also worth mentioning the time taken to train — a total of 7w 6d 22h 10m 8s — which includes failed experiments, re-runs with small modifications, and so on. It is a resource-hungry endeavor, and access to that kind of compute is still limited. It was made possible for me thanks to NYU’s High Performance Computing.
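For scale, that duration converts to roughly 1,342 hours of compute time (a quick back-of-the-envelope conversion, not a figure from the original write-up):

```python
# Total wall-clock training time reported: 7w 6d 22h 10m 8s
weeks, days, hours, minutes, seconds = 7, 6, 22, 10, 8

# Fold the units down into seconds, then back up into hours
total_seconds = (((weeks * 7 + days) * 24 + hours) * 60 + minutes) * 60 + seconds
total_hours = total_seconds / 3600

print(total_seconds)  # → 4831808
print(round(total_hours, 1))  # → 1342.2
```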
This project was made possible by AMC Networks, NYC Media Lab, and NYU ITP, as part of a 12-week project exploring the future of storytelling and synthetic media.