Deep Meditations (2018)
A meditation on life, nature, the universe and our subjective experience of it. A deep dive into – and controlled exploration of – the inner world of an artificial neural network trained on everything: the world, the universe; on art, life, love, faith, ritual, god.
‘Deep Meditations’ is a series of works in different formats – including a 1-hour film presented as an immersive, meditative, multi-channel video and sound installation – wavering on the borders of abstract and representational, photo-real and painterly. It is a continuation and merger of both the ‘Learning to See’ and ‘Learning to Listen’ series of works, using state-of-the-art machine learning algorithms as a means of reflecting on ourselves and how we make meaning. It is intended both as a piece for introspection and self-reflection – a mirror to ourselves, our own mind and how we make sense of the world – and as a window into the mind of the machine as it tries to make sense of its observations in its own computational way. But there is no boundary between the mirror and the window; it’s impossible to separate the two, for the very act of looking through this window is projecting ourselves through it.
The piece is a slow journey through the imagination of a machine which has been trained on everything – literally everything. Images tagged ‘everything’ were scraped from the popular photo-sharing website Flickr, along with images tagged world, universe, space, mountains, oceans, flowers and so on, as well as more abstract, subjective concepts like art, life, love, faith, ritual and god. What do these labels mean? What do they look like? Do they have a universal, objective aesthetic? Most likely not, but the network is learning what the Internet – at least a small corner of the Internet – thinks they represent.
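For the curious, the sketch below shows one way such tag-based image collection might look using the public Flickr REST API. It is an illustrative assumption, not the pipeline actually used for the piece; the API key, tag list and download handling are all placeholders.

```python
# Illustrative sketch only (not the artist's actual pipeline): gather photos
# by tag via Flickr's public REST API. API_KEY is a placeholder.
import requests

API_URL = "https://api.flickr.com/services/rest/"
API_KEY = "YOUR_API_KEY"  # placeholder

def search_by_tag(tag, per_page=100, page=1):
    """Return photo records tagged with `tag`, each including a direct image URL if available."""
    params = {
        "method": "flickr.photos.search",
        "api_key": API_KEY,
        "tags": tag,
        "extras": "url_c",        # ask Flickr to include a medium-size image URL
        "per_page": per_page,
        "page": page,
        "format": "json",
        "nojsoncallback": 1,
    }
    resp = requests.get(API_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["photos"]["photo"]

# Example tag list drawn from the text above; real collection would page through many results.
for tag in ["everything", "world", "universe", "art", "life", "love", "faith", "ritual", "god"]:
    for photo in search_by_tag(tag):
        url = photo.get("url_c")
        if url:
            print(tag, url)  # a real scraper would download `url` into a per-tag folder here
```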
Using custom techniques and tools, precise journeys are meticulously crafted through the high-dimensional learned internal space of the neural networks to construct these particular sequences of images and sounds – a controlled exploration of the inner world of the network. Despite the diversity of the dataset, the neural networks are given no labels for anything they are exposed to. They are not provided the semantic information needed to distinguish between different categories of images or sounds; between small and large, microscopic and galactic, organic and human-made. Without any of this semantic context, the network analyses and learns purely on aesthetics. Swarms of bacteria merge with clouds of nebulae; oceanic waves become mountains; flowers become sunsets; blood cells become technical illustrations.
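As a rough illustration of what such a journey through a learned latent space might look like in code – not the artist's custom tooling – the sketch below interpolates spherically between hand-picked latent keyframes; each intermediate latent would be decoded into an image frame by a trained generative model, represented here by a hypothetical `generator`.

```python
# Minimal sketch of a latent-space "journey": spherical interpolation (slerp)
# between chosen latent keyframes. `generator` is a hypothetical stand-in for
# a trained generative model; it is not the tooling used for the piece.
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors at parameter t in [0, 1]."""
    z0n, z1n = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * z0 + t * z1  # vectors are nearly parallel; fall back to linear
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def journey(keyframes, steps_per_segment):
    """Yield a smooth sequence of latents passing through each keyframe in order."""
    for z_a, z_b in zip(keyframes[:-1], keyframes[1:]):
        for step in range(steps_per_segment):
            yield slerp(z_a, z_b, step / steps_per_segment)

rng = np.random.default_rng(0)
latent_dim = 512  # assumed latent dimensionality
keyframes = [rng.standard_normal(latent_dim) for _ in range(4)]  # stand-ins for hand-picked points

for z in journey(keyframes, steps_per_segment=30):
    # frame = generator(z)  # hypothetical: decode each latent into one video frame
    pass
```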
And then as we look back upon the carefully constructed images bordering between abstract and representational, we project ourselves back onto them, we invent stories, we see things not as they are, but as we are.
http://deepmeditations.ai