Ben Bogart


Watching (2001: A Space Odyssey) (2018)

This work is informed by an epistemological position in which subjects and objects are considered mutually constructive: the bounds and properties of objects are not separable from the subject actively involved in their construction. Stanley Kubrick’s “2001: A Space Odyssey” is ‘understood’ by a Subjective Machine through the construction of objects of perception (percepts). Percepts are an emergent result of the interaction between information in the world and the unsupervised machine learning algorithms that impose boundaries in that information. The generated images and sounds represent the machine’s subjective reconstruction of the source, built from these percepts. The discontinuity over time reflects the machine’s alien subjectivity and cues structural films constructed frame by frame.

This piece is situated in a larger body of work collectively titled “Watching and Dreaming”. Initiated in 2014, this series of works is the result of statistically oriented machine learning and computer vision algorithms attempting to understand popular cinematic depictions of Artificial Intelligence by deconstructing and reconstructing them. The machines’ understanding is manifest in their ability to recognize, and eventually predict, the structure of the films they watch. The images produced are the result of both the system’s projection of imaginary structure and the structure of the films themselves.

Each frame in the source is broken into components using mean shift segmentation (a form of unsupervised machine learning that imposes boundaries containing pixels deemed similar). This results in 30 to 45 million image segments for each film. To allow the machine a sense of the persistence of objects over time, these segments are grouped by colour and rough shape similarity and averaged into hundreds of thousands of percepts that serve as the visual vocabulary of the work; these percepts are the machine’s cognition of objects recognized in the source film. The final image is constructed by rendering each extracted image segment as its nearest (recognized) percept. The resulting ‘mental’ images embody the machine’s understanding of the film, not as a series of pixels, but as a collection of present objects. The sound goes through a similar process of segmentation, grouping and reconstruction. Works in this series have been shown in Surrey, Vancouver, Victoria, Toronto and Montreal, Canada; Berlin, Germany; and were included in the Lumen Prize long-list in 2017 and 2018.
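For readers curious how such a pipeline might be assembled, the sketch below walks through the three stages described above (mean shift segmentation, grouping of segments into percepts, and nearest-percept reconstruction) on a single frame. It is not the artist's implementation: segmentation here uses scikit-learn's MeanShift over pixel colour and position rather than the production system's segmenter, and the colour-and-shape features, cluster counts, and file names are assumptions chosen for illustration.

```python
import numpy as np
import cv2
from sklearn.cluster import MeanShift, MiniBatchKMeans, estimate_bandwidth


def mean_shift_segments(frame_bgr, scale=0.1):
    """Segment a heavily downscaled frame by mean shift clustering of
    (x, y, B, G, R) pixel features. Returns a label map and the small frame."""
    small = cv2.resize(frame_bgr, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_AREA)
    h, w = small.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.column_stack([xs.ravel(), ys.ravel(),
                             small.reshape(-1, 3)]).astype(np.float32)
    bandwidth = estimate_bandwidth(feats, quantile=0.05, n_samples=2000)
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(feats)
    return labels.reshape(h, w), small


def segment_features(label_map, image):
    """Summarise each segment by mean colour plus rough shape cues
    (log-area and bounding-box aspect ratio)."""
    feats, seg_ids = [], []
    for seg_id in np.unique(label_map):
        mask = label_map == seg_id
        ys, xs = np.nonzero(mask)
        mean_colour = image[mask].mean(axis=0)
        area = float(mask.sum())
        aspect = (np.ptp(xs) + 1.0) / (np.ptp(ys) + 1.0)
        feats.append(np.concatenate([mean_colour, [np.log(area), aspect]]))
        seg_ids.append(seg_id)
    return np.asarray(feats, dtype=np.float32), seg_ids


def build_percepts(segment_feats, n_percepts):
    """Cluster segment descriptors into 'percepts'; each cluster centre is
    the average colour and shape of its member segments."""
    km = MiniBatchKMeans(n_clusters=n_percepts, n_init=3, random_state=0)
    km.fit(segment_feats)
    return km


def render_with_percepts(label_map, image, seg_feats, seg_ids, percepts):
    """Repaint each segment with the mean colour of its nearest percept,
    producing a 'mental image' of the frame."""
    out = np.zeros_like(image)
    nearest = percepts.predict(seg_feats)
    for seg_id, percept_idx in zip(seg_ids, nearest):
        colour = percepts.cluster_centers_[percept_idx][:3]
        out[label_map == seg_id] = np.clip(colour, 0, 255).astype(np.uint8)
    return out


if __name__ == "__main__":
    frame = cv2.imread("frame_0001.png")        # hypothetical single frame
    labels, small = mean_shift_segments(frame)
    feats, ids = segment_features(labels, small)
    percepts = build_percepts(feats, n_percepts=min(64, len(ids)))
    cv2.imwrite("reconstruction.png",
                render_with_percepts(labels, small, feats, ids, percepts))
```

The installation works at a very different scale than this toy example: the percept vocabulary there is averaged from tens of millions of segments across an entire film rather than from the handful of clusters used here to keep the sketch readable.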
