Computed Curation is a photobook created by a computer. Taking the human editor out of the loop, it uses machine learning and computer vision tools to curate a series of photos from an archive of pictures.
Considering both image content and composition — but through the sober eyes of neural networks, vectors and pixels — the algorithms uncover unexpected connections and interpretations that a human editor might have missed.
Machine-learning-based image recognition tools are already adept at recognizing the kinds of images they were trained on (umbrella, dog on a beach, car), but quickly expose their flaws and biases when challenged with more complex input. In Computed Curation, these flaws surface in often bizarre and sometimes poetic captions, tags and connections. Moreover, by urging the viewer to constantly speculate on the logic behind its arrangement, the book teaches them to see the world through the eyes of an algorithm.
To browse the book, open the web version here or download the PDF.
Process
The book features 207 photos taken between 2013 and 2017. Metadata is collected through Google’s Cloud Vision API (tags, colors), Microsoft’s Cognitive Services API (captions) and Adobe Lightroom (date, location). Composition is analyzed using histograms of oriented gradients (HOGs).
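The project's exact HOG implementation isn't specified, but the idea behind using it for composition can be sketched with a toy descriptor: divide the image into cells and, per cell, histogram gradient orientations weighted by gradient magnitude. A minimal NumPy version (cell count, bin count, and normalization are illustrative assumptions, not the project's parameters):

```python
import numpy as np

def hog_features(gray, cells=4, bins=9):
    """Toy histogram-of-oriented-gradients descriptor.

    A simplified sketch of HOG-style composition features; the
    project's actual implementation and parameters are not specified.
    """
    # Image gradients via finite differences
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)

    h, w = gray.shape
    ch, cw = h // cells, w // cells
    feats = []
    for i in range(cells):
        for j in range(cells):
            m = mag[i*ch:(i+1)*ch, j*cw:(j+1)*cw].ravel()
            a = ang[i*ch:(i+1)*ch, j*cw:(j+1)*cw].ravel()
            # Magnitude-weighted orientation histogram for this cell
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            hist /= hist.sum() + 1e-9  # normalize per cell
            feats.append(hist)
    return np.concatenate(feats)  # cells * cells * bins values

# Example: a synthetic image with a single vertical edge
img = np.zeros((64, 64))
img[:, 32:] = 1.0
vec = hog_features(img)
print(vec.shape)  # 4 * 4 cells * 9 bins = 144 values
```

Descriptors like this, concatenated with the API tags and colors, are how a photo's composition becomes part of the per-photo feature vector.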
Considering more than 850 variables for each photo, a t-SNE algorithm arranges the pictures in two-dimensional space according to similarities in content, color and composition. A genetic TSP (traveling salesman problem) algorithm computes an approximately shortest path through the arrangement, thereby defining the page order.
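The page-ordering step can be illustrated without the genetic solver: given the 2-D coordinates produced by t-SNE, any open-path TSP heuristic yields a reading order in which neighbouring pages are visually similar. A stdlib-only sketch using a greedy nearest-neighbour heuristic (a simpler stand-in for the genetic algorithm, whose details the source does not give):

```python
import math
import random

def path_length(points, order):
    """Total length of the open path visiting points in the given order."""
    return sum(math.dist(points[order[k]], points[order[k + 1]])
               for k in range(len(order) - 1))

def greedy_path(points):
    """Nearest-neighbour heuristic for a short open path through 2-D points.

    A stand-in for the project's genetic TSP solver: repeatedly jump to
    the closest unvisited point, so consecutive pages stay similar.
    """
    unvisited = set(range(len(points)))
    current = 0
    order = [current]
    unvisited.remove(current)
    while unvisited:
        current = min(unvisited,
                      key=lambda j: math.dist(points[current], points[j]))
        order.append(current)
        unvisited.remove(current)
    return order

# Example: a random "embedding" standing in for t-SNE output
random.seed(0)
pts = [(random.random(), random.random()) for _ in range(30)]
order = greedy_path(pts)
print(len(order), len(set(order)))  # 30 30 — every photo appears exactly once
```

A genetic solver improves on this by mutating and recombining candidate orderings, but the output plays the same role: a permutation of the photos that becomes the book's page sequence.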
For details on the project, please see this page.
Video and Images Courtesy of Philipp Schmitt