This video explores and expands upon the technique popularly known as “deep dream,” an iterative process for optimizing the pixels of an image so as to obtain a desired activation state in a trained convolutional network. It primarily experiments with the dynamics of feedback-generated deep dream videos, wherein each frame is initialized from the previous one. By gating (or “masking”) the pixel gradients of multiple channels and blending them according to pre-determined spatial patterns, while simultaneously distorting the input canvas, one can achieve novel aesthetics.
A number of the strategies are directly inspired by Google’s original Deep Dream implementation, particularly the work of Mike Tyka, who first experimented with feedback, canvas distortion (zooming), and mixing the gradients of multiple channels. The trained network is Google’s Inception model (GoogLeNet), the network featured in the original Inceptionism post. The workflow for generating these videos is under continuous development; planned improvements include a more general canvas-distortion function and improved masking derived from source images.
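For concreteness, below is a minimal sketch of the feedback loop described above, written in PyTorch with torchvision’s GoogLeNet standing in for the Inception network. The choice of layer (`inception4c`), the channel indices, the gating masks, and the step counts are illustrative assumptions, not the settings used in the video.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()

# Capture an intermediate layer's activations with a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(feat=out))

def dream_step(img, masks, channels, lr=0.05):
    """One gradient-ascent step: the pixel gradient of each channel's mean
    activation is gated by a spatial mask, then the gated gradients are summed."""
    img = img.detach().requires_grad_(True)
    model(img)
    feat = activations["feat"]
    total = torch.zeros_like(img)
    for mask, ch in zip(masks, channels):
        g = torch.autograd.grad(feat[0, ch].mean(), img, retain_graph=True)[0]
        total += mask * g / (g.abs().mean() + 1e-8)  # gate and normalize
    return (img + lr * total).detach()

def zoom(img, scale=1.01):
    """Canvas distortion: scale up slightly and center-crop back, so the
    feedback loop drifts 'into' the image frame by frame."""
    _, _, h, w = img.shape
    big = F.interpolate(img, scale_factor=scale, mode="bilinear",
                        align_corners=False)
    top, left = (big.shape[2] - h) // 2, (big.shape[3] - w) // 2
    return big[..., top:top + h, left:left + w]

# Feedback loop: each frame starts from the zoomed previous frame.
h, w = 224, 224
img = torch.rand(1, 3, h, w)
ramp = torch.linspace(0, 1, w).expand(h, w)  # left-to-right ramp
masks = [ramp, 1.0 - ramp]                   # complementary gating patterns
frames = []
for _ in range(100):
    for _ in range(10):                      # inner optimization steps per frame
        img = dream_step(img, masks, channels=[10, 77])  # arbitrary channels
    img = zoom(img).clamp(0, 1)
    frames.append(img)
```

The complementary masks blend the two channels’ “dreams” across the canvas, while the small per-frame zoom keeps the feedback from settling into a fixed point.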
Video Courtesy of Gene Kogan