To create this piece, a Long Short-Term Memory (LSTM) network was trained to predict audio one sample at a time. The model is shown several seconds of input audio and asked to predict the next single sample. The input is treated as a FIFO queue: the predicted sample is appended to its end and the oldest sample is popped off the front. By saving only the predicted samples, the model generates arbitrarily long passages of new audio that are heavily colored by the input but identical to no part of it. In this piece the input data is a short passage played on the guitar. That material is accompanied by the audio the model produced: the predicted audio is presented unchanged aside from some light filtering, but arranged in overlapping layers to form a full accompaniment. A more detailed description of the algorithm is available at apfalz.github.io/rnn/rnn_demo.
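The generation loop described above can be sketched as follows. This is a minimal illustration, not the piece's actual code: `predict_next_sample` is a hypothetical stand-in for the trained LSTM (here it just returns a damped copy of the most recent sample so the sketch runs on its own), and the window length and sample counts are arbitrary.

```python
from collections import deque

import numpy as np

# Hypothetical stand-in for the trained LSTM's one-sample prediction.
# A real model would map the whole window to the next audio sample.
def predict_next_sample(window):
    return 0.99 * window[-1]

def generate(seed_audio, num_samples):
    """Autoregressive sample-by-sample generation over a FIFO window."""
    # FIFO queue: maxlen makes append() pop the oldest sample automatically.
    window = deque(seed_audio, maxlen=len(seed_audio))
    output = []
    for _ in range(num_samples):
        sample = predict_next_sample(np.asarray(window))
        output.append(sample)   # keep only the predicted samples
        window.append(sample)   # feed the prediction back in at the end
    return np.array(output)

# Toy "input audio": a 440 Hz sine at 44.1 kHz standing in for the guitar passage.
seed = np.sin(2 * np.pi * 440 * np.arange(1024) / 44100)
new_audio = generate(seed, num_samples=4096)
print(len(new_audio))  # number of newly predicted samples
```

Because only the predictions are kept, the output can be made arbitrarily long regardless of the seed's length, which is what allows the short guitar passage to yield a full accompaniment.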