NeuralFunk is an experiment in using deep learning for sound design: a track made entirely from samples that were synthesized by neural networks. It is not music made by AI, but music made using AI as a tool for exploring new forms of creative expression.
Two types of neural networks were used to create the samples: a VAE trained on spectrograms and a WaveNet, which could additionally be conditioned on spectrogram embeddings from the VAE. Together, these networks provided numerous tools for generating new sounds, from reimagining existing samples or combining multiple samples into unique sounds, to dreaming up entirely new sounds, completely unconditioned.
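The data flow described above can be sketched in a few lines of NumPy. Everything here is illustrative rather than the project's actual code: the stand-in "encoder" is a fixed random projection rather than a trained VAE, and the final array merely shows how a spectrogram embedding can be broadcast across timesteps as a global conditioning input, in the spirit of WaveNet's global conditioning.

```python
import numpy as np

def stft_magnitude(signal, n_fft=256, hop=128):
    """Magnitude spectrogram via a simple STFT with a Hann window."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)  # shape: (n_frames, n_fft // 2 + 1)

rng = np.random.default_rng(0)
audio = rng.standard_normal(16000)   # 1 second of noise at 16 kHz
spec = stft_magnitude(audio)         # (124, 129) spectrogram frames

# Toy "encoder": a fixed random projection standing in for the VAE's
# encoder network, mapping the whole spectrogram to a 64-d embedding.
W = rng.standard_normal((spec.size, 64)) * 0.01
embedding = spec.flatten() @ W       # (64,) latent embedding

# Global conditioning: repeat the embedding at every audio timestep
# and concatenate it with the raw waveform input channel.
cond = np.broadcast_to(embedding, (len(audio), 64))
net_input = np.concatenate([audio[:, None], cond], axis=1)
print(net_input.shape)               # (16000, 65)
```

In the real pipeline the embedding would come from a trained VAE encoder and the concatenated tensor would feed a dilated-convolution WaveNet stack; this sketch only shows the shapes involved.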
The resulting samples were then used to produce the final track. The title NeuralFunk is a nod to the drum & bass subgenre Neurofunk, which is what I initially had in mind. Over the course of the project, however, the track turned into something more experimental, matching the experimental nature of the sound design process itself.
A full description of how the track was made can be found here: https://towardsdatascience.com/neuralfunk-combining-deep-learning-with-sound-design-91935759d628