The holiday break finally gave me the time to rewrite the transcription-generation code for the folk-rnn models. It is now about 100 times faster (roughly 0.4 seconds to generate an entire transcription), which makes a model far more amenable to abuse, or "creative experimentation". For instance, here's a short study I composed from a transcription generated by feeding the network a linear combination of two one-hot vectors (one corresponding to the sampled output token, the other to a random one):
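In case the input perturbation is unclear, here is a minimal sketch of the idea: instead of feeding back a pure one-hot vector for the sampled token, blend it with a one-hot vector for a random token. The function name, the mixing weight `alpha`, and the vocabulary size are my own illustrative choices, not folk-rnn's actual API.

```python
import numpy as np

def blended_one_hot(token_idx, vocab_size, alpha=0.7, rng=None):
    """Linear combination of two one-hot vectors: the sampled token
    (weight alpha) and a uniformly random token (weight 1 - alpha).
    Illustrative sketch only; not the folk-rnn interface."""
    rng = rng or np.random.default_rng()
    v = np.zeros(vocab_size)
    v[token_idx] = alpha
    # add the remaining mass on a random token's position
    v[rng.integers(vocab_size)] += 1.0 - alpha
    return v

# At each sampling step, this blended vector would be fed back
# into the network in place of the usual pure one-hot input.
```

The weights still sum to one, so the input stays on the simplex the network saw during training, just off its corners.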
There's no scientific motivation for the way I changed the input, but I like how the network responded: the result sounds distinctly unfolk.
More experiments are certainly to come.