Deep lobotomies can be so musical

The holiday break finally gave me the time to recode the transcription generation of folk-rnn models. This made it 100 times faster (now about 0.4 seconds to generate an entire transcription), which makes a model far more amenable to abuse, or “creative experimentation”. For instance, here’s a short study I composed from a transcription created by feeding the network a linear combination of two one-hot vectors (one corresponding to the sampled output token, the other to a random one):

There’s no scientific motivation for the way I changed the input, but I like how the network responded in a funny way, producing something that sounds distinctly unfolk.
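The input tweak above can be sketched in a few lines of NumPy. This is a minimal illustration, not folk-rnn's actual code: the vocabulary size, the mixing weight, and the function names are all my assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE = 137   # assumed token-vocabulary size; folk-rnn's differs
ALPHA = 0.5        # assumed mixing weight between the two one-hot vectors

def one_hot(index, size):
    """Return a one-hot row vector with a 1 at `index`."""
    v = np.zeros(size)
    v[index] = 1.0
    return v

def mixed_input(sampled_token, size=VOCAB_SIZE, weight=ALPHA):
    """Blend the one-hot of the token just sampled with the
    one-hot of a randomly chosen token, instead of feeding the
    pure one-hot back into the network at the next step."""
    random_token = rng.integers(size)
    return (weight * one_hot(sampled_token, size)
            + (1 - weight) * one_hot(random_token, size))

x = mixed_input(42)
```

The resulting vector still sums to one, so it looks like a soft distribution over tokens rather than a hard choice, which is one way to think about why the network drifts somewhere unfolk-like.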

More experiments are certainly to come.
