Presentation of Music Transcription Modelling and Composition Using Deep Learning

Here are the slides accompanying our presentation at the 1st Conference on Computer Simulation of Musical Creativity.

The three papers of the session all apply deep recurrent neural networks to the modelling of high-level music representations, e.g., sequences of pitches and durations, sequences of chords, and sequences of distilled ABC tokens.
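To make the shared idea concrete, here is a minimal sketch (not any of the session's actual models; the vocabulary, sizes, and weights are illustrative assumptions) of how a recurrent network models such a token sequence: at each step the hidden state is updated from the current token, and the network emits a probability distribution over the next token.

```python
import numpy as np

# Toy vocabulary of ABC-notation tokens (illustrative assumption only).
vocab = ["|", ":", "A", "B", "c", "d", "2", "z"]
idx = {t: i for i, t in enumerate(vocab)}

rng = np.random.default_rng(0)
V, H = len(vocab), 16             # vocabulary size, hidden size
Wxh = rng.normal(0, 0.1, (H, V))  # input-to-hidden weights
Whh = rng.normal(0, 0.1, (H, H))  # hidden-to-hidden weights
Why = rng.normal(0, 0.1, (V, H))  # hidden-to-output weights

def step(h, token):
    """One vanilla-RNN step: update hidden state, return next-token probs."""
    x = np.zeros(V)
    x[idx[token]] = 1.0                 # one-hot encode the current token
    h = np.tanh(Wxh @ x + Whh @ h)      # recurrent state update
    logits = Why @ h
    p = np.exp(logits - logits.max())   # softmax over the vocabulary
    return h, p / p.sum()

h = np.zeros(H)
for tok in ["|", "A", "2", "B", "|"]:   # a toy token sequence
    h, probs = step(h, tok)

print(probs.shape)           # a distribution over all 8 tokens
print(round(probs.sum(), 6))
```

Training such a model amounts to adjusting the three weight matrices so that the emitted distribution assigns high probability to the token that actually comes next in the corpus; sampling from the trained distribution step by step then generates new sequences.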

We had the benefit of presenting last of the three papers, which meant we did not have to focus on the machine learning particulars of how these models are trained, and could instead focus on the application of our models and on interesting questions about evaluating such systems.

Our code and data are available here:

