I am an Associate Professor in the Speech, Music and Hearing Division of the School of Electrical Engineering and Computer Science at KTH Royal Institute of Technology, Sweden. Before that, I was a Lecturer in Digital Media at the Centre for Digital Music, School of Electronic Engineering and Computer Science, Queen Mary University of London.
My research interests include digital signal processing for sound and music signals, machine listening, evaluation, and algorithmic composition. I currently have an ERC Consolidator Grant: MUSAiC: Music at the Frontier of Artificial Creativity and Criticism (ERC-2019-COG No. 864189). Here’s my Google Scholar page. Here’s my SoundCloud page. Here’s my YouTube channel. Here are some of my caricatures. I also used to paint with watercolours while running.
Hello there. It seems you and I have had a nearly identical thought process regarding RNNs and large quantities of ABC files. I took a similar approach earlier this year, but restricted the training set to reels, and also included some of the other large databases (Norbeck, etc.). After a bit of tweaking I achieved some interesting results (along with a fair amount of rubbish). Here’s a friend of mine playing one RNN composition that took his fancy: https://youtu.be/hmvbr8U5yb0. I’ve also been experimenting with Swedish music, and with moving away from ABC to a more easily parsed music format (with mixed results). It would be good to chat about our approaches. Cheers, Ben
Hi Ben. Thanks for the comment. When I was learning the melodeon in 2009, I was a regular contributor to the “Tune of the Month” at http://forum.melodeon.net/. I remember GbHandlebar because he contributed these crazy videos. (My contributions are here: https://www.youtube.com/channel/UCL-dVzMfnJAbIwKLF2-mnHw) Funny that he is your friend!
Are you in London?
I’m not far from London, in Surrey. Cool that we’ve both taken a very similar approach, and with a similar genre of music. If you’re still looking into this, I’d be interested in sharing some of my results and discussing how this approach could be improved further. Drop me an email if you’d like to talk further.
Hi there, can you take a look at my page, https://soundcloud.com/alejandro-ruiz-218, and tell me what you think? It’s kind of like what you are doing, but for pop-rock songs and without Magenta.
Hi – I have been working on a similar concept using ABC collections to analyse and produce new tunes.
My work so far has been solely on jigs, and is producing tunes very well.
Is there a forum for people working on RNN/autogen music where we can share ideas etc?
Example of my jigcreator output here: https://soundcloud.com/user-450231867/plattedlegsind
Hi Colin. Nice jig! What do you play? We are creating a forum: https://themachinefolksession.org/ It’s in beta right now, but feel free to give it a try!
Thanks Bob.
I mostly play mandolin with a bit of banjo when needed.
I’ll check the forum out later today.
Hey, I wanted to create some folk music with the GitHub code, but I could not figure out which Python script to use (when setting up a conda env), which requirements to install, how to train, how to generate, or how to tell which commit corresponds to versions 1, 2, and 3.
I know that you only use the code and are not a developer, but I hope you can help me out with that information; I also couldn’t find any explanation. Thanks in advance.
Try this: https://folkrnn.org/
I have it all set up (conda, Python, etc.); I just need a short explanation of how to use the code. Why is there no information anywhere? I’m doing this as a hobby, so just using a webpage that already has it all built in is not an option for me :p
Try this. Once you have activated your Anaconda environment, cd to the folk-rnn source directory, and run: python train_rnn.py config5 data/allabcwrepeats_parsed
Thanks, I managed to create some tunes, but the training did not work; I get a TypeError: “TypeError: ‘numpy.float64’ object cannot be interpreted as an index”. Also, the tunes I generate sound very different from those at http://www.eecs.qmul.ac.uk/~sturm/research/RNNIrishTrad/index.html.
The really nice fancy beat is missing; my tunes sound more like the ones from https://folkrnn.org/. Is the training data different? Or is it mixing two tunes together? I’m so interested in figuring it out. Is there a better place than this blog to ask these questions? Sorry if you don’t like to see it here.
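For anyone hitting the same exception: this TypeError usually means a NumPy float is being passed where an integer size or index is required, which older Theano-era code can trigger on newer NumPy versions. A minimal sketch of the failure and the usual fix, assuming only NumPy (the variable name here is illustrative, not from the folk-rnn source):

```python
import numpy as np

# A float-valued count, e.g. computed as num_samples / batch_size
n_batches = np.float64(5.0)

try:
    np.zeros(n_batches)  # newer NumPy refuses a float as a shape argument
except TypeError as e:
    print(e)  # e.g. "'numpy.float64' object cannot be interpreted as an integer"

# The usual fix: cast explicitly to a Python int before using it as a size/index
arr = np.zeros(int(n_batches))
print(arr.shape)  # (5,)
```

In practice this tends to mean that integer division (`//`) or an explicit `int(...)` cast is needed wherever a count or index is computed; the exact offending line depends on the installed NumPy version.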
I need more information than that. What is the complete error message? The synthesis of the folk-rnn output for the Endless Session is done by a different program I have written. You can ask me questions by emailing me; a Google search will turn up my address.