About

I am an Associate Professor in the Speech, Music and Hearing Division of the School of Electrical Engineering and Computer Science at KTH Royal Institute of Technology, Sweden. Before that I was a Lecturer in Digital Media at the Centre for Digital Music, School of Electronic Engineering and Computer Science, Queen Mary University of London.

My research interests include digital signal processing for sound and music signals, machine listening, evaluation, and algorithmic composition. I currently have an ERC Consolidator Grant: MUSAiC: Music at the Frontier of Artificial Creativity and Criticism (ERC-2019-COG No. 864189). Here’s my Google Scholar page. Here’s my SoundCloud page. Here’s my YouTube channel. Here are some of my caricatures. I also used to paint watercolors while running.

15 thoughts on “About”

  1. Pingback: MusICA Seminars

  2. Hello there. It seems you and I have had a nearly identical thought process regarding RNNs and large quantities of ABC files. I took a similar approach earlier this year, but restricted the training set to reels, and also included some of the other large databases (Norbeck, etc.). After a bit of tweaking I achieved some interesting results (along with a fair amount of rubbish). Here’s a friend of mine playing one RNN composition that took his fancy: https://youtu.be/hmvbr8U5yb0. I’ve also been experimenting with Swedish music, and with moving away from ABC to a more easily parsed music format (with mixed results). It would be good to chat about our approaches. Cheers, Ben


  3. Pingback: One Year Already??? No Way… | Rhapsody in µ

  4. Hey, I wanted to create some folk music with the GitHub code, but I could not figure out which Python script to use (when setting up a conda env), which requirements to install, how to train, how to generate, or which commit corresponds to versions 1, 2 and 3.
    I know that you also only use the code and are not a developer, but I hope you can help me out with that information; I also couldn’t find any explanation. Thanks in advance.


  5. Thanks, I managed to create some tunes, but the training did not work: I get the exception “TypeError: ‘numpy.float64’ object cannot be interpreted as an index”. Also, the tunes I generate and listen to are very different from those at http://www.eecs.qmul.ac.uk/~sturm/research/RNNIrishTrad/index.html;
    it's like the really nice fancy beat is missing. The tunes I create sound more like the ones from https://folkrnn.org/. Is the training data different? Or is it mixing two tunes together? I'm very interested in figuring this out. Is there a better place than this blog to ask these questions? Sorry if you don't like to see them here.


    • I need more information than that. What is the complete error message? The synthesis of the output of folkrnn for the Endless Session is done by a different program I have written. You can ask me questions by emailing me; a Google search will turn that up.
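      For what it’s worth, that kind of TypeError commonly appears when code written against an older NumPy passes a float where an integer size or index is required (without the full traceback I can’t confirm that is the cause here); a minimal sketch of the failure and the usual cast-to-int fix:

      ```python
      import numpy as np

      # Old NumPy silently truncated floats used as array sizes or indices;
      # recent NumPy raises a TypeError much like the one quoted above.
      n = np.float64(5.0)

      try:
          np.zeros(n)  # raises TypeError on recent NumPy
      except TypeError as err:
          print(err)

      # The usual fix is an explicit integer cast at the offending call site:
      arr = np.zeros(int(n))
      print(arr.shape)  # (5,)
      ```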


Leave a comment