October 2018 MF Tune of the Month

Voting is open for next month’s machine folk tune of the month.

Tonight on the Seine in Paris, a group of us will be playing the September 2018 MF Tune of the Month, The Silver Keyboard, at the 2018 ISMIR banquet. Recordings may follow!

An experimental album of Irish traditional music and computer-generated tunes

For the past six months, the music album “Let’s Have Another Gan Ainm” has been distributed to reviewers and listeners in Europe and the USA as a new release of Irish traditional music. We are now publicly revealing that each track on the album includes computer-generated material, specifically material generated by our deep neural network folk-rnn.

Reviews of the album, both published and private, have been very positive. The album even received radio play. More information about our experiment and the music on the album (e.g., how each track came to be) can be found in our technical report. We show exactly what the computer generated and the changes that were made. More details about the reception of the album will be provided at a later time.

In the meantime, enjoy the album!

Result of the first folk-rnn Composition Competition

The winning piece in the first folk-rnn composition competition is Gwyl Werin for mixed quartet by Derri Joseph Lewis. He used a tune generated by folk-rnn as a basis for both melodic fragments and harmonic construction in his piece. He chose the model trained without the repeat signs, a 9/8 meter, C mixolydian mode, an initialisation of “D E F”, and a temperature of 1.07. This produced the output here.
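For readers curious about what the “temperature” setting does, here is a minimal, hypothetical Python sketch of temperature-scaled sampling from a next-token distribution, using a seed and temperature like those above. The token names, the sample_with_temperature helper and the dummy logits are illustrative assumptions, not folk-rnn’s actual code or vocabulary.

    import numpy as np

    def sample_with_temperature(logits, temperature=1.0, rng=None):
        """Sample an index from unnormalised logits after temperature scaling.

        temperature > 1 flattens the distribution (more adventurous choices);
        temperature < 1 sharpens it (more conservative choices).
        """
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())  # numerically stable softmax
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    # Hypothetical seed resembling the settings above: 9/8 metre, C mixolydian,
    # and the starting pitches "D E F". The real folk-rnn model and token
    # vocabulary differ; this only illustrates the sampling step.
    seed_tokens = ["M:9/8", "K:Cmix", "D", "E", "F"]

    # Stand-in for the network's next-token logits; a real model would condition
    # on the seed tokens and on everything sampled so far.
    dummy_logits = [2.0, 0.5, -1.0, 0.0]
    next_index = sample_with_temperature(dummy_logits, temperature=1.07)
    print("sampled token index:", next_index)

With a temperature slightly above 1, as here, the distribution over next tokens is flattened a little, so the sampled material tends to wander somewhat more than the model’s most probable continuations.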

The judges found Lewis’s piece well balanced, with nice contrasts and a variety of textures and motives in its construction. The occasional solo moments in the piece echo aspects of the generated material without imitating it directly. The piece illustrates a further approach to utilising folk-rnn as part of the creative process. (For a recent survey, see Sturm, Ben-Tal, et al., “Machine learning research that matters for music creation: A case study”, J. New Music Research 2018.) We look forward to hearing the piece played by the New Music Players at our upcoming concert in October at the O’Reilly AI conference in London.

Machine Learning Research that Matters for Music Creation: A Case Study

Our article, Sturm, Ben-Tal, Monaghan, Collins, Herremans, Chew, Hadjeres, Deruty and Pachet, “Machine Learning Research that Matters for Music Creation: A Case Study”, is now in press at the Journal of New Music Research. The accepted version can be found here: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233627

My one-line précis: we take several music generation models, apply them to music creation culminating in a public concert (May 23, 2017; videos are on The Bottomless Tune Box YouTube page), and finally reflect broadly on how the experience matters for machine learning research and vice versa.

We used four different machine learning methods to compose a variety of musical works performed at the concert. We discuss the various technical and creative decisions made in the composition of the pieces. Each of the composers/musicians then reflects on the experience, answering questions about what machine learning contributed to their work, the roles of human and machine creativity, and how these matter for the audience. We then summarise responses of the audience. The fifth section reflects on our total experience aligned with Kiri Wagstaff’s principles of making applied machine learning research matter:

  1. measure the concrete impact of an application of machine learning with practitioners in the originating problem domain;
  2. with the results from the first principle, improve the particular application of machine learning, the definition of the problem, and the domain of machine learning in general.

The penultimate section identifies several ways our work contributes to machine learning research applied to music creation, or to machine learning in general. In summary:

  1. Music creation via machine learning should be analysed along more varied dimensions than degrees of “success” or “failure”. A “successful” model (by quantitative measures of machine learning, e.g., cross-entropy) may still not generate interesting or useful music; and a “failing” model may result in creative opportunities. In any case, work with music experts/practitioners — it’s necessary and illuminating.
  2. A trained machine learning model that is useful and successful may still be totally naive of what it is doing. Work with music experts/practitioners to probe the “musical intelligence” of the model and its limits. This will reveal ways to improve the model, and make one’s discussion of the model more accurate and scientific.
  3. Music creators are particular and idiosyncratic. There is no “universal model” of music (just like there’s no “universal model” of dining). Hence, aim/expect to make machine learning models for music creation pliable enough for calibration to particular users. (How to calibrate a model without resorting to more data is an interesting research problem, and one of my goals in analysing the parameters of folkrnn models.)
  4. The data on which a model is trained does not necessarily limit its application. folk-rnn is trained on folk music of Ireland and the UK, but some of the music created with it doesn’t sound that way at all.
  5. In communities that design tools (software, hardware, analogue, etc.) for artists, it is probably well known that users will discover and exploit bugs and other unintended features in their work. (This fact motivates the requirement that every Csound update remain backward compatible.) Expect unintended (mis)use of music creation models, and design them to encourage such opportunities.
  6. Music data is the remnants of a human activity. The machine learning researcher, in exploiting such data, has the responsibility to reflect on their use of it and its impact on the communities from which it comes. For instance, folk-rnn models are trained on thousands of transcriptions of folk music from Ireland. Our responsible use of that data involves appreciating and accurately portraying the living tradition, working with the data knowing that it is a deficient and distorted representation of the human experience, and working together with its experts and practitioners to assess the technical and ethical impacts of the research.

We hope that our article will serve as some sort of model for evaluating and thinking more broadly about applications of machine learning to music creation.