Result of the first folk-rnn Composition Competition

The winning piece in the first folk-rnn composition competition is Gwyl Werin for mixed quartet by Derri Joseph Lewis. He used a tune generated by folk-rnn as a basis for both melodic fragments and harmonic construction in his piece. He chose the model trained without the repeat signs, a 9/8 meter, C mixolydian mode, an initialisation of “D E F”, and a temperature of 1.07. This produced the output here.
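
For readers unfamiliar with these settings: the trained network outputs a probability distribution over the next token of the transcription, and the temperature rescales that distribution before a token is sampled. Below is a minimal sketch of temperature sampling, not the actual folk-rnn code; the seed tokens and the model and vocab names are only illustrative.

    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=None):
        """Sample one token index from model output logits, rescaled by temperature.

        Temperatures above 1 flatten the distribution (more surprising choices);
        temperatures below 1 sharpen it (more conservative choices).
        """
        rng = rng or np.random.default_rng()
        scaled = np.array(logits, dtype=float) / temperature
        scaled -= scaled.max()                        # for numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()
        return rng.choice(len(probs), p=probs)

    # Example with dummy logits: one draw at the temperature Lewis used.
    print(sample_next_token([2.0, 1.0, 0.1], temperature=1.07))

    # Hypothetical generation loop (model and vocab stand in for a trained
    # folk-rnn network and its token vocabulary):
    # tokens = ["M:9/8", "K:Cmix", "D", "E", "F"]
    # while tokens[-1] != "</s>":
    #     logits = model.predict(tokens)
    #     tokens.append(vocab[sample_next_token(logits, temperature=1.07)])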

The judges found Lewis’ piece well balanced, with nice contrasts and a variety of textures and motives in its construction. The occasional solo moments in the piece echo aspects of the generated material, though they do not imitate it directly. This piece illustrates a further approach to utilising folk-rnn as part of the creative process. (For a recent discussion, see Sturm, Ben-Tal, et al., “Machine learning research that matters for music creation: A case study”, J. New Music Research 2018.) We look forward to hearing the piece played by the New Music Players at our upcoming concert in October at the O’Reilly AI conference in London.


Machine Learning Research that Matters for Music Creation: A Case Study

Our article, Sturm, Ben-Tal, Monaghan, Collins, Herremans, Chew, Hadjeres, Deruty and Pachet, “Machine Learning Research that Matters for Music Creation: A Case Study”, is now in press at the Journal of New Music Research. The accepted version can be found here: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233627

My one-line precis: We take several music generation models, apply them to music creation culminating in a public concert (May 23, 2017; videos are on The Bottomless Tune Box YouTube page), and finally reflect broadly on how the experience matters for machine learning research and vice versa.

We used four different machine learning methods to compose a variety of musical works performed at the concert. We discuss the various technical and creative decisions made in the composition of the pieces. Each of the composers/musicians then reflects on the experience, answering questions about what machine learning contributed to their work, the roles of human and machine creativity, and how these matter for the audience. We then summarise the responses of the audience. The fifth section reflects on our total experience in light of Kiri Wagstaff’s principles for making applied machine learning research matter:

  1. measure the concrete impact of an application of machine learning with practitioners in the originating problem domain;
  2. with the results from the first principle, improve the particular application of machine learning, the definition of the problem, and the domain of machine learning in general.

The penultimate section identifies several ways our work contributes to machine learning research applied to music creation, and to machine learning research more generally. In summary:

  1. Music creation via machine learning should be analysed along more varied dimensions than degrees of “success” or “failure”. A “successful” model (by quantitative measures of machine learning, e.g., cross-entropy) may still not generate interesting or useful music; and a “failing” model may result in creative opportunities. In any case, work with music experts/practitioners — it’s necessary and illuminating.
  2. A trained machine learning model that is useful and successful may still be totally naive about what it is doing. Work with music experts/practitioners to probe the “musical intelligence” of the model and its limits. This will reveal ways to improve the model, and make one’s discussion of the model more accurate and scientific.
  3. Music creators are particular and idiosyncratic. There is no “universal model” of music (just like there’s no “universal model” of dining). Hence, aim/expect to make machine learning models for music creation pliable enough for calibration to particular users. (How to calibrate a model without resorting to more data is an interesting research problem, and one of my goals in analysing the parameters of folk-rnn models.)
  4. The data on which a model is trained does not necessarily limit its application. folk-rnn is trained on folk music of Ireland and the UK, but some of the music created with it doesn’t sound that way at all.
  5. In communities that design tools (software, hardware, analogue, etc.) for artists, it is probably well known that users will discover and exploit bugs and other unintended features in their work. (This fact motivates the requirement of backward compatibility in every update of Csound.) Expect unintended (mis)use of a music creation model, and design it to encourage such opportunities.
  6. Music data is the remnants of a human activity. The machine learning researcher, in exploiting such data, has the responsibility to reflect on their use of it and its impact on the communities from which it comes. For instance, folk-rnn models are trained on thousands of transcriptions of folk music from Ireland. Our responsible use of that data involves appreciating and accurately portraying the living tradition, working with the data knowing that it is a deficient and distorted representation of the human experience, and working together with its experts and practitioners to assess the technical and ethical impacts of the research.

We hope that our article will serve as some sort of model for evaluating and thinking more broadly about applications of machine learning to music creation.

The Machine Folk Session passes 200 tunes

At some point every morning I have a listen to a random set of seven tunes on the Endless folk-rnn Traditional Music Session, and nearly every time I hear one or two things that are worth keeping. I transfer these to The Machine Folk Session website, which as a result now holds more than 200 machine folk tunes (many of which have been uploaded by users of folkrnn.org). Here are three of my favourites.

Here’s my setting of “Jump at the Sun” generated by the v1 system (which also generated the title):

[score image: “Jump at the Sun”]

Ideally, the musicians who are not playing should jump where it’s obvious.

Here’s my setting of tune 189 by the v3 folk-rnn system:

[score image: tune 189]

The system has created an AABBCC structure, with a very different B part and a 4-bar C part.

Here’s my setting of tune 5292 by the v2 folk-rnn system:

[score image: tune 5292]

Quite some harmonic adventurousness from this little system!

Summer’s Almost Done and Gone

Here’s my contribution to The Machine Folk Session Tune of the Month for August 2018:

I think it turned out to be a nice tune with just a few adjustments. Here’s the original tune, produced by folk-rnn:

[score image: the original tune generated by folk-rnn]

The weakest part of this tune to me is bars 11-12. I felt that repeating the idea of bar 10 with a descending harmony is just perfect for describing how I feel at the close of summer. A further adjustment was simply adding first and second endings to each part. Here’s my tune:

[score image: my setting of the tune]