Partnerships Tickets, Tue, 23 May 2017 at 19:00 | Eventbrite

This unique concert will feature works created with computers as creative partners, drawing on a uniquely human tradition: instrumental folk music. We aren’t so interested in whether a computer can compose a piece of music as well as a human, but rather in how we composers and musicians can use artificial intelligence to explore creative domains we hadn’t thought of before. This follows on from recent sensational stories of artificial intelligence producing both remarkable achievements (a computer beating humans at Jeopardy!) and unintended consequences (a chatbot mimicking racist tropes). We are now living in an age, for better or worse, when artificial intelligence is seamlessly integrated into the daily lives of many. It is easy to feel surrounded and threatened by these new tools, but at the same time empowered by them.

Our concert is centred around a computer program we have trained on over 23,000 “Celtic” tunes of the kind typically played in pubs and festivals around Ireland, France and the UK. We will showcase works in which composers and musicians co-create music with our program, drawing upon the features it has learned from this tradition and combining them with human imagination. A trio of traditional musicians led by master Irish musician Daren Banarsë will play one set of computer-generated “Celtic” tunes. Ensemble x.y will perform a work by Oded Ben-Tal, a 21st-century homage to folk-song arrangements by composers such as Brahms, Britten and Berio. They will also perform a work by Bob L. Sturm created from material the computer program has self-titled “Chicken.” Another piece you will hear involves two computer programs co-creating music together: our system generates a melody and another system harmonises it in various styles it has learned, e.g., a Bach chorale. Our concert will provide an exciting glimpse into how new musical opportunities are enabled by partnerships: between musicians from different traditions; between scientists and artists; and, last but not least, between humans and computers.


Two upcoming events!


  1. Saturday March 25, 12-14h, at London South Bank University, there will be a special workshop as part of the Inside Out Festival, where a trio of master Irish musicians will play traditional music generated by our system, folk-rnn. Participants are invited to bring their own instruments and learn one of these new tunes. Registration will be announced shortly.
  2. Tuesday May 23, 19-20h30, at QMUL, there will be a unique concert featuring works created with computers as creative partners, drawing on a uniquely human tradition: instrumental folk music. We aren’t so interested in whether a computer can compose a piece of music as well as a human, but rather in how we can use artificial intelligence to explore creative domains we hadn’t thought of before. Tickets will be available shortly.

The Drunken Pint, a folk-rnn original

Now, here’s a wild tune composed by our folk-rnn system:

T: Drunken Pint, The
M: 2/4
L: 1/8
K: Gmaj
|: G |B/c/d =c>B | AB/G/G B/A/ | G/A/G/F/ ED | Be/d/ dB/A/ |
G>D Bd | c2 B2 | A/B/A/G/ F/G/A/c/ | BG G :|
|: A/B/ |c2 E/F/G | ed- de | c>B AG | G>^F GA |
BG E>D | EG FE | D/^c/d/^c/ d^c | d3- :|

Here is a slightly cleaned version in common practice notation for those playing at home.

[Score: drunken.png]
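If you would rather hear the raw output than read the score, here is a minimal sketch of one way to render the ABC above, assuming you have the music21 Python library installed (not something our system uses; it is only a convenience for playing along at home, and your ABC reader may grumble about the dangling tie at the end). Most ABC tools also expect an X: index field, so one is prepended.

# Render the folk-rnn ABC transcription above to MIDI, assuming music21
# is installed (pip install music21). Purely illustrative, not part of folk-rnn.
from music21 import converter

abc_tune = '''X: 1
T: Drunken Pint, The
M: 2/4
L: 1/8
K: Gmaj
|: G |B/c/d =c>B | AB/G/G B/A/ | G/A/G/F/ ED | Be/d/ dB/A/ |
G>D Bd | c2 B2 | A/B/A/G/ F/G/A/c/ | BG G :|
|: A/B/ |c2 E/F/G | ed- de | c>B AG | G>^F GA |
BG E>D | EG FE | D/^c/d/^c/ d^c | d3- :|'''

score = converter.parse(abc_tune, format='abc')  # parse the ABC text
score.write('midi', fp='drunken_pint.mid')       # export MIDI to listen to
# score.show()                                   # or open it in a notation editor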

I found this tune recently among the 70,000+ transcriptions we had the system generate in August 2015. (Actually, this tune comes from a model I built using char-rnn applied to transcriptions I culled from thesession.org.) Anyhow, the title is what caught my eye at first: a title created entirely by the system. Then I was happy to see that the tune has an AABB structure, and that the system was smart enough to deal with those two odd quaver pickups. It wasn’t until I learned to play it that I really began to appreciate it. What a fun drunken riot this little system has crafted!

Now who wants to create the drunken dance that this piece should accompany??

PROCLAMATION TO PROTECT MY STUDENTS FROM THE FASCISTS CONTROLLING THE UNITED STATES

By the authority vested in me as a leader in research and education in the currently United Kingdom, and to protect my students from the fascists now controlling the formerly United States, it is hereby ordered as follows:

     Section 1.  Purpose.  The wholesale scapegoating of an entire class of people, based on their origin, for the actions of a specific few, while ignoring the carnage caused by homegrown good old boy terrorists like Dylann Roof and Governor Rick Snyder, is so god damn fucking backward that those fucks must be enjoying the view of the polyp-infested large intestines they call “brains”.

     Sec. 2.  Policy.  It has always been my policy to protect my students from fucking fascists, and so I must now suspend any and all activities related to publishing at or attending any academic research conferences in the USA until such time as those flag fucking flint bits see fit to fuck the fuck off.

     Sec. 3.  Conclusion.  Ignorance always loses. Now let’s get back to work!

PhD positions open!

Our PhD in Media and Arts Technology is an innovative interdisciplinary programme in the sciences and technologies that transform the creative sector, with a special focus on Sound, Music, Media and Interaction. Its mission is to produce postgraduates who have world-class skills in technical and scientific research, creativity, and building and using software and hardware, and who are prepared to contribute to the world’s Digital Economy.

Our four-year PhD programme combines PhD research with taught modules and a five-month placement at an industry host or research group. We welcome applications from a range of backgrounds; what we require is clear evidence of both technical and creative abilities.

Apply here:
http://www.mat.qmul.ac.uk/programmes/phd-programme/

Taking A Christmas Carol Toward the Dodecaphonic by Derp Learning

Last winter, I experimented with using our folk-rnn system to complete well-known Christmas carols. That resulted in my little composition, “We three layers o’ hidd’n units are”, composed with the assistance of our character-based model:

I have decided to do the same this year, but instead using our token-based approach, and making use of intermediate models that come from tuning our pretrained system to act more “dodecaphonic”, essentially motivating it to use all pitches of the equal-tempered scale equally. My collaborator Oded generated 1200 transcriptions using a serial approach: a SuperCollider program generates a tone row and then repeats it four times, with subsequent appearances undergoing transposition, inversion and retrograde at random. We then began to tune our folk-specialised system with this dataset. The tuning process uses minibatches of 64 transcriptions, 5% of the dataset as validation, a learning rate of 0.0003, and no dropout. We save the model parameters after each epoch.
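To give a concrete idea of what those serial transcriptions look like, here is a rough Python sketch of the generating procedure described above. The real dataset came from Oded’s SuperCollider program; the metre, note durations and ABC formatting below are my own assumptions, purely for illustration.

# Generate serial "transcriptions": a 12-tone row followed by four more
# statements of it, each transposed, inverted and/or retrograded at random.
# A rough stand-in for Oded's SuperCollider program, for illustration only.
import random

# Map pitch classes 0..11 to ABC-style note names (accidentals written as
# sharps; ABC's bar-wide accidental rule is ignored here for brevity).
ABC_NOTES = ['C', '^C', 'D', '^D', 'E', 'F', '^F', 'G', '^G', 'A', '^A', 'B']

def transform(row):
    """Return a randomly transposed, possibly inverted and/or retrograded copy."""
    out = list(row)
    if random.random() < 0.5:                       # inversion about the first pitch
        out = [(2 * out[0] - p) % 12 for p in out]
    if random.random() < 0.5:                       # retrograde
        out = out[::-1]
    shift = random.randrange(12)                    # transposition
    return [(p + shift) % 12 for p in out]

def serial_transcription():
    """One transcription: a tone row plus four transformed repetitions."""
    row = random.sample(range(12), 12)              # all 12 pitch classes, once each
    statements = [row] + [transform(row) for _ in range(4)]
    notes = [ABC_NOTES[p] for stmt in statements for p in stmt]
    bars = [' '.join(notes[i:i + 4]) for i in range(0, len(notes), 4)]
    return 'M:4/4\nK:Cmaj\n' + ' | '.join(bars) + ' |'

random.seed(0)
for _ in range(3):                                  # Oded generated 1200 of these
    print(serial_transcription())
    print()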

I use the ABC code of the first three bars of the carol, “It came upon a midnight clear”, to initialise each generative system, i.e., ‘M:6/8 K:Cmaj G, |: E 2 D D C A, | G, 2 A, G, 2 G, | A, B, C C D E |’. I then curate from among the generated materials and arrange them to create my 2016 Christmas composition, “It Came Out From A Pretrained Net” (for flute, clarinet, bassoon, three French horns, trumpet, and something like 20 handbells):

Here is the score. I use red boxes to mark the material generated by each of our models. I make no pitch adjustments to the material in those boxes, save octave transpositions. The opening subject comes from the pretrained folk-rnn model. The subjects that follow come one by one from models at increasing epochs of tuning, ending with material generated by the model after the 12th epoch (starting m. 102). It is really fun to hear the system begin to move off the tonal cliff.
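For the curious, the seeding step I describe above is simple to sketch: split the seed into folk-rnn-style tokens, prime the model with them, then sample the continuation one token at a time. The snippet below is only a toy: the stand-in "model" returns a uniform distribution over a tiny vocabulary, and the end-of-transcription token is hypothetical, but the priming and temperature-sampling loop is the general idea.

# Toy sketch of priming a token-based model with an ABC seed and sampling a
# continuation. model_step is a uniform stand-in, NOT the real folk-rnn network.
import math
import random

SEED = 'M:6/8 K:Cmaj G, |: E 2 D D C A, | G, 2 A, G, 2 G, | A, B, C C D E |'
VOCAB = sorted(set(SEED.split())) + [':|', '</s>']  # toy vocabulary; the real one is much larger

def model_step(history):
    """Stand-in for the trained network: log-probabilities over VOCAB.
    A real model would condition on the token history."""
    return [math.log(1.0 / len(VOCAB))] * len(VOCAB)

def sample_continuation(seed, temperature=1.0, max_tokens=60):
    tokens = seed.split()                           # prime with the seed tokens
    for _ in range(max_tokens):
        logps = model_step(tokens)
        weights = [math.exp(lp / temperature) for lp in logps]
        nxt = random.choices(VOCAB, weights=weights, k=1)[0]
        if nxt == '</s>':                           # hypothetical end-of-transcription token
            break
        tokens.append(nxt)
    return ' '.join(tokens)

print(sample_continuation(SEED))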

Merry Christmas and Happy Season! See you in the new year.