The Drunken Pint, a folk-rnn original

Now, here’s a wild tune composed by our folk-rnn system:

T: Drunken Pint, The
M: 2/4
L: 1/8
K: Gmaj
|: G |B/c/d =c>B | AB/G/G B/A/ | G/A/G/F/ ED | Be/d/ dB/A/ |
G>D Bd | c2 B2 | A/B/A/G/ F/G/A/c/ | BG G :|
|: A/B/ |c2 E/F/G | ed- de | c>B AG | G>^F GA |
BG E>D | EG FE | D/^c/d/^c/ d^c | d3- :|

Here is a slightly cleaned version in common practice notation for those playing at home.

drunken.png
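If you want to typeset or play the ABC yourself, here is a minimal sketch using the music21 library (my suggestion, not the tool I used; any ABC-aware program such as EasyABC works too). I add an X: reference number because most ABC parsers expect one, and the dangling tie in the final bar may need cleaning up for stricter parsers:

```python
# Minimal sketch: render the generated ABC with music21 (assumes music21 is installed).
from music21 import converter

abc_tune = """X:1
T: Drunken Pint, The
M: 2/4
L: 1/8
K: Gmaj
|: G |B/c/d =c>B | AB/G/G B/A/ | G/A/G/F/ ED | Be/d/ dB/A/ |
G>D Bd | c2 B2 | A/B/A/G/ F/G/A/c/ | BG G :|
|: A/B/ |c2 E/F/G | ed- de | c>B AG | G>^F GA |
BG E>D | EG FE | D/^c/d/^c/ d^c | d3- :|"""

score = converter.parse(abc_tune, format='abc')
score.show()                                       # open in your notation editor
# or: score.write('midi', fp='drunken_pint.mid')   # write a MIDI file instead
```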

I found this tune recently among the 70,000+ transcriptions we had the system generate in August 2015. (Actually, this tune comes from a model I built using char-rnn applied to transcriptions I culled from thesession.org.) Anyhow, the title is what caught my eye at first — a title created entirely by the system. Then I was happy to see that the tune has an AABB structure, and the system was smart enough to deal with those two odd quaver pickups. It wasn't until I learned to play it that I really began to appreciate it. What a fun drunken riot this little system has crafted!

Now who wants to create the drunken dance that this piece should accompany??

PROCLAMATION TO PROTECT MY STUDENTS FROM THE FASCISTS CONTROLLING THE UNITED STATES

By the authority vested in me as a leader in research and education in the currently United Kingdom, and to protect my students from the fascists now controlling the formerly United States, it is hereby ordered as follows:

     Section 1.  Purpose.  The wholesale scapegoating of an entire class of people based on their origin for the actions of a specific few, while ignoring the carnage caused by homegrown good old boy terrorists like Dylann Roof and Governor Rick Snyder, is so god damn fucking backward that those fucks must be enjoying the view of the polyp-infested large intestines they call “brains”.

     Sec. 2.  Policy.  It has always been my policy to protect my students from fucking fascists, and so I must now suspend any and all activities related to publishing at or attending any academic research conferences in the USA until that time when those flag fucking flint bits see fit to fuck the fuck off.

     Sec. 3.  Conclusion.  Ignorance always loses. Now let’s get back to work!

PhD positions open!

Our PhD in Media and Arts Technology is an innovative interdisciplinary programme in the sciences and technologies that transform the creative sector, with a special focus on Sound, Music, Media and Interaction. Its mission is to produce postgraduates who have world-class skills in technical and scientific research, creativity, and building and using software and hardware, and who are prepared to contribute to the world’s Digital Economy.

Our four-year PhD programme combines PhD research with taught modules and a five-month placement at an industry host or research group. We welcome applications from a range of backgrounds. What we require is some clear evidence of both technical and creative abilities.

Apply here:
http://www.mat.qmul.ac.uk/programmes/phd-programme/

Taking A Christmas Carol Toward the Dodecaphonic by Derp Learning

Last winter, I experimented with our folk-rnn system for completing well-known Christmas carols. That resulted in my little composition, “We three layers o’ hidd’n units are”, composed with the assistance of our character-based model:

I have decided to do the same this year, but instead using our token-based approach, and making use of the intermediate models that come from tuning our pretrained system to act more “dodecaphonic” — essentially, motivating it to use all pitches of the equal-tempered scale equally. My collaborator Oded generated 1200 transcriptions using a serial approach (a SuperCollider program generates a tone row, and then repeats it 4 times, with the subsequent appearances undergoing transposition, inversion, and retrograde at random). We then began to tune our folk-specialised system on this dataset. The tuning process uses minibatches of 64 transcriptions, 5% of the dataset as validation, a learning rate of 0.0003, and no dropout. We save the model parameters after each epoch.
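To make the data-generation step concrete, here is a rough Python re-imagining of the serial procedure. Oded’s actual program is written in SuperCollider, and the specifics below — the ABC-style pitch spellings, the coin-flip choice of transformations — are my assumptions, not his code:

```python
# Rough sketch of the serial transcription generator (my own Python re-imagining,
# not Oded's SuperCollider program).
import random

PITCH_CLASSES = ['C', '^C', 'D', '^D', 'E', 'F', '^F', 'G', '^G', 'A', '^A', 'B']

def random_tone_row():
    """A random ordering of the 12 pitch classes (0..11)."""
    row = list(range(12))
    random.shuffle(row)
    return row

def transpose(row, interval):
    return [(p + interval) % 12 for p in row]

def invert(row):
    """Invert about the row's first pitch class."""
    first = row[0]
    return [(2 * first - p) % 12 for p in row]

def retrograde(row):
    return row[::-1]

def transformed(row):
    """Apply transposition, inversion, and retrograde at random."""
    out = transpose(row, random.randrange(12))
    if random.random() < 0.5:
        out = invert(out)
    if random.random() < 0.5:
        out = retrograde(out)
    return out

def make_transcription():
    """One 'dodecaphonic' transcription: a tone row plus four transformed repeats."""
    row = random_tone_row()
    phrases = [row] + [transformed(row) for _ in range(4)]
    return ' | '.join(' '.join(PITCH_CLASSES[p] for p in phrase) for phrase in phrases) + ' |'

if __name__ == '__main__':
    for _ in range(3):   # Oded generated 1200 of these
        print(make_transcription())

# Fine-tuning settings from the post: minibatches of 64 transcriptions, 5% held out
# for validation, learning rate 0.0003, no dropout, parameters saved after each epoch.
```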

I use the ABC code of the first three bars of the carol, “It came upon a midnight clear”, to initialise each generative system, i.e., ‘M:6/8 K:Cmaj G, |: E 2 D D C A, | G, 2 A, G, 2 G, | A, B, C C D E |’. I then curate from among the generated materials and arrange them to create my 2016 Christmas composition, “It Came Out From A Pretrained Net” (for flute, clarinet, bassoon, three French horns, trumpet, and something like 20 handbells):
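For anyone curious what “initialise” means in practice: the seed is split into the model’s space-delimited tokens, fed through the network to set its hidden state, and sampling then continues from there until an end-of-tune token. Below is a rough sketch of that loop; the `model` and `vocab` objects, and the ‘</s>’ end marker, are hypothetical stand-ins for our actual Theano implementation:

```python
# Sketch of seeding and sampling from a token-based RNN; `model` and `vocab`
# are hypothetical stand-ins, not the real folk-rnn API.
import numpy as np

seed = "M:6/8 K:Cmaj G, |: E 2 D D C A, | G, 2 A, G, 2 G, | A, B, C C D E |"
seed_tokens = seed.split()          # the token model's vocabulary is space-delimited

def sample_tune(model, vocab, temperature=1.0, max_len=500):
    """Prime the RNN with the seed tokens, then sample until the end-of-tune token."""
    state = model.initial_state()
    for tok in seed_tokens:                      # build up the hidden state
        probs, state = model.step(vocab.index(tok), state)
    out = list(seed_tokens)
    for _ in range(max_len):                     # continue from the predictive distribution
        logits = np.log(probs + 1e-9) / temperature
        p = np.exp(logits - logits.max())
        p /= p.sum()
        idx = np.random.choice(len(vocab), p=p)
        tok = vocab[idx]
        if tok == '</s>':                        # hypothetical end-of-tune marker
            break
        out.append(tok)
        probs, state = model.step(idx, state)
    return ' '.join(out)
```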

Here is the score. I use red boxes to mark the material generated by each of the models. I make no pitch adjustments to the material in those boxes, save octave transpositions. The opening subject comes from the pretrained folk-rnn model. The subjects that follow come one by one from models at increasing epochs, ending with material generated at the 12th epoch (starting m. 102). It is really fun to hear the system begin to move off the tonal cliff.

Merry Christmas and Happy Season! See you in the new year.

DSP Exam Time!

One of my favorite times of year is coming up with exam questions that will test the mettle of my students, not to mention the depth of their understanding of the fundamentals. I am really pleased with the question below. Go ahead and try it yourself!

filterquestion