Deep learning for assisting the process of music composition (part 3)

This is part 3 of my explorations of using deep learning for assisting the process of music composition. In this part, I look at creating things other than strict folk tunes with a model trained by deep learning methods on over 23,000 folk tunes. Part 1 is here. Part 2 is here.

Does the world really need tens of thousands of new reels and jigs? Maybe, maybe not; but my main motivations for composition are to create musical experiences, solve puzzles, learn, and be funny or dramatic. Toward these ends, I am finding that this music generation system can provide a wealth of materials and ideas. Here are some examples.

The system under study generated this curious little output:

Not a tiptop imitation of Western folk music, but it immediately brought to my mind drum and fife music, as well as the music performed by Indian brass bands such as the great Jaipur Kawa Brass Band. So, with a little reorchestration, editing, and effects, we transform it into something like a passing marching band:

The system generated another failed emulation of Western folk music:

and it gave me the idea to create an antiphonal duet. I enjoy the improvisatory feeling of the playing.

Our model generated a piece it calls “A Fhsoilah Kilnie”, of which the guitarist and flutist make a right mess.

I don’t know what the system was “thinking.” However, after administering some major changes under my certified artistic license, we now have a serious piece with integrity. Bonus: it’s danceable for very agile penguins and the occasional grumpy elephant seal.

Finally, when something tells me to listen to a piece titled “A Bump Of Howled Sho The fetch”, I expect something dramatic. Our system generated such a piece, to which our session performers do no justice.

Instead, I layer all of my favorite sounds, and then layer them again but amplified, to make a real big bump of howled shoing all fetches everywhere.

No doubt those fetches are now shoed by a massive bump of howled.

Weak contracts and machine learning: A presentation by Léon Bottou

These ICML 2015 slides of Léon Bottou, Two high stakes challenges in machine learning, make several great points. The first is that the train/test paradigm in machine learning/artificial intelligence actually amounts to building systems with a weak “contract”. An example Bottou gives is an object recognition system advertised with some accuracy: if one submits to that system data differing from the test-set distribution, nonsense will result, and the system no longer works. Compare this with a sorting algorithm, which can sort any numerical data no matter its composition; the sorting algorithm does not have a weak contract. The answer of “more data” for improving the performance of a weak-contract system is empty, given the bias that seems necessarily to result.
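To make the weak/strong contract distinction concrete, here is a minimal sketch of my own (not from Bottou’s slides), using a hypothetical two-Gaussian classification task in Python with scikit-learn: the classifier’s advertised accuracy holds only for data from its test distribution, while sorted() honours its contract on any numerical input.

# A minimal sketch of the "weak contract" point (my own illustration,
# not from Bottou's slides). A classifier advertised with some accuracy
# delivers it only on data from the test distribution; a sorting
# routine delivers sorted output on any numerical input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Two Gaussian classes in 2-D; `shift` translates both class means.
    X0 = rng.normal(loc=-1.0 + shift, scale=1.0, size=(n, 2))
    X1 = rng.normal(loc=+1.0 + shift, scale=1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.r_[np.zeros(n), np.ones(n)]

# Train and test on the same distribution: the advertised accuracy.
X_train, y_train = make_data(1000)
X_test, y_test = make_data(1000)
clf = LogisticRegression().fit(X_train, y_train)
print("accuracy, same distribution:   ", clf.score(X_test, y_test))

# Submit data from a shifted distribution: the weak contract is broken.
X_shift, y_shift = make_data(1000, shift=3.0)
print("accuracy, shifted distribution:", clf.score(X_shift, y_shift))

# A sorting algorithm, by contrast, honours a strong contract:
# any numerical data, whatever its distribution, comes back sorted.
print(sorted(rng.normal(loc=100.0, scale=50.0, size=5)))

On this toy problem the first score lands around 0.92, while the second collapses to roughly chance: the classifier’s “contract” was only ever with one distribution.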

The second point Bottou makes is that machine learning/artificial intelligence is all three of exact science, experimental science, and engineering. It is necessary that it be all three; however, trouble can arise when the “genres” are mixed, for instance when one claims that some estimated test error proves something of an exact nature. Third, Bottou points out that the experimental science of machine learning/artificial intelligence has been “dominated” by the train/test experimental paradigm … and this is challenging “the speed of our scientific progress.”
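As a small aside of my own (not from the slides): an estimated test error is itself an experimental measurement with uncertainty, not an exact result. A normal-approximation 95% confidence interval makes this plain for a hypothetical system making 120 errors on 1000 test points:

import math

# Hypothetical numbers for illustration: 120 errors on 1000 test points.
errors, n = 120, 1000
p_hat = errors / n  # estimated error rate
half_width = 1.96 * math.sqrt(p_hat * (1.0 - p_hat) / n)  # 95% CI
print(f"test error = {p_hat:.3f} +/- {half_width:.3f}")
# Prints "test error = 0.120 +/- 0.020" -- and even that interval
# holds only for i.i.d. draws from the same test distribution.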

Bottou motivates increasing the ambitions of machine learning/artificial intelligence from building systems with weak contracts (reproducing X amount of the ground truth of a dataset) to building systems that learn concepts: “In fact, a system that recognizes a ‘concept’ fulfils a stronger contract than a classifier that works well under a certain distribution.” Bottou also recognizes that such an increased ambition necessarily leads to evaluation that is not as convenient as comparing labels to ground truth.

Bottou’s presentation encompasses much of what I have been saying about machine music listening. I think we all want systems to learn concepts. Measuring the amount of ground truth reproduced by a system is not a relevant measure of that. The train/test paradigm must be replaced.

How music recommendation works — and doesn’t work | Brian Whitman @ variogr.am

A nice blog post summarising many important points.

Can a computer really listen to music? A lot of people have promised it can over the years, but I’ve (personally) never heard a fully automated recommendation based purely on acoustic analysis that made any sense – and I’ve heard them all, from academic papers to startups to our own technology to big-company efforts. And that has a lot to do with the expectations of the listener.

via How music recommendation works — and doesn't work | Brian Whitman @ variogr.am.

Clever Hans, Clever Algorithms: Are your machine learnings learning what you think?

I had a nice time delivering a talk at the London Big-O Meet Up the other day, which can be seen at the Skills Matter website. The discussion afterward gave me some great perspectives from people in industry, such as Beautiful Destinations, Facebook, and a variety of data science start-ups… not to mention the fascinating world of Kaggle competitions. This meet up is going in my calendar!

My slides are here.

PhD Studentship in Intelligent Machine Music Listening

Please have a look here: http://www.eecs.qmul.ac.uk/phd/research-topics/funded#phd-studentship-in-intelligent-machine-music-listening.

Applications are invited for a fully-funded PhD studentship, to seek ways to exploit novel and holistic approaches to evaluation for building machine music listening systems (and constituent parts). A major emphasis will be on answering “how” systems work and “what” they have learned to do, in relation to the success criteria of real-world use cases. The research will involve working at the intersection of digital signal processing, machine learning, and the design and analysis of experiments.

All nationalities are eligible to apply for this studentship, which will start in Autumn 2015. The studentship is for three years, and covers student fees as well as a tax-free stipend of £15,863 per annum.

Candidates must have a first-class honours degree or equivalent, or a good MSc degree in Computer Science, Electronic Engineering, or Mathematics. Candidates should be confident in digital signal processing or machine learning, and have programming experience in, e.g., R, MATLAB, or Python. Experience in research and a track record of publications are very advantageous. Formal music training is also advantageous.

The PhD supervisors will be Dr. Bob L. Sturm (Machine Listening) and Dr. Hugo Maruri-Aguilar (Statistics). Please see http://www.eecs.qmul.ac.uk/~sturm for background. The project will be based in the School of EECS, and the student will become a member of the interdisciplinary Centre for Digital Music. Informal enquiries can be made by email to Dr. Sturm (b.sturm@qmul.ac.uk).

To apply, please follow the on-line process (http://www.qmul.ac.uk/postgraduate/apply) by selecting ‘Electronic Engineering’ in the ‘A-Z list of research opportunities’ and following the instructions on the right-hand side of the web page.

Please note that instead of the ‘Research Proposal’ we request a ‘Statement of Research Interests’. Your statement should answer two questions: (i) Why are you interested in the topic described above? (ii) What relevant experience do you have? Your statement should be brief: no more than 500 words or one side of A4 paper. In addition, we would like you to send a sample of your written work. This might be a chapter of your final-year dissertation, or a published conference or journal paper. More details can be found at: http://www.eecs.qmul.ac.uk/phd/apply.php

The closing date for the applications is 1/05/15.

Interviews are expected to take place during the week of 15/06/15.