Only a few tickets left for this unique event!
September 19, 2016 saw the successful premiere edition of HORSE2016: On “Horses” and “Potemkin Villages” in Applied Machine Learning. I have now uploaded videos to the HORSE2016 YouTube channel, and posted slides to the HORSE2016 webpage. I embed the videos below with some commentary.
HORSE2016 had 10 speakers expound on a variety of interesting topics, and drew an audience of about 60 people. I am extremely pleased that the audience included several people from outside academia, including industry, government and the arts. This shows how many have recognised the extent to which machine learning and artificial intelligence are impacting our daily lives. The issues explored at HORSE2016 are essential to ensuring this impact remains beneficial rather than detrimental.
Here is my introductory presentation, “On Horse Taxonomy and Taxidermy”. This talk is all about “horses” in applied machine learning: what are they? Why is this important and relevant today? Why the metaphor, and why is it appropriate? I present an example “horse,” uncovered using an intervention experiment and a generation experiment. Finally, I discuss what a researcher should do if someone demonstrates their system is a “horse”.
Roisin Loughran presents, “When the means justifies the end: Why we must evaluate on more than mere output”. This talk is about systems that compose music, what it means to say they are “musically intelligent”, and the difficulties inherent to evaluating them. She presents some examples featuring the application of evolutionary algorithms to composing melodies.
Mathieu Lagrange presents, “Computational experiments in Science: Horse wrangling in the digital age”. This talk is about “horses” in audio content analysis, with specific examples in source separation using non-negative matrix factorisation, and acoustic scene analysis. Lagrange talks more widely about ways to improve research practices. He discusses the web-based tool SimScene for creating simulated acoustic scenes for testing machine listening systems. He also presents expLanes, a tool for facilitating and automating better practices in research, including reproducibility.
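As a rough illustration of the kind of factorisation at play in such source-separation work (this is a generic textbook sketch, not Lagrange's actual pipeline), non-negative matrix factorisation can be written in a few lines of NumPy using the classic Lee-Seung multiplicative updates:

```python
import numpy as np

def nmf(V, rank, n_iter=500, eps=1e-9, seed=0):
    """Factorise non-negative V ~ W @ H with Lee-Seung
    multiplicative updates (Frobenius-norm objective)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update spectral templates
    return W, H

# Toy "spectrogram": two sources with distinct spectral templates
# alternating over four time frames.
V = np.array([[1.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
W, H = nmf(V, rank=2)
error = np.linalg.norm(V - W @ H)
```

The columns of W play the role of spectral templates and the rows of H their activations over time; the multiplicative form keeps both factors non-negative throughout.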
Tim Hospedales presents, “Gated Neural Networks for Option Pricing: Enforcing Sanity in a Black Box Model”. This talk is about adapting the architecture and training of a neural network so that the learnt model satisfies six sanity conditions. Sanity is crucial here because the aim is to explain the market and not just predict it. It is wrong to believe that sanity will come about just by training on large amounts of real data; domain knowledge is essential. In the end, one has a model that makes good predictions with guarantees of sanity and generalisation.
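To give a flavour of building such sanity conditions into the architecture itself (a toy sketch of the general idea, not the gated network from the talk), here is a one-hidden-layer network in NumPy whose output is guaranteed non-decreasing in its input, because every weight is passed through a softplus and so kept positive:

```python
import numpy as np

def softplus(x):
    """Smooth, strictly positive, monotonically increasing."""
    return np.log1p(np.exp(x))

def monotone_net(x, U, c, w):
    """One-hidden-layer net whose output is non-decreasing in x:
    every weight passes through softplus, so it is positive, and a
    composition of increasing functions with positive weights is
    itself non-decreasing."""
    h = softplus(softplus(U) * x[:, None] + c)  # hidden layer, positive input weights
    return h @ softplus(w)                      # positive output weights

rng = np.random.default_rng(0)
U, c, w = rng.normal(size=8), rng.normal(size=8), rng.normal(size=8)
x = np.linspace(-2.0, 2.0, 50)   # hypothetical input, e.g. an underlying price
y = monotone_net(x, U, c, w)     # output is monotone in x by construction
```

The point is that the guarantee holds for any parameter values, so it survives training on noisy data, unlike a constraint one merely hopes the data will induce.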
Prof Geraint A. Wiggins presents the HORSE2016 keynote, “Trying to be Accurate; or, On the Prevention of Horses”. Prof Wiggins talks broadly about descriptive models and explanatory models, their use and role in science and engineering, and their application to language and music.
Francisco Rodríguez-Algarra presents, “You don’t hear a thing… but my Horse knows it’s Rock!” This talk is about explaining the behaviour of a machine listening system through system analysis and intervention experiments. Rodríguez demonstrates this for a specific machine music listening system that appears to be highly successful for labeling music according to “genre”. He shows how the system is actually a “horse”, i.e., it appears to solve a problem but actually does not.
Jeff Clune presents, “How much do deep neural networks understand about the images they recognize?” This talk is about the success of deep neural networks for image content analysis, and answering the question of how they are working. Clune suggests this to be the dawn of the field of “AI Neuroscience”, and draws a parallel between how neuroscientists and his team are working to uncover what neurons (real or digital) are responding to. He presents a series of four episodes of “deep visualisation”, each uncovering more and more evidence that deep neural networks trained on natural images understand high-level image concepts and contexts, such as swimming trunks, bell peppers and hamburgers. (The high resolution slides can be downloaded here.)
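The basic mechanism behind such visualisations is activation maximisation: gradient ascent on the input to make a chosen unit fire strongly. Here is a minimal NumPy sketch on a random two-layer network (purely illustrative; real deep visualisation of the kind Clune shows adds image priors and regularisers to get recognisable pictures):

```python
import numpy as np

def activation(x, W, v):
    """Response of one 'neuron' v . tanh(W x) in a tiny two-layer net."""
    return v @ np.tanh(W @ x)

def grad(x, W, v):
    """Gradient of that activation with respect to the input x."""
    t = np.tanh(W @ x)
    return W.T @ (v * (1.0 - t ** 2))

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 16))   # stand-in for a trained network's weights
v = rng.normal(size=4)

x = np.zeros(16)               # start from a blank "image"
for _ in range(2000):
    x += 0.01 * grad(x, W, v) - 0.001 * x   # ascent with mild L2 decay

baseline = activation(np.zeros(16), W, v)   # response to the blank input
boosted = activation(x, W, v)               # response to the optimised input
```

The optimised input is exactly the kind of synthetic stimulus used to probe what a digital neuron responds to, which is why the neuroscience parallel is apt.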
Ricardo Silva presents, “The role of causal inference in machine learning”. This talk introduces causal inference, and discusses how an observational study can be recast as an experimental one through which one can infer causal effects. This involves an assumption about the intervention, and tricks using graphical models. One trick is the “backdoor adjustment” whereby conditioning on a common cause allows one to estimate the effect of the treatment. Another trick is using “instrumental variables”, which allows one to infer bounds on the causal effects via a variable that is known to affect only the treatment. Silva presents several examples highlighting these tricks, and showing how causal inference can be used to overcome barriers to implementing randomised controlled trials.
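The backdoor adjustment is easy to demonstrate on simulated data. In this hypothetical example (mine, not Silva's), a binary confounder Z drives both treatment T and outcome Y; the naive contrast between treated and untreated is biased, while conditioning on the common cause Z and averaging the within-stratum contrasts over the distribution of Z recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
Z = rng.integers(0, 2, n)                        # binary common cause (confounder)
T = (rng.random(n) < 0.2 + 0.6 * Z).astype(int)  # treatment depends on Z
Y = 2.0 * T + 3.0 * Z + rng.normal(size=n)       # true causal effect of T on Y is 2

# Naive contrast: biased upwards, because Z raises both T and Y.
naive = Y[T == 1].mean() - Y[T == 0].mean()

# Backdoor adjustment: condition on Z, then average the
# within-stratum contrasts over P(Z).
adjusted = sum(
    (Y[(T == 1) & (Z == z)].mean() - Y[(T == 0) & (Z == z)].mean()) * (Z == z).mean()
    for z in (0, 1)
)
```

Here the observational study behaves like a randomised trial within each stratum of Z, which is exactly the recasting Silva describes.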
Ian Goodfellow presents, “Adversarial Examples and Adversarial Training”. This talk discusses adversarial examples, with specific instances in image content analysis. Goodfellow explains how they arise from overfitting or excessive linearity in a model. He presents several examples drawn from training on different image datasets. He shows evidence that training models with adversarial examples in the loop can increase the robustness of the learnt model to adversarial attacks of the same kind. Finally, Goodfellow presents his new library cleverhans. (Unfortunately, the video capture system cut off the last two minutes of this presentation.)
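The flavour of such attacks can be shown with the fast gradient sign method on a toy model (a hand-rolled logistic regression, not cleverhans): each input is nudged by epsilon in the sign of the loss gradient with respect to the input, and accuracy drops sharply even though no single coordinate moves far.

```python
import numpy as np

rng = np.random.default_rng(0)

# Train a tiny logistic-regression "classifier" on two well-separated blobs.
X = np.vstack([rng.normal(-1, 1, (200, 20)), rng.normal(1, 1, (200, 20))])
y = np.array([0] * 200 + [1] * 200)
w, b = np.zeros(20), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))    # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)     # gradient step on cross-entropy
    b -= 0.1 * (p - y).mean()

def accuracy(X_in):
    return ((X_in @ w + b > 0).astype(int) == y).mean()

# Fast gradient sign method: perturb every input by eps in the sign of
# the loss gradient w.r.t. the input (for logistic loss: (p - y) * w).
eps = 1.0
p = 1 / (1 + np.exp(-(X @ w + b)))
X_adv = X + eps * np.sign(np.outer(p - y, w))

clean_acc, adv_acc = accuracy(X), accuracy(X_adv)
```

The same single-step recipe, applied to deep image classifiers, produces the imperceptible perturbations Goodfellow shows; adversarial training then puts such examples back into the training loop.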
On September 14, 2016, QMUL welcomed the French contingent of DaCaRyH for a focused day of updates, discussions, an invited talk, a concert and some excellent curry.
The day began with an update from the UK partners (Bob L. Sturm, Oded Ben-Tal and Elio Quinton). Bob and Oded discussed preliminary work on music transcription modeling using recurrent neural networks, and in particular the evaluation of such models. While this preliminary work involves music using Celtic idioms (because we have the data), it encompasses our vision for adapting it to calypso music. In the coming months, we seek to adapt the model to create a “calypsofier”. Finally, Elio finished the UK presentations by discussing a nice musicological use case of detecting metric modulations in a collection. This work will continue in the coming months.
The French partners (Aurelie Helmlinger, Florabelle Spielmann, Joséphine Simonnot, Guillaume Pellerin, Thomas Fillon) then presented an update of their work. Aurelie discussed two pieces, and how they play with the calypso style. The first was a “bomb tune” that features a portion of “Nessun Dorma” from Puccini’s Turandot. It is a really fantastic example of how steelbands borrow and adapt to impress judges at annual competitions like Panorama.
The second piece discussed by Aurelie was “Calypso D Lite”. We heard the piece performed in two different versions: 1) by a steelband; 2) by a Japanese group. The differences between them were startling, with the Japanese version sounding much more like bossa nova. Aurelie presented a calypso player’s discussion of whether or not the Japanese version was calypso. One criterion to emerge was the presence of the “yoo kee tee” rhythm. Another was the sound texture.
Florabelle presented some insightful interviews with a Trinidadian drummer who works with the steelband BP Renegades. Together they identified prototypical rhythms of early calypso, modern calypso, and soca. It was interesting to see which aspects of the drumming patterns remain constant, and which are left for individual expression. Another important observation is that sound texture is very important to the discussion of whether something is calypso or not. Metal is an essential aspect of the style: hi-hats, irons, and steel pans.
The French presentations ended with Guillaume and Thomas talking about the TimeSide player, and annotation tools in Telemeta. One deliverable of DaCaRyH will be new and improved plugins for such an annotation platform.
After a lovely lunch along Regent’s Canal, Professorial Research Fellow Tim Crawford from Goldsmiths, University of London gave an invited talk about his many years of research in music and computation. He provided deep discussions about the role and importance of context, and how a lot of work has yet to be done to integrate context with content. One poignant example Tim presented was of Rostropovich playing Dvorak’s cello concerto at the 1968 Proms in London:
One YouTube comment says: “This is, in my opinion, one of the most important musical performances of the second half of the century, if not the whole of the century.” Any machine music listening system will surely miss why, because the reason is extrinsic to the recording. Questions of “music similarity” and “music recommendation” are irrelevant here. (Imagine: based on your “thumbs up” of this 1968 performance of Rostropovich, we recommend you listen to this performance of Beethoven.) This powerful example reminds us how music is a record of many, many more things than notes, rests and instruments.
We continued our day’s discussions in a proper English fashion with real ales at a local pub. We followed this up by attending a performance of the wonderful Magnetic Resonator Piano. And then concluded with excellent curry at the highly popular Tayyabs restaurant.
We have lots of exciting work ahead, including an article surveying computational musicology for the French journal Cahiers d’Ethnomusicologie.
The next meeting of DaCaRyH will be in Paris early next year.
Here is an interesting event at which I will be speaking!
Here is a link to the first volume of session tunes created by folk-rnn. (It is 3,000 tunes, about 35 megabytes, so don’t download on your phone!) The second page describes a bit about what we are interested in. Please contact me if you want to contribute! We would like at some future time to organise a session in London featuring tunes composed by and/or with a computer.
A fun event with colleagues speaking with people from diverse backgrounds!
Other than the over-sensational title, this article provides a very nice summary of fundamental problems facing some applications of machine learning. Machines learn the darndest things!