HORSE2016 success!

September 19, 2016 saw the successful premiere edition of HORSE2016: On “Horses” and “Potemkin Villages” in Applied Machine Learning. I have now uploaded the videos to the HORSE2016 YouTube channel, and posted the slides to the HORSE2016 webpage. I embed the videos below with some commentary.

HORSE2016 featured 10 speakers expounding on a variety of interesting topics, with about 60 people in the audience. I am extremely pleased that the audience included several people from outside academia: from industry, government and the arts. This shows how widely it has been recognised that machine learning and artificial intelligence are impacting our daily lives. The issues explored at HORSE2016 are essential to ensuring this impact remains beneficial rather than detrimental.

Here is my introductory presentation, “On Horse Taxonomy and Taxidermy”. This talk is all about “horses” in applied machine learning: what are they? Why is this important and relevant today? Why the metaphor, and why is it appropriate? I present an example “horse,” uncovered using an intervention experiment and a generation experiment. Finally, I discuss what a researcher should do if someone demonstrates their system is a “horse”.
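
To make the intervention idea concrete, here is a minimal sketch in Python. The data and model are entirely made up for illustration (a logistic regression that latches onto a confounded feature); it is not the “horse” from the talk, but it shows the general shape of such an experiment.

```python
# A minimal sketch of an intervention experiment (hypothetical data and
# model -- not the system analysed in the talk). A classifier is trained
# on data in which an irrelevant "confound" feature happens to correlate
# with the label. Intervening on that feature alone degrades the
# predictions, revealing that the system is a "horse".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)                      # ground-truth labels
signal = y + 0.8 * rng.normal(size=n)          # genuinely relevant feature
confound = y + 0.1 * rng.normal(size=n)        # irrelevant but correlated cue
X = np.column_stack([signal, confound])

clf = LogisticRegression().fit(X, y)
print("accuracy on confounded data:", clf.score(X, y))

# Intervention: break only the confound, leaving the true signal intact.
X_intervened = X.copy()
X_intervened[:, 1] = rng.permutation(X_intervened[:, 1])
print("accuracy after intervention:", clf.score(X_intervened, y))
```

If the system were solving the intended problem, the intervention would barely matter; a large drop in performance is evidence of a “horse”.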

Roisin Loughran presents, “When the means justifies the end: Why we must evaluate on more than mere output”. This talk is about systems that compose music, what it means to say they are “musically intelligent”, and the difficulties inherent in evaluating them. She presents some examples featuring the application of evolutionary algorithms to composing melodies.

Mathieu Lagrange presents, “Computational experiments in Science: Horse wrangling in the digital age”. This talk is about “horses” in audio content analysis, with specific examples in source separation using non-negative matrix factorisation, and acoustic scene analysis. Lagrange talks more widely about ways to improve research practices. He discusses the web-based tool SimScene for creating simulated acoustic scenes for testing machine listening systems. He also presents expLanes, a tool for facilitating and automating better practices in research, including reproducibility.
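
For readers unfamiliar with non-negative matrix factorisation, here is a minimal sketch of the decomposition on a synthetic “spectrogram”, using scikit-learn. This only illustrates the factorisation itself; it is not Lagrange’s source separation pipeline, nor SimScene or expLanes.

```python
# A minimal sketch of NMF applied to a synthetic magnitude spectrogram,
# in the spirit of the source-separation example -- not Lagrange's
# actual pipeline. V ~= W @ H, where the columns of W act as spectral
# templates and the rows of H as their activations over time.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
freq_bins, frames, sources = 64, 200, 2

# Build a toy spectrogram as a sum of two sources with fixed spectra.
W_true = rng.random((freq_bins, sources))
H_true = rng.random((sources, frames))
V = W_true @ H_true

model = NMF(n_components=sources, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(V)   # learned spectral templates (freq_bins x sources)
H = model.components_        # learned activations (sources x frames)

print("relative reconstruction error:",
      np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```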

Tim Hospedales presents, “Gated Neural Networks for Option Pricing: Enforcing Sanity in a Black Box Model”. This talk is about adapting the architecture and training of a neural network to satisfy six sanity conditions in the learnt model. Sanity is crucial here because they want to explain the market, not just predict it. It is wrong to believe that sanity will come about just by training on large amounts of real data; domain knowledge is essential. In the end, one has a model that makes good predictions with guarantees of sanity and generalisation.
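
The talk enforces its sanity conditions through the network architecture itself. As a rough illustration of the general idea of building sanity into training, here is a minimal PyTorch sketch that instead uses a soft penalty to bias a small network toward one such condition (monotonicity); the data and condition are hypothetical, and a penalty gives much weaker assurances than the by-construction guarantees discussed in the talk.

```python
# A minimal PyTorch sketch of biasing a network toward a "sanity"
# condition (here: monotonically decreasing output in the input) with a
# soft penalty. NOTE: the gated-network approach in the talk enforces
# its conditions by construction in the architecture; this penalty is
# only an illustration of the general idea.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

x = torch.linspace(0.5, 1.5, 200).unsqueeze(1)                   # toy inputs
y = torch.clamp(1.0 - x, min=0.0) + 0.01 * torch.randn_like(x)   # toy targets

for step in range(500):
    opt.zero_grad()
    pred = net(x)
    fit_loss = ((pred - y) ** 2).mean()
    # Penalise any increase of the prediction as x grows (finite differences).
    violations = torch.relu(pred[1:] - pred[:-1])
    sanity_loss = violations.mean()
    (fit_loss + 10.0 * sanity_loss).backward()
    opt.step()
```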

Prof Geraint A. Wiggins presents the HORSE2016 keynote, “Trying to be Accurate; or, On the Prevention of Horses”. Prof Wiggins talks broadly about descriptive models and explanatory models, their use and role in science and engineering, and their application to language and music.

Francisco Rodríguez-Algarra presents, “You don’t hear a thing… but my Horse knows it’s Rock!” This talk is about explaining the behaviour of a machine listening system through system analysis and intervention experiments. Rodríguez-Algarra demonstrates this for a specific machine music listening system that appears to be highly successful at labelling music according to “genre”. He shows how the system is actually a “horse”: it appears to solve the problem, but actually does not.

Jeff Clune presents, “How much do deep neural networks understand about the images they recognize?” This talk is about the success of deep neural networks for image content analysis, and answering the question of how they are working. Clune suggests this to be the dawn of the field of “AI Neuroscience”, and draws a parallel between how neuroscientists and his team are working to uncover what neurons (real or digital) are responding to. He presents a series of four episodes of “deep visualisation”, each uncovering more and more evidence that deep neural networks trained on natural images understand high-level image concepts and contexts, such as swimming trunks, bell peppers and hamburgers. (The high resolution slides can be downloaded here.)
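
A core move behind such “deep visualisation” is activation maximisation: gradient ascent on the input to find an image that maximally excites a chosen unit. Here is a minimal PyTorch sketch of that idea; a randomly initialised toy network stands in for the trained networks used in the talk, and the regulariser is a deliberately crude placeholder for the image priors his team uses.

```python
# A minimal sketch of activation maximisation: optimise the *input* so
# that a chosen unit fires strongly. A toy, randomly initialised CNN
# stands in for the trained networks used in the talk.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),
)
net.eval()

image = (0.1 * torch.randn(1, 3, 64, 64)).requires_grad_(True)
opt = torch.optim.Adam([image], lr=0.05)
target_unit = 3  # index of the output unit to visualise

for step in range(200):
    opt.zero_grad()
    activation = net(image)[0, target_unit]
    # Maximise the activation; lightly regularise to keep the image tame.
    loss = -activation + 1e-3 * image.norm()
    loss.backward()
    opt.step()
```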

Ricardo Silva presents, “The role of causal inference in machine learning”. This talk introduces causal inference, and discusses how an observational study can be recast as an experimental one through which one can infer causal effects. This involves an assumption about the intervention, and tricks using graphical models. One trick is the “backdoor adjustment” whereby conditioning on a common cause allows one to estimate the effect of the treatment. Another trick is using “instrumental variables”, which allows one to infer bounds on the causal effects via a variable that is known to affect only the treatment. Silva presents several examples highlighting these tricks, and showing how causal inference can be used to overcome barriers to implementing randomised controlled trials.
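
Here is a minimal worked example of the backdoor adjustment on simulated data (the numbers are hypothetical, not from the talk): a confounder Z drives both treatment T and outcome Y, so the naive difference in means is biased, while averaging the Z-conditional differences recovers the true effect.

```python
# A minimal worked example of the backdoor adjustment on simulated data.
# Z confounds the effect of treatment T on outcome Y; the true causal
# effect of T is 2.0 by construction.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.integers(0, 2, n)                            # confounder
t = rng.binomial(1, np.where(z == 1, 0.8, 0.2))      # treatment depends on Z
y = 2.0 * t + 3.0 * z + rng.normal(size=n)           # outcome depends on both

naive = y[t == 1].mean() - y[t == 0].mean()          # biased by Z

# Backdoor adjustment: condition on Z, then average over its distribution.
adjusted = sum(
    (y[(t == 1) & (z == v)].mean() - y[(t == 0) & (z == v)].mean())
    * (z == v).mean()
    for v in (0, 1)
)
print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")  # ~3.8 vs ~2.0
```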

Ian Goodfellow presents, “Adversarial Examples and Adversarial Training”. This talk discusses adversarial examples, with specific instances in image content analysis. Goodfellow explains how they arise from overfitting or excessive linearity in a model. He presents several examples drawn from training on different image datasets. He shows evidence that training models with adversarial examples in the loop can increase the robustness of the learnt model to adversarial attacks of the same kind. Finally, Goodfellow presents his new library cleverhans. (Unfortunately, the video capture system cut off the last two minutes of this presentation.)
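
For the curious, here is a minimal PyTorch sketch of the fast gradient sign method from Goodfellow et al.’s adversarial examples work: perturb the input by a small step in the direction of the sign of the loss gradient. The toy model and input are placeholders, not the experiments from the talk or code from cleverhans.

```python
# A minimal sketch of the fast gradient sign method (FGSM): perturb the
# input by epsilon in the direction of the sign of the loss gradient.
# The toy model and random "image" are placeholders only.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
model.eval()

x = torch.rand(1, 1, 28, 28)        # stand-in for a real image
y = torch.tensor([7])               # its true label
epsilon = 0.1                       # perturbation budget

x_adv = x.clone().requires_grad_(True)
loss = F.cross_entropy(model(x_adv), y)
loss.backward()

# FGSM step: move each pixel by +/- epsilon so as to *increase* the loss.
x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# With this untrained toy model a label flip is not guaranteed; with a
# real trained model, small epsilon often suffices to change the output.
print("clean prediction:", model(x).argmax(1).item())
print("adversarial prediction:", model(x_adv).argmax(1).item())
```

Adversarial training then amounts to generating such examples inside the training loop and including them in the loss.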

HORSE2016 was funded with support from the EPSRC through the Platform Grant on Digital Music (EP/K009559/1), and co-organised with the QMUL Applied Machine Learning Lab and Machine Listening Lab.
