HORSE2017 – On “Horses” in Applied Machine Learning | Centre for Digital Music (C4DM) | Queen Mary, University of London


Call for contributions!

Have you uncovered a “horse” in your domain?

“As an intentional nod to Clever Hans, a ‘horse’ is just a system that is not actually addressing the problem it appears to be solving.” (B. L. Sturm, “A simple method to determine if a music information retrieval system is a ‘horse’,” IEEE Trans. Multimedia 16(6):1636–1644, 2014.)

We invite presentations exploring issues surrounding “horses” in applied machine learning. One of the most famous “horses” is the “tank detector” of early neural networks research: after great puzzlement over its success, the system was found merely to be detecting sky conditions, which happened to be confounded with the ground truth. Humans can be “horses” as well, e.g., magicians and psychics. Machine learning, in contrast, does not deceive on purpose; it simply makes do with what little information it is fed about a problem domain. The onus is thus on the researcher to demonstrate the sanity of the resulting model; yet too often the evaluation of applied machine learning ends with a report of how many correct answers a system produced, not with an account of how the system arrives at its right or wrong answers in the first place.
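The tank-detector story can be sketched in a few lines. The following is a minimal illustrative toy (the feature names, thresholds, and data generator are invented for this sketch, not taken from the original study): a spurious “brightness” cue is perfectly correlated with the label in the confounded data, so a classifier that keys only on brightness looks flawless there yet collapses to chance once the confound is broken.

```python
import random

random.seed(0)

def make_sample(has_tank, confounded):
    # "brightness" is the spurious cue: in the confounded set, tank photos
    # are always bright; in the deconfounded set, brightness is independent
    # of the label.
    if confounded:
        brightness = random.uniform(0.7, 1.0) if has_tank else random.uniform(0.0, 0.3)
    else:
        brightness = random.uniform(0.0, 1.0)
    return {"brightness": brightness, "label": has_tank}

def horse_classifier(sample):
    # A "horse": predicts "tank" purely from sky brightness.
    return sample["brightness"] > 0.5

def accuracy(data):
    return sum(horse_classifier(s) == s["label"] for s in data) / len(data)

confounded = [make_sample(i % 2 == 0, confounded=True) for i in range(1000)]
deconfounded = [make_sample(i % 2 == 0, confounded=False) for i in range(1000)]

print(accuracy(confounded))    # ~1.0 on data sharing the confound
print(accuracy(deconfounded))  # ~0.5 once the confound is broken
```

A benchmark drawn from the same confounded distribution would report the first number and declare success; only the second number reveals the horse.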

We are looking for contributions to the day in the form of 20-minute talk/discussions about all things “horse”. We seek presentations from both academia and industry. Some of the deeper questions we hope to explore during the day are:

  • How can one know what their machine learning systems have actually learned?
  • How does one know if and when the internal model of a system is “sane”, or “sane enough”?
  • Is a “general” model a “sane” model?
  • When is a “horse” just overfitting? When is it not?
  • When is it important to avoid “horses”? When is it not important?
  • How can one detect a “horse” before sending it out into the real world?
  • How can one make machine learning robust to “horses”?
  • Are “horses” more harmful to academia or to industry?
  • Is the pressure to publish fundamentally at odds with detecting “horses”?
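On the detection question above, one generic probe can be sketched as follows (a hedged toy, not the specific method of Sturm’s paper; the system, features, and flip-rate statistic are invented for illustration): intervene on an attribute that should be irrelevant to the task and measure how often the system’s predictions flip.

```python
import random

random.seed(1)

# Hypothetical system under test: sees a task-relevant feature ("tempo")
# and one that should be irrelevant to genre ("loudness").
def system(features):
    # Secretly a "horse": it keys on the irrelevant feature.
    return "rock" if features["loudness"] > 0.5 else "classical"

def irrelevant_intervention(features):
    # Change only the attribute that should not matter for the task.
    changed = dict(features)
    changed["loudness"] = 1.0 - changed["loudness"]
    return changed

def horse_score(system, samples):
    # Fraction of predictions that flip under an irrelevant change.
    # Near 0: robust to the intervention. Near 1: the system is riding
    # the irrelevant cue.
    flips = sum(system(s) != system(irrelevant_intervention(s)) for s in samples)
    return flips / len(samples)

samples = [{"tempo": random.random(), "loudness": random.random()} for _ in range(200)]
print(horse_score(system, samples))  # a high flip rate exposes the horse
```

The hard part in practice is, of course, constructing interventions that are genuinely irrelevant to the task while staying on the system’s input manifold.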

Please submit your proposals (one page max, or two pages if you have nice figures) by July 15, 2017. Notification will be made by July 25, 2017. Registration will open soon after.


