As my grant winds down, I am in the final push to publish a few more articles and go on an evangelism spree.
Watch for when I am in your town!
- Wednesday Oct. 30, 18h30, OFAI, Vienna
- Wednesday Nov. 13, 15h30, MTG, Barcelona
- Monday Nov. 25, 14h00, Télécom ParisTech, Paris
- Tuesday Nov. 26, all day, AAU, Copenhagen
- Wednesday Dec. 4, 11h00, Fraunhofer IAIS, Bonn
My current show is called “The crisis of evaluation in MIR”.
Abstract: I critically address the “crisis of evaluation” in music information retrieval (MIR), with particular emphasis on music genre recognition, music mood recognition, and autotagging. I demonstrate four things: 1) many published results unknowingly use datasets with faults that render them meaningless; 2) state-of-the-art (“high classification accuracy”) systems are fooled by irrelevant factors; 3) most published results are based upon an invalid evaluation design; and 4) a lot of work has unknowingly built, tuned, tested, compared, and advertised “horses” instead of solutions. (The horse Clever Hans provides an apt illustration.) I argue these problems occur because: 1) many researchers assume a dataset is a good dataset because many others use it; 2) many researchers assume evaluation approaches that are standard in machine learning or information retrieval are useful and relevant for MIR; 3) many researchers mistake systematic, rigorous, and standardized evaluation for scientific evaluation; and 4) problems and success criteria remain ill-defined, and evaluation thus poor, because researchers do not define appropriate use cases. I show how this “crisis of evaluation” can be addressed by formalizing evaluation in MIR to make clear its aims, parts, design, execution, interpretation, and assumptions. I also present several alternative evaluation approaches that can separate horses from solutions.