Music genre recognition: On the way to answers

I have started to poke at some of the questions I raised a few months ago about music genre recognition. In particular, I want to see how the faults in this well-used dataset affect the results of classification. One might assume that these faults can only reduce the performance of algorithms; however, the faults of the dataset (1000 excerpts) are so varied that I cannot be sure until I do further testing.

For instance, with so many exact replicas (54), it is possible that in cross-validation the same features appear in both the training and test sets, which will of course inflate the mean performance in particular folds. There are also many excerpts from the same artist and/or album (e.g., 28 from Bob Marley, 24 from Britney Spears), from the same recording (12), and versions (12). Thus, the producer effect and the artist effect will inflate performance. The many mislabelings (118), though, could hurt accuracy. And in the cases where the training set has multiple copies of the same features, the training data is not as rich as it appears, which will decrease performance. All in all, the good and bad effects of the faults may cancel each other out; and indeed the results of classifiers run on the faulty dataset appear not too different from those obtained using other music genre datasets (which might have similar problems, but I am not sure). So this is an interesting question.
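
As a first concrete check, one can at least hunt for the exact replicas automatically. Below is a minimal MATLAB sketch that hashes the raw bytes of each excerpt and reports files that collide. The directory name and file extension are my assumptions, and this catches only byte-identical copies, not duplicates that were re-encoded.

    % A minimal sketch: flag byte-identical excerpts by MD5-hashing each file.
    % ASSUMPTIONS: excerpts live in 'excerpts/' as WAV files; only exact
    % byte-for-byte replicas are found, not re-encoded duplicates.
    files = dir(fullfile('excerpts','*.wav'));
    hashes = cell(numel(files),1);
    for k = 1:numel(files)
        fid = fopen(fullfile('excerpts',files(k).name),'r');
        bytes = fread(fid,inf,'*uint8');
        fclose(fid);
        md = java.security.MessageDigest.getInstance('MD5');
        hashes{k} = sprintf('%02x',typecast(md.digest(bytes),'uint8'));
    end
    [~,~,idx] = unique(hashes); % group files by hash
    counts = accumarray(idx,1); % how many files share each hash
    replicas = {files(counts(idx) > 1).name}.'; % names of all replicated excerpts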

Second, for the many algorithms proposed to recognize genre, I remain to be convinced that it is actually the music (rhythm and instrumentation, for instance), and not extramusical features (such as compression artifacts), that drives the recognition. In other words, I doubt even the simplest problems have been solved.


So, to investigate these questions, I am using the best-performing method I have found. This approach is well described in:

  • J. Bergstra, N. Casagrande, D. Erhan, D. Eck, and B. Kégl, “Aggregate features and AdaBoost for music classification,” Machine Learning, vol. 65, pp. 473–484, June 2006.
  • J. Bergstra, “Algorithms for classifying recorded music by genre,” Master’s thesis, Université de Montréal, Montréal, Canada, Aug. 2006.

Since so few details are left out of the description of this work, I needed only a few hours to program the system and find results validating those reported by Bergstra et al. (There are a few assumptions I had to make, but nothing that apparently kills the system.)

Their approach involves first finding features for 46.4 ms FRAMES (1024 samples).
These features include 40 MFCCs, the zero crossing rate, the spectral mean, variance, and 16 quantiles, and the error of an order-32 linear predictor, for 60 dimensions per frame. Then I find the mean and variance of each dimension over SEGMENTS of 129 frames, giving a 120-dimensional feature vector for each segment. No segments overlap, and so for each 30 s sound excerpt we have about 10 labeled feature vectors.
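
To make the aggregation step concrete, here is a minimal sketch of it, assuming the frame-level features (MFCCs, zero crossing rate, and the rest) have already been computed into a matrix; the variable names are mine, not Bergstra et al.'s.

    % Mean and variance of each frame-level dimension over non-overlapping
    % 129-frame segments. ASSUMES frameFeatures is a 60-by-numframes matrix
    % of per-frame features computed elsewhere.
    segLen = 129;
    numSegs = floor(size(frameFeatures,2)/segLen);
    segmentFeatures = zeros(2*size(frameFeatures,1),numSegs);
    for s = 1:numSegs
        block = frameFeatures(:,(s-1)*segLen+1 : s*segLen);
        segmentFeatures(:,s) = [mean(block,2); var(block,0,2)]; % 120 dims
    end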
Separating the songs with 5-fold stratified cross-validation, I train a set of decision stumps with AdaBoost.MH, a multiclass extension of AdaBoost. (A stump is a decision tree with two leaves.) After about 1000 training iterations, I test the classifier, and take the mode of each excerpt's roughly 10 classified segments to determine its genre. This process is made extremely easy by Multiboost and a short MATLAB-to-ARFF script I had to modify.
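
The final voting step is simple enough to sketch in a few lines; again the variable names are hypothetical: segmentPredictions holds the integer genre predicted for each segment, and excerptIds maps each segment back to its excerpt.

    % Excerpt-level decision: assign each excerpt the most common (mode)
    % genre among the predictions for its roughly 10 segments.
    % ASSUMES segmentPredictions and excerptIds are vectors with one
    % entry per segment.
    numExcerpts = max(excerptIds);
    excerptGenre = zeros(numExcerpts,1);
    for e = 1:numExcerpts
        excerptGenre(e) = mode(segmentPredictions(excerptIds == e));
    end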

Below we see the confusion matrix resulting from this method, which in this case has an accuracy of about 73.4%. This uses only decision stumps and 1000 training iterations, yet the results are in the neighborhood of the 83% accuracy reported by Bergstra et al. using a larger tree and 2500 training iterations.
[Figure: confusion matrix of the stump-based AdaBoost.MH classifier (cmatrixAdaboost.png)]
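
A quick note on the arithmetic: the overall accuracy comes straight from the confusion matrix. A sketch, where C is a hypothetical name for the matrix:

    % Overall accuracy is the mass on the diagonal of a count matrix C.
    accuracy = trace(C)/sum(C(:));
    % If C is instead row-normalized and every class has the same number
    % of excerpts, this is equivalent to the mean of the diagonal.
    accuracy = mean(diag(C));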

Now, knowing the faults in this dataset, it is not worthwhile to wax poetic on how some of these numbers make sense, e.g., “Rock is close to Country, Disco, and Metal, so those confusions are not unexpected,” or “Hiphop and Reggae share the same roots, and we can see that here.” Instead, armed with this classification method, it is time to run the process many times and see which excerpts are the repeat offenders, and how the variety of faults in this dataset benefits or hurts the results. I wonder too about the resilience of the AdaBoost.MH approach, so I will have to compare the results against another classification approach in my analysis.

2 thoughts on “Music genre recognition: On the way to answers”

  1. Thanks Alex. Here is similar MATLAB code:
    % ASSUMES these exist in the workspace: numclasses (number of genres),
    % meanconfusions (numclasses-by-numclasses confusion matrix), and
    % classes (cell array of genre names).
    % create a new figure and store handle
    screenposx = 100; % in units of pixels
    screenposy = 100;
    screenwidth = 600;
    screenheight = 500;
    % The normalized units setting means we do not need to worry about pixels
    % or inches when creating an axis
    handle_figure = figure('Position',[screenposx screenposy screenwidth ...
        screenheight],'Units','Normalized');
    % set the figure printing properties.
    % NOTE: if printing to EPS, only the last 2 values matter as they define
    % the figure size
    leftmargin = 0.1; % these are in inches
    bottommargin = 0.1;
    figurewidth = 8;
    figureheight = 5;
    set(handle_figure,'PaperOrientation','portrait', ...
        'PaperPosition',[leftmargin bottommargin figurewidth figureheight]);
    % create axes and store handle
    axeswidth = 0.86; % normalized units; 1 spans entire width
    axesheight = 0.8;
    axesmarginleft = 0.1;
    axesmarginbottom = 0.13;
    % always make the fontsize larger than the extremely small MATLAB default
    handle_axes = axes('FontSize',14,'FontName','Helvetica', ...
        'position',[axesmarginleft axesmarginbottom axeswidth axesheight]);
    doExportPlot = true; % set false to skip the EPS export below
    hold on;
    set(gcf,'Renderer','opengl');
    imagesc(1:numclasses,1:numclasses,meanconfusions);
    axis ij;
    colormap(flipud(gray));
    textStrings = num2str(meanconfusions(:)*100,'%0.2f'); % create strings from the matrix values
    textStrings = strtrim(cellstr(textStrings)); % remove any space padding
    [x,y] = meshgrid(1:numclasses); % create x and y coordinates for the strings
    hStrings = text(x(:),y(:)-0.12,textStrings(:), ... % plot the strings
        'HorizontalAlignment','center');
    midValue = mean(get(gca,'CLim')); % get the middle value of the color range
    % choose white or black for the text color of the strings so they can
    % be easily seen over the background color
    textColors = repmat(meanconfusions(:) > midValue,1,3);
    set(hStrings,{'Color'},num2cell(textColors,2),'FontWeight','bold', ...
        'FontSize',18); % change the text colors
    set(gca,'XTick',1:numclasses,'XTickLabel','', ...
        'YTick',1:numclasses,'YTickLabel','');
    % genre names above the top edge (rotated 45 degrees) ...
    th = text((1:numclasses)-0.2,repmat(0.39,1,numclasses),classes, ...
        'HorizontalAlignment','left','rotation',45,'FontSize',18, ...
        'FontName','Helvetica','FontWeight','bold');
    % ... and down the left edge (the y-axis itself is moved to the right)
    th = text(repmat(0.48,1,numclasses),1:numclasses,classes, ...
        'HorizontalAlignment','right','FontSize',18, ...
        'FontName','Helvetica','FontWeight','normal');
    set(gca,'YAxisLocation','right');
    xlabel('True Labels ','FontAngle','Oblique','FontSize',18);
    ylabel('Predicted Labels ','FontAngle','Oblique','FontSize',18);
    axis([0.5 numclasses+0.5 0.5 numclasses+0.5]); % fit axes to the number of classes
    if doExportPlot
        print(gcf,'-deps','confusion_excerpts_stump.eps');
    end
