Daily review constructed from an AcousticBrainz track analysis: Nov. 26 2014

AcousticBrainz, in partnership with MusicBrainz, aims to automatically analyze the world’s music and “provide music technology researchers and open source hackers with a massive database of information about music.” This effort is crowd-sourced neatness: people from all over the world contribute data by having their computers crunch through their MusicBrainz-IDed music libraries and automatically upload all the low-level features extracted.

I construct today’s review from low- and high-level data recently extracted from a particular music track in AcousticBrainz. Can you guess what it is? What characteristics does it have? (“Probabilities” are in parentheses.) The answer is revealed below.
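
For anyone who wants to poke at the same data: here is a minimal Python sketch, under my own assumptions, for pulling a track’s analysis out of the public AcousticBrainz API. The endpoint paths follow the API documentation at acousticbrainz.org, and the MBID below is a placeholder rather than the mystery track.

    # Minimal sketch: fetch the low- and high-level analysis for a recording,
    # given its MusicBrainz recording ID (MBID). Endpoint paths are taken from
    # the AcousticBrainz API docs; the MBID here is a placeholder.
    import json
    import urllib.request

    MBID = "00000000-0000-0000-0000-000000000000"  # placeholder, not the mystery track

    for level in ("low-level", "high-level"):
        url = "http://acousticbrainz.org/api/v1/{}/{}".format(MBID, level)
        with urllib.request.urlopen(url) as resp:
            doc = json.load(resp)
        print(level, "->", sorted(doc.keys()))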

This instrumental (0.91) but not acoustic (0.73) track in the key of F major is certainly atonal (1.0) and likely male-gendered (0.89). It is most definitely not danceable (1.0), yet has a Tango rhythm (0.995). Its mood is electronic (0.92) and party (0.64); it is definitely happy (1.0), but maybe relaxed (0.81) and might be sad (0.6). It is probably ambient (0.89) and/or blues (0.80), but could also be classical (0.45) and/or hip hop (0.38).

“The Love Scene” from Dracula (the movie) by John Williams
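
For the record, the review above is little more than a readout of the “highlevel” block of the track’s JSON document. Here is a minimal sketch of such a readout; the per-classifier “value”/“probability” layout is my assumption, based on the sample documents published at acousticbrainz.org/data.

    # Sketch: print review-style lines such as "danceability: not_danceable (1.00)"
    # from an AcousticBrainz high-level JSON document. The field layout
    # (per-classifier "value" and "probability") is an assumption based on the
    # sample documents at acousticbrainz.org/data.
    import json

    def review_lines(doc):
        # each classifier reports a winning label ("value") and its "probability"
        for name, clf in sorted(doc.get("highlevel", {}).items()):
            yield "{}: {} ({:.2f})".format(name, clf["value"], clf["probability"])

    with open("high-level.json") as fh:  # e.g. saved from the API call above
        for line in review_lines(json.load(fh)):
            print(line)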


4 thoughts on “Daily review constructed from an AcousticBrainz track analysis: Nov. 26 2014”

  1. This “atonal” descriptor invites sharp critique. While this excerpt certainly sounds quite chromatic, its tonality is clear. One would think that some partial transcription coupled with state-of-the-art symbolic analysis could detect it, but I guess that is not part of the approach. At the least the descriptor could be renamed “chromaticism” or something; but even that would fail to be descriptive if the test material were in a very simple tonality (e.g., pentatonic) with a single tritone key change.
    Perhaps all of those descriptors are quite useful on the bulk of the large dataset. Maybe it’s somewhere in the margins that they fail.
    The Tango rhythm is not that far off; I think I can hear a slow habanera somewhere in the foundation.


  2. Apparently, these high-level descriptors come from the “GaiaTransform”: http://essentia.upf.edu/documentation/algorithms_overview.html. I cannot find anything else, except a mention (http://acousticbrainz.org/data) that “The high-level data is less stable than the low-level data, since it contains more experimental algorithms. We anticipate re-calculating the high level data until it becomes fully stable.” I have no idea how mood and genre can ever be claimed to be “stable.” (Maybe, just maybe, with respect to horses. :)
    I don’t know for what part of this ever-growing dataset these high-level features are useful, and we won’t know until we see where they find use. At least they are finding use in my “daily reviews.”
    Whether or not the “Tango rhythm” is “far off” (for the whole excerpt? for any part of it?), I would like to see some experts try to dance a Tango to it. :) It is dangerous to issue apologia for algorithms when I don’t know the criteria by which they make their decisions.
    Anyhow, I look forward to meeting several members of the MusicBrainz and AcousticBrainz team next month. I think it could have some very good outcomes. :)


  3. Just curious, how are you selecting tracks to feature?
    a) random from AcousticBrainz tracks, no selection
    b) from your library, tracks that have some weird “genre” characteristics
    c) selecting AB examples with doubtful descriptions

