AcousticBrainz aims to automatically analyze the world’s music in partnership with MusicBrainz and “provide music technology researchers and open source hackers with a massive database of information about music.” The effort is crowd-sourced: people from all over the world contribute data by having their computers crunch through their MusicBrainz-IDed music libraries and automatically upload the low-level features they extract.
I construct today’s review from low- and high-level data recently extracted from a particular music track in AcousticBrainz. Can you guess what it is? What characteristics it has? (“Probabilities” are in parentheses.) The answer will be revealed tomorrow.
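If you want to pull the same kind of numbers yourself, the public AcousticBrainz API serves per-recording `high-level` and `low-level` JSON documents keyed by MusicBrainz ID. Here is a minimal sketch, assuming the documented `GET /api/v1/<mbid>/high-level` endpoint; the MBID below is a placeholder, not the mystery track, and the sample JSON is a trimmed, invented illustration of the response shape:

```python
API_ROOT = "https://acousticbrainz.org/api/v1"  # public AcousticBrainz API


def high_level_url(mbid: str) -> str:
    """Build the URL of a recording's high-level analysis document."""
    return f"{API_ROOT}/{mbid}/high-level"


def top_probabilities(doc: dict) -> dict:
    """Flatten a high-level document into {classifier: (value, probability)}."""
    return {
        name: (result["value"], result["probability"])
        for name, result in doc.get("highlevel", {}).items()
    }


# Trimmed example of the JSON shape such documents use (numbers invented here):
sample = {
    "highlevel": {
        "gender": {"value": "male", "probability": 0.99},
        "mood_relaxed": {"value": "relaxed", "probability": 0.97},
    }
}

# Placeholder MBID, purely illustrative:
print(high_level_url("00000000-0000-0000-0000-000000000000"))
print(top_probabilities(sample))
```

Fetching the URL with any HTTP client and feeding the parsed JSON to `top_probabilities` gives you the classifier outputs the review below paraphrases.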
This male-voiced (0.99) track is acoustic (0.92) and has a bright timbre (0.93). It is in D# major, yet probably atonal (0.9). It is not danceable (0.97), maybe because it is ambient (0.65), folk-country (0.49), jazz (0.4), hip hop (0.39). It is not aggressive (0.95) and not party music (0.85), but sad (0.77) and quite likely relaxed (0.97). The track’s “original date” (from MusicBrainz) is 1998.