Making sense of the folk-rnn v2 model, part 11

This is part 11 of my loose and varied analyses of the folk-rnn v2 model, which have included parts 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10! Findings from these have appeared in my MuMu 2018 paper, “What do these 5,599,881 parameters mean? An analysis of a specific LSTM music transcription model, starting with the 70,281 parameters of the softmax layer”, and my presentation at the Joint Workshop on Machine Learning for Music ICML 2018: “How Stuff Works: LSTM Model of Folk Music Transcriptions” (slides here). There’s still a lot that can be done here to pick apart and understand what is going on in the model. It’s a fascinating exercise, trying to interpret this complex model. In this part, I start looking at beam search as an alternative approach to sampling from the model.

Let’s briefly return to what folkrnn v2 really is: a model of a joint probability mass distribution P_{X_1,X_2, \ldots}(x_1,x_2, \ldots), where X_t: \mathcal{V} \to \{1, \ldots, 137\} is a random variable at step t \in \{1, 2, \ldots\}, and the vocabulary \mathcal{V} is a set of ABC-like tokens. By the chain rule, this distribution can be equivalently written:

P_{X_1,X_2, \ldots}(x_1,x_2, \ldots) = P_{X_1}(x_1)P_{X_2|X_1}(x_2|x_1) P_{X_3|X_1,X_2}(x_3|x_1, x_2) P_{X_4|X_1,X_2,X_3}(x_4|x_1, x_2, x_3)\ldots

Training a folkrnn model involves adjusting its parameters to maximize each one of these conditional probabilities for real sequences sampled from a dataset. When generating new sequences, we merely sample from each estimate of P_{X_t|X_1,X_2,\ldots, X_{t-1}}(x_t|x_1, x_2, \ldots, x_{t-1}), until we have sampled the stop token. This produces, for instance, the following transcription:

Another random sampling will give a different sequence. This is how folkrnn.org is implemented, one token sampled at a time from a dynamic posterior distribution over the vocabulary.
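
To make that procedure concrete, here is a minimal sketch of token-by-token sampling in Python. The model object and its start and step methods are hypothetical stand-ins for the folk-rnn v2 network, not its actual API: step consumes one token, updates the LSTM states, and returns the posterior distribution over the vocabulary.

```python
import numpy as np

def sample_sequence(model, vocab, stop_token, rng=np.random):
    """Sample one token at a time until the stop token appears."""
    sequence = []
    p = model.start()                   # P_{X_1}(x_1) over the 137-token vocabulary
    while True:
        token = rng.choice(vocab, p=p)  # draw x_t from the current posterior
        sequence.append(token)
        if token == stop_token:
            return sequence
        p = model.step(token)           # P_{X_{t+1} | x_1, ..., x_t}
```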

We can factor the joint probability distribution another way, however:

P_{X_1,X_2, \ldots}(x_1,x_2, \ldots) = P_{X_1,X_2}(x_1,x_2) P_{X_3,X_4|X_1,X_2}(x_3,x_4|x_1, x_2) P_{X_5,X_6|X_1,X_2,X_3,X_4}(x_5,x_6|x_1, x_2, x_3,x_4)\ldots

These are distributions over pairs of random variables. While folkrnn v2 was not explicitly trained to maximize these for real sequences from the training data, I believe the use of teacher forcing means it was equivalently trained to maximize these joint probabilities. This factorization also suggests a different way to generate sequences with folkrnn v2, one that arguably considers more context during generation:

  1. compute P_{X_1}(x_1)
  2. compute P_{X_2|X_1}(x_2|x_1) for each x_1 \in \mathcal{V}
  3. compute P_{X_1,X_2}(x_1,x_2) = P_{X_2|X_1}(x_2|x_1)P_{X_1}(x_1) for each (x_1,x_2) \in \mathcal{V}\times\mathcal{V}
  4. sample a pair of tokens from this distribution
  5. update the states in the model with the sampled tokens, and repeat the above until one of the sampled tokens is the stop token (see the sketch below).
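
Here is a sketch of that five-step procedure, under the same hypothetical model interface as above, plus an assumed peek method that returns the next-token distribution after a given token without committing the state change.

```python
import numpy as np

def sample_pair(model, vocab, rng=np.random):
    """Sample two tokens at once from the joint P_{X_1,X_2} over V x V."""
    V = len(vocab)
    p1 = model.start()                      # step 1: P_{X_1}(x_1)
    joint = np.empty((V, V))
    for i, x1 in enumerate(vocab):
        p2 = model.peek(x1)                 # step 2: P_{X_2|X_1}(x_2 | x_1)
        joint[i, :] = p1[i] * p2            # step 3: chain rule gives P_{X_1,X_2}(x_1, x_2)
    p = joint.ravel()
    k = rng.choice(V * V, p=p / p.sum())    # step 4: sample a pair of tokens
    i, j = divmod(k, V)
    model.step(vocab[i])                    # step 5: update the states with both tokens...
    model.step(vocab[j])                    # ...then repeat until a stop token is sampled
    return vocab[i], vocab[j]
```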

In this situation, we are computing probability distributions over the sample space \mathcal{V} \times \mathcal{V}. For a vocabulary of 137 tokens, this has 18,769 outcomes. We don’t need to stop at pairs of tokens: we can also factor the joint probability distribution using tuples of size 3, which leads to a sample space of 2,571,353 outcomes. Pretty quickly, however, our sample space grows into the trillions of outcomes … so there’s a limit here given the time I have left to live. We can, however, bring things under control by approximating these joint probability distributions.
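
A quick check of how fast the sample space grows with the tuple size, for |\mathcal{V}| = 137:

```python
# Size of the sample space for n-tuples of tokens from a 137-token vocabulary.
V = 137
for n in range(1, 7):
    print(n, V ** n)
# 1 137
# 2 18769
# 3 2571353
# 4 352275361
# 5 48261724457
# 6 6611856250609   # past a trillion by n = 6
```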

Let’s think of this computational procedure as building a tree. For the general case of n-tuples of tokens, we create the first |\mathcal{V}| branches by computing P_{X_1}(x_1). From each of those branches we create |\mathcal{V}| more branches by computing P_{X_2|X_1}(x_2|x_1). We continue in this way to a depth of n, and create all the leaves of the tree by computing, for each leaf, the product of the values on the branches above it:

P_{X_1}(x_1)P_{X_2|X_1}(x_2|x_1) \ldots P_{X_n|X_1,X_2,\ldots, X_{n-1}}(x_n|x_1, x_2, \ldots,x_{n-1}) = P_{X_1,X_2, \ldots, X_n}(x_1,x_2, \ldots, x_n)

Thinking of the procedure in this way motivates a question: how well can we estimate P_{X_1,X_2, \ldots, X_n}(x_1,x_2, \ldots, x_n) by searching fewer than |\mathcal{V}|^n leaves? We know from the expression above that we are multiplying conditional probabilities, so if at some depth one of those is zero, or very small compared to the others, we might as well trim that branch then and there. A stricter possibility is to search only the \beta branches at each depth that have the greatest probability. In that case, computing the probability distribution for tuples of size 3 requires computing \beta^3 leaves instead of |\mathcal{V}|^3 = 2,571,353. Having computed those leaves, we ignore all the others and rescale the joint probability mass to sum to unity. That strategy is known as beam search. Using a beam width of \beta = \infty computes all of the leaves at every depth. Making the width smaller saves computation, but at the price of a poorer estimate of the joint probability distribution. Let’s see how sampling from folkrnn v2 in this way performs.
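
Below is a sketch of this beam search over n-tuples. It assumes a hypothetical model.peek_after(prefix) helper that returns the next-token distribution after feeding the tokens in prefix, without permanently changing the model state. From each partial tuple we keep only the \beta most probable extensions, so after n steps there are at most \beta^n leaves; passing beta=None plays the role of \beta = \infty.

```python
import numpy as np

def beam_joint(model, vocab, n, beta=None):
    """Approximate P_{X_1,...,X_n} by expanding only the top-beta branches at each depth."""
    beams = [((), 1.0)]                          # (partial tuple, probability of that branch)
    for depth in range(n):
        new_beams = []
        for prefix, prob in beams:
            p = model.peek_after(prefix)         # P_{X_{depth+1} | prefix}
            order = np.argsort(p)[::-1]          # tokens sorted by decreasing probability
            keep = order if beta is None else order[:beta]
            for i in keep:
                new_beams.append((prefix + (vocab[i],), prob * p[i]))
        beams = new_beams
    tuples, probs = zip(*beams)
    probs = np.asarray(probs)
    return list(tuples), probs / probs.sum()     # rescale over the surviving leaves

# Sampling one n-tuple from the (approximate) joint distribution:
# tuples, probs = beam_joint(model, vocab, n=2, beta=None)
# chosen = tuples[np.random.choice(len(tuples), p=probs)]
```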

The following transcription was generated using tuples of size n=2 and a beam width of \beta = \infty (meaning we compute the probability distribution over the entire sample space \mathcal{V}\times\mathcal{V}):

[transcription image]

Though we use the same random initialization as “Why are you …” above, this produces a dramatically different transcription. The first pair of tokens selected are the meter and the mode, both different from before. Repeat bar lines are missing at the end of the two parts, but as a whole the tune holds together, with repetition and variation of simple ideas, and cadences in the right places. Here I play it at a slow pace on the Mean Green Machine Folk Machine, imposing an AABB structure:

Here’s another transcription produced with a different random seed initialization.

[transcription image]

There are no obvious mistakes and it doesn’t appear to be copied from the training data. And it’s a nice tune, playing well as a hornpipe. Here it is on Mean Green Machine Folk Machine:

Here’s a jig created by seeding the network with a 6/8 meter token:

[transcription image]

Another good tune that appears to be novel. Here it is on Mean Green Machine Folk Machine:

In the next part we will look at making \beta < \infty.
