Part 1 is here, and part 2 is here. Having defined our measurement model, it is time we estimate its parameters.

Given $K$ scores for each of the $N$ wines, e.g., the collection of scores in Table 1, we wish to estimate the deviation $d_i$ for each wine and the mean $\mu$ common to all wines. Equivalently, we wish to estimate $\mu_i = \mu + d_i$. By the method of least squares, and using the constraint $\sum_i d_i = 0$ that defines the deviations, we find $\hat{\mu}$ by minimising the sum of deviation squares:

$$S = \sum_{i=1}^{N}\sum_{k=1}^{K}\left(x_{ik} - \mu - d_i\right)^2, \qquad \hat{\mu} = \frac{1}{NK}\sum_{i=1}^{N}\sum_{k=1}^{K}x_{ik}.$$
Differentiating the sum of residual squares with respect to $d_i$, we find the remaining parameters:

$$\hat{d}_i = \frac{1}{K}\sum_{k=1}^{K}x_{ik} - \hat{\mu}.$$
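As a concrete sketch of these least-squares estimates (the score table and variable names below are my own illustrative choices, not values from Table 1), with the scores arranged as an $N \times K$ array:

```python
import numpy as np

# Hypothetical score table: N = 4 wines (rows), K = 4 scores each (columns).
# These numbers are made up for illustration; they are not Table 1.
scores = np.array([
    [88, 90, 89, 91],
    [85, 86, 84, 87],
    [92, 90, 93, 91],
    [86, 88, 87, 85],
], dtype=float)

mu_hat = scores.mean()                  # grand mean over all N*K scores
mu_i_hat = scores.mean(axis=1)          # per-wine means, estimating mu + d_i
d_hat = mu_i_hat - mu_hat               # per-wine deviations; they sum to zero
residuals = scores - mu_i_hat[:, None]  # e_hat_ik = x_ik - mu_hat_i

print(mu_hat)  # 88.25
print(d_hat)
```

By construction the $\hat{d}_i$ sum to zero, so only $N - 1$ of them are free parameters.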
Then our best approximation to $\mu_i$ is $\hat{\mu}_i = \hat{\mu} + \hat{d}_i$, and the residual is $\hat{e}_{ik} = x_{ik} - \hat{\mu}_i$. The expectations of these estimators are:

$$\mathrm{E}[\hat{\mu}] = \mu + \frac{1}{NK}\sum_{i=1}^{N}\sum_{k=1}^{K}\mathrm{E}[e_{ik}], \qquad \mathrm{E}[\hat{\mu}_i] = \mu_i + \frac{1}{K}\sum_{k=1}^{K}\mathrm{E}[e_{ik}],$$

$$\mathrm{E}[\hat{d}_i] = d_i + \frac{1}{K}\sum_{k=1}^{K}\mathrm{E}[e_{ik}] - \frac{1}{NK}\sum_{j=1}^{N}\sum_{k=1}^{K}\mathrm{E}[e_{jk}],$$
and, assuming independent noise terms, their variances are easily seen to be:

$$\mathrm{Var}(\hat{\mu}) = \frac{1}{(NK)^2}\sum_{i=1}^{N}\sum_{k=1}^{K}\mathrm{Var}(e_{ik}), \qquad \mathrm{Var}(\hat{\mu}_i) = \frac{1}{K^2}\sum_{k=1}^{K}\mathrm{Var}(e_{ik}),$$

$$\mathrm{Var}(\hat{d}_i) = \left(\frac{1}{K} - \frac{1}{NK}\right)^{2}\sum_{k=1}^{K}\mathrm{Var}(e_{ik}) + \frac{1}{(NK)^2}\sum_{j \neq i}\sum_{k=1}^{K}\mathrm{Var}(e_{jk}).$$
We see that increasing $K$ decreases the variance of each of these estimators, and increasing $N$ further decreases the variance of $\hat{\mu}$. If, for all wines and scores, the noise $e_{ik}$ has zero mean, then these estimators are unbiased. If, for all wines and scores, $e_{ik}$ is iid with variance $\sigma^2$, then the above become:

$$\mathrm{Var}(\hat{\mu}) = \frac{\sigma^2}{NK}, \qquad \mathrm{Var}(\hat{d}_i) = \frac{(N-1)\,\sigma^2}{NK}, \qquad \mathrm{Var}(\hat{\mu}_i) = \frac{\sigma^2}{K}.$$
This clearly shows how the uncertainty in our parameter estimates depends on both the number of wines $N$ and the number of scores $K$ for each wine.
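As a quick worked instance (the noise variance $\sigma^2 = 4$ is an assumed value for illustration), with the four wines and four scores per wine of the experiment below:

```python
# Evaluating the iid-noise variance formulas for an assumed sigma^2 = 4,
# with N = 4 wines and K = 4 scores per wine.
N, K, sigma2 = 4, 4, 4.0

var_mu = sigma2 / (N * K)             # Var(mu_hat)   = 0.25
var_d = (N - 1) * sigma2 / (N * K)    # Var(d_hat_i)  = 0.75
var_mu_i = sigma2 / K                 # Var(mu_hat_i) = 1.0

print(var_mu, var_d, var_mu_i)  # 0.25 0.75 1.0
```

Quadrupling the number of scores per wine halves each standard deviation, while adding more wines mainly sharpens $\hat{\mu}$.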

The figure above shows a simulation. We randomly draw four independent scores from each of four Gaussian distributions with means $\mu_i = \mu + d_i$ and common variance $\sigma^2$, rounding each draw to the nearest integer score. We then estimate the parameters. We repeat this experiment 1,000,000 times and construct distributions of the estimates to investigate their behaviour. The figure below shows these distributions for each parameter.
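A minimal sketch of such an experiment (the values of $\mu_i$ and $\sigma$, and the smaller number of repetitions, are my assumptions for illustration; the post does not list them here):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 4, 4                                    # wines, scores per wine
mu_true = np.array([89.5, 85.5, 91.5, 86.5])   # assumed true means mu + d_i
sigma = 2.0                                    # assumed noise standard deviation
reps = 20_000                                  # fewer than 1,000,000, for speed

# Draw all scores at once, shape (reps, N, K), rounding to integer scores.
scores = np.rint(rng.normal(mu_true[None, :, None], sigma, size=(reps, N, K)))

mu_hat = scores.mean(axis=(1, 2))              # estimate of mu, per repetition
d_hat = scores.mean(axis=2) - mu_hat[:, None]  # estimates of d_i, per repetition

# Observed spread versus the iid predictions sigma^2/(NK) and (N-1)sigma^2/(NK).
print(mu_hat.var(), sigma**2 / (N * K))
print(d_hat.var(axis=0), (N - 1) * sigma**2 / (N * K))
```

With the rounding in place, the observed variances come out slightly above the iid predictions, which is the discrepancy examined next.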

While we see that the variances of the deviation parameters are larger than that of the mean parameter, two observations are contrary to our predictions above. First, there appears to be bias in the estimates even though the noise is zero mean; the bias we observe pulls each parameter toward the nearest integer of its true value. Second, the variances we observe are larger than those we predict. The cause of both differences is that our model of the measurements is not quite accurate: when we do not restrict the scores to integers, but allow any real number, these differences are greatly diminished. A more accurate model of our measurements instead accounts for all scores being, in fact, integers:

$$x_{ik} = \left\lfloor \mu + d_i + e_{ik} \right\rceil,$$

where $\lfloor \cdot \rceil$ denotes rounding to the nearest integer.
However, this model does not lend itself as easily to the analysis above. Nonetheless, our simulations show that our estimates are reasonably well behaved, though we may need to account for this discrepancy when we draw inferences.
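To see the integer effect directly, we can compare estimates from rounded and unrounded draws. This is a sketch with assumed means and a deliberately small noise level, chosen so the pull toward the nearest integer is easy to see:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, reps = 4, 4, 20_000
sigma = 0.3                                    # assumed: small noise exaggerates the bias
mu_true = np.array([89.3, 85.6, 91.2, 86.9])   # assumed true per-wine means

raw = rng.normal(mu_true[None, :, None], sigma, size=(reps, N, K))

# Average error of the per-wine mean estimates, with and without rounding.
bias_int = np.rint(raw).mean(axis=2).mean(axis=0) - mu_true
bias_real = raw.mean(axis=2).mean(axis=0) - mu_true

print("integer scores:", bias_int)   # each pulled toward 89, 86, 91, 87
print("real scores:   ", bias_real)  # all close to zero
```

The larger the noise is relative to the one-point rounding step, the smaller this bias becomes.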

In part 4 we will look at the use of null hypothesis significance testing to determine if there is a significant difference between the wine parameters in Table 1. Then, part 5 will reveal a fatal flaw.
