Those of you who have already tested the variance of normally distributed data may have asked yourselves how the link between the normal variance and the chi-squared distribution arises. Trust me: it is an exciting story!

First, think about what happens when you sample. Each value you draw is itself a random variable. An estimate of the variance S^{2} of the data you are sampling from can therefore be calculated by applying the following formula to the n randomly sampled values X_{i}.

\displaystyle S^{2}=\frac{1}{n-1}\sum_{i=1}^{n}\Big(X_{i}-\overline{X}\Big)^{2}
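
If you want to see this formula in action, here is a minimal sketch in Python (assuming NumPy is installed; the seed, mean, standard deviation, and sample size are arbitrary choices for the example):

```python
import numpy as np

rng = np.random.default_rng(seed=42)          # arbitrary seed for reproducibility
x = rng.normal(loc=10.0, scale=2.0, size=50)  # n = 50 hypothetical normal samples
n = len(x)

# Sample variance via the formula above
s2_manual = np.sum((x - x.mean()) ** 2) / (n - 1)

# NumPy's built-in equivalent: ddof=1 gives the n-1 denominator
s2_numpy = x.var(ddof=1)

print(s2_manual, s2_numpy)  # identical up to floating-point rounding
```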

Assuming that the data are normally distributed, we can z-transform them. \overline{X} already serves as an estimate of the population mean; the population variance \sigma^{2} (and hence the standard deviation \sigma) must be assumed known. The resulting values Z_{i} are then standard normally distributed.

\displaystyle Z_{i}=\frac{X_{i}-\overline{X}}{\sigma}
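
A small numerical aside, continuing the sketch above (again, sigma and the sample are assumptions of the example): because we subtract the sample mean rather than the true mean, the Z_{i} always sum to exactly zero. Keep that in mind; it will matter for the degrees of freedom below.

```python
sigma = 2.0                 # population standard deviation, assumed known
z = (x - x.mean()) / sigma  # the transformed values Z_i

# Subtracting the sample mean (not the true mean mu) imposes one linear
# constraint on the Z_i: they sum to zero.
print(z.sum())  # ~0 up to floating-point rounding
```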

Substituting the above expression into the formula for the estimated variance S^{2} gives

\displaystyle S^{2}=\frac{1}{n-1} \sum_{i=1}^{n}\Big(\sigma Z_{i}\Big)^{2}

Taking \sigma^{2} out of the sum leaves a sum of squared standard normally distributed random variables, and such a sum is, by definition, chi-squared distributed! Because we performed the z-transformation with an estimate of the population mean, namely \overline{X}, the Z_{i} satisfy one linear constraint (as noted above, they sum to zero), so only n-1 of them can vary freely. We should therefore reduce the degrees of freedom of the chi-squared distribution to n-1 instead of n. Hence,

\displaystyle \frac{(n-1)S^{2}}{\sigma^{2}}= \sum_{i=1}^{n} Z_{i}^{2}

\displaystyle \frac{(n-1)S^{2}}{\sigma^{2}} \sim \chi^{2}(n-1)
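
You can verify this result by simulation. The following sketch (assuming NumPy and SciPy; all parameters are arbitrary) draws many samples, computes (n-1)S^{2}/\sigma^{2} for each, and compares the result against the \chi^{2}(n-1) distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
mu, sigma, n = 10.0, 2.0, 8   # hypothetical population parameters and sample size
n_sim = 100_000               # number of simulated samples

# For each simulated sample, compute (n-1) * S^2 / sigma^2
samples = rng.normal(mu, sigma, size=(n_sim, n))
s2 = samples.var(axis=1, ddof=1)
statistic = (n - 1) * s2 / sigma**2

# A chi-squared(k) distribution has mean k and variance 2k
print(statistic.mean())  # should be close to n - 1 = 7
print(statistic.var())   # should be close to 2(n - 1) = 14

# Kolmogorov-Smirnov test against chi-squared(n-1): expect a large p-value
print(stats.kstest(statistic, stats.chi2(df=n - 1).cdf))
```

With 100,000 simulated samples, the empirical mean and variance land very close to the theoretical values, which is a quick sanity check that the degrees-of-freedom correction to n-1 is indeed the right one.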

I really like this way of illustrating the relationship. What do you think?