*The first part of this material should be accessible to a fourth-grader, the latter part to a middle-schooler. The initial Mental Math trick can be taught without algebra, even though in order to describe the method on paper we found it useful to introduce some notation.*

For us a *distribution*, or simply the *data*, is a finite string of numbers, such as

$$12, \; 15, \; 13, \; 16, \; 14.$$
These numbers could represent the weights in kilograms of five kids, or the monetary amounts in the pockets of five friends. The specific context only comes into play when interpreting the statistics. *Statistics* are computations that reveal some information about the data, but also “forget” about most of the distinguishing features of a given distribution.

For instance, the *mean*, or *average*, of the distribution is computed by adding up all the data into a sum

$$12 + 15 + 13 + 16 + 14 = 70,$$

and then dividing it up equally among the number of data:

$$\frac{70}{5} = 14.$$

Some algebra helps clarify the phrasing here. The first piece of data is usually represented by the letter $x_1$, the second by $x_2$, etc., and the last one by $x_n$. This makes it clear that we are dealing with $n$ numbers. For instance, if we are dealing with the numbers $12, 15, 13, 16, 14$ then we would think of $x_1 = 12$, $x_2 = 15$, $x_3 = 13$, $x_4 = 16$, $x_5 = 14$. The number of data here is equal to $n = 5$ and the sum is $x_1 + x_2 + x_3 + x_4 + x_5 = 70$. The average is usually written as $\bar{x}$ and in this case it would be computed by dividing $70$ by $5$.

The task of adding up all the data seems daunting at first, but here is a trick that allows you to do the calculation “mentally”.

- First, **make a guess**. If you haven’t learned about negative numbers, then always make your guess the smallest datum. In our example, your guess would then be $12$. The notation for the guess is $g$.
- Now **subtract off** your initial guess from each of the data and get a new set of data that will be easier to manage. In our example with guess $g = 12$, the new distribution is $0, 3, 1, 4, 2$. We give a name to these new data: we will call them the *errors* and write $e_i = x_i - g$.
- Now we **average the errors**. In our example it is much easier now to add up the errors and get $0 + 3 + 1 + 4 + 2 = 10$. Even though $e_1$ is zero it still counts as a piece of data, so we still must divide by $5$. Hence we get that the errors average to $10/5$, or $2$.
- Finally, **add the averaged errors to your initial guess**, and voilà! For us:

$$\bar{x} = g + \bar{e} = 12 + 2 = 14.$$
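The steps of the trick can be sketched in a few lines of Python; the weights here are made-up example data, and the function name is just for illustration:

```python
def mean_by_guessing(data, guess):
    """Average `data` by guessing, averaging the errors, and correcting the guess."""
    errors = [x - guess for x in data]      # subtract off the guess
    avg_error = sum(errors) / len(errors)   # average the errors
    return guess + avg_error                # add the averaged errors back

weights = [12, 15, 13, 16, 14]              # made-up weights in kilograms
print(mean_by_guessing(weights, 12))        # guess the smallest datum -> 14.0
print(mean_by_guessing(weights, 100))       # any other guess gives the same mean -> 14.0
```

Notice that a wildly wrong guess still lands on the same answer, which is exactly the point of the algebra below.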

You can practice this strategy on a few more examples and then you will be able to impress friends and family with amazing mental skills. Try and ask your mom to give you four numbers between $1$ and $100$. She’ll say something like: $23, 41, 37, 55$. Give her a calculator and ask her to average these numbers (she should be able to do that) but tell her not to tell you the answer. Now guess a number roughly in between, say $40$. Subtract it off and get $-17, 1, -3, 15$. These add up to $-4$. What luck! $-4/4 = -1$. So the average is $40 - 1 = 39$!

Why does this trick work? Why does it work no matter what your initial guess is? The best way to explain this is using some algebra. Luckily we’ve already set up all the necessary notation. Suppose you want to average data $x_1, x_2, \ldots, x_n$. Make a guess $g$. Subtract it off and get a new data set of errors $e_i = x_i - g$. Now average these errors:

$$\bar{e} = \frac{(x_1 - g) + (x_2 - g) + \cdots + (x_n - g)}{n}.$$

Getting rid of the parentheses and rearranging, we find that

$$\bar{e} = \frac{x_1 + x_2 + \cdots + x_n}{n} - \frac{ng}{n} = \bar{x} - g.$$

So when we add $\bar{e}$ to our initial guess $g$, we see that $g$ cancels out:

$$g + \bar{e} = g + (\bar{x} - g) = \bar{x}.$$
Is it possible to guess the average right the first time? Yes of course. In that case $g = \bar{x}$ and when you go and average the errors you find that $\bar{e} = 0$. In fact this property characterizes the mean:

$\bar{x}$ is the only number that makes all the (signed) errors add up to $0$.
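As a quick sanity check of this characterization (on made-up data): the signed errors about the mean cancel exactly, while the errors about any other guess leave a nonzero total.

```python
data = [12, 15, 13, 16, 14]              # made-up data
mean = sum(data) / len(data)             # 14.0

signed_errors = [x - mean for x in data]
print(sum(signed_errors))                # 0.0: errors about the mean cancel

off_errors = [x - 13 for x in data]      # errors about any other guess...
print(sum(off_errors))                   # ...leave a nonzero total: 5
```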

This property of $\bar{x}$ explains the physical intuition that is often given for the mean. Think of $x_1, \ldots, x_n$ as the places along the number line where unit weights are lying on a thin tray. Then try to place a wedge (‘fulcrum’) under the tray pointing at some point with coordinate $g$. The tray will balance only when all the signed distances to $g$ add up to zero, namely when $g = \bar{x}$. Otherwise, the tray will crash to the floor.

Let’s go back to the interpretation of the mean in specific examples. When the $x_i$ represent amounts of money, then the mean ($\bar{x}$ dollars and cents) represents what everyone would end up with if we tried to redistribute the money, “level the playing field”, in such a way that everyone has the same amount. That amount is the mean. Clearly this interpretation fails if the $x_i$ were representing heights instead. In no way could we redistribute heights.

So in applications the interpretation of the mean must vary from context to context, and sometimes the information that is lost from the data when computing the mean might overshadow whatever “statistic” is obtained. Unfortunately, in politics and in the social sciences, too often the error is made of speaking as if the mean represented everything one would want to know about a specific data set.

Of course statisticians have a partial answer to this problem. If it’s true that two quite different data sets may share the same average (thus losing all the information that distinguishes the two data sets), we can come up with a way of measuring how “dispersed” a data set may be around its average. This is a “second order” analysis. Let’s consider again our friends, the errors $e_i = x_i - \bar{x}$. We know that they add up to zero, but if we remove the signs and consider instead $|e_i| = |x_i - \bar{x}|$, i.e., the distances of each piece of data from the average, how do they behave? What is their average? In words that would be “the average distance from the average”. There is a term for this quantity; it’s called the *mean deviation*:

$$\mathrm{MD} = \frac{|x_1 - \bar{x}| + |x_2 - \bar{x}| + \cdots + |x_n - \bar{x}|}{n}.$$
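The mean deviation is a one-liner to compute; here is a sketch on made-up data:

```python
data = [12, 15, 13, 16, 14]      # made-up data
mean = sum(data) / len(data)     # 14.0

# the average distance from the average
mean_dev = sum(abs(x - mean) for x in data) / len(data)
print(mean_dev)                  # 1.2
```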
Mathematicians, it turns out, are not satisfied with this. Instead of just removing the sign of the errors $e_i$, we’d rather do it simply by “squaring” the errors. So instead of the mean deviation, we prefer to compute the *variance*:

$$\mathrm{Var} = \frac{(x_1 - \bar{x})^2 + (x_2 - \bar{x})^2 + \cdots + (x_n - \bar{x})^2}{n},$$

and then, to make amends, we take the square root of the variance and call that the *standard deviation*: $\sigma = \sqrt{\mathrm{Var}}$.
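In code, the variance and standard deviation look like this (again on made-up data; note that this is the population variance, dividing by $n$):

```python
import math

data = [12, 15, 13, 16, 14]      # made-up data
mean = sum(data) / len(data)     # 14.0

variance = sum((x - mean) ** 2 for x in data) / len(data)
std_dev = math.sqrt(variance)
print(variance, std_dev)         # 2.0 and sqrt(2) = 1.4142...
```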

In words, the standard deviation is “the square root of the average square-distance from the average”. Why on earth would one want to square the errors? There are many deep reasons for this, and appealing to a vague resemblance to the Pythagorean Theorem would go a long way toward explaining it. Instead, let me give you an idea of why the variance is better by doing a simple calculation.

What happens if we make an initial guess which turns out not to be the right one: $g \neq \bar{x}$, and then we happily go ahead and start averaging the square distances to $g$ instead? What can we say about the number we would end up computing in relation to the variance? It turns out that, no matter what our initial guess $g$ is, we would always get something larger than the variance. In other words, we get another characterization of the mean:

$\bar{x}$ is the unique value of $g$ that minimizes the sum of the square errors $(x_1 - g)^2 + (x_2 - g)^2 + \cdots + (x_n - g)^2$.

To see this, let’s focus on one term of the sum at a time, say the first one. We want to compare $(x_1 - g)^2$ to $(x_1 - \bar{x})^2$. Let’s take the difference! Then we can use the remarkable identity

$$a^2 - b^2 = (a - b)(a + b).$$

This can be checked simply by unfolding the right-hand side.

We get, with $a = x_1 - g$ and $b = x_1 - \bar{x}$,

$$(x_1 - g)^2 - (x_1 - \bar{x})^2 = \big((x_1 - g) - (x_1 - \bar{x})\big)\big((x_1 - g) + (x_1 - \bar{x})\big) = (\bar{x} - g)\,(2x_1 - g - \bar{x}).$$
The same exact computation holds with $x_1$ replaced by any other $x_i$. So adding these identities up and factoring out the common term $(\bar{x} - g)$, we get

$$\big[(x_1 - g)^2 + \cdots + (x_n - g)^2\big] - \big[(x_1 - \bar{x})^2 + \cdots + (x_n - \bar{x})^2\big] = (\bar{x} - g)\big[2(x_1 + \cdots + x_n) - ng - n\bar{x}\big] = n\,(\bar{x} - g)^2,$$

where I used the fact that $x_1 + x_2 + \cdots + x_n = n\bar{x}$, so that $2(x_1 + \cdots + x_n) - ng - n\bar{x} = 2n\bar{x} - ng - n\bar{x} = n(\bar{x} - g)$.

What this shows is that if we go ahead and compute the average square error having made a guess $g$ (that is, if we divide the sum of the square errors $(x_i - g)^2$ by $n$), we always get a larger quantity than the variance, and in fact we overshoot exactly by $(\bar{x} - g)^2$:

$$\frac{(x_1 - g)^2 + \cdots + (x_n - g)^2}{n} = \mathrm{Var} + (\bar{x} - g)^2.$$

The magic of squares!
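A quick numeric check of this overshoot, on made-up data and an arbitrary wrong guess:

```python
data = [12, 15, 13, 16, 14]      # made-up data
n = len(data)
mean = sum(data) / n             # 14.0
variance = sum((x - mean) ** 2 for x in data) / n   # 2.0

guess = 10                       # any wrong guess
avg_sq_error = sum((x - guess) ** 2 for x in data) / n

# the average square error exceeds the variance by exactly (mean - guess)^2
print(avg_sq_error)                       # 18.0
print(variance + (mean - guess) ** 2)     # 18.0
```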
