
Statistical Significance and the Magic Formula

by Lumilog on January 26, 2009

Note: Please read the disclaimer. The author is not providing professional investing advice or recommendations.

It’s been a while since I first read Joel Greenblatt’s The Little Book That Beats the Market, and was bowled over by his Magic Formula’s historical returns versus the S&P 500 …


But one thing that bothered me when reading Greenblatt’s book was my memory of the Foolish Four. It was a similar market-trouncing “magic formula” that gained popularity in the late ’90s, only to later be discredited and deemed an artifact of data-mining.

My understanding is that the wind was taken out of the Foolish Four’s sails by investigating its performance over a longer time period. For example, its yearly excess return over the Dow goes from 10% to something closer to 2% when back-tested over 50 years instead of the original 20 years. So of course that history lingers as a nagging doubt in the back of my mind when I see the Magic Formula’s measly 17-year sample size. :(

And for those hung up on the fact that even a 2% alpha is respectable for the Foolish Four, it shrinks even further when you factor in the capital gains taxes incurred by having to shuffle the deck, as it were, each year.

Learning quantitative techniques to analyze problems such as these (i.e. is 17 years’ worth of data enough to conclude that the Magic Formula outperforms the S&P 500?) is exactly why I enrolled in the CFA program. And sure enough, a couple hundred pages into the first Level One study guide, the curriculum covers precisely this sort of conundrum.

So here is how it works. We do what is called a hypothesis test in statistics, whereby we hypothesize that the average yearly returns of the Magic Formula and the S&P 500 are actually identical! Then we do a few computations based on confidence intervals to see whether or not we can reject that hypothesis, and with what degree of confidence. As a big fan of the Magic Formula, I have to say I’m secretly hoping that it does have a statistically significant higher average yearly return than the S&P 500…

First, Some Assumptions…
Now in order to proceed, we have to make two assumptions. The first is that the yearly returns of these two investing techniques follow a mostly normal distribution. We’re supposed to feel comfortable making this assumption because of the central limit theorem. Briefly, it says that when you add up (or average) a large number of independent random variables, the result tends to look like a bell curve, no matter how the individual variables themselves are distributed.
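To get a feel for why that’s plausible, here’s a quick throwaway sketch in Python (my own illustration, not something from the book or the CFA curriculum): averaging just 50 draws from a badly skewed distribution already produces something bell-shaped.

import numpy as np

np.random.seed(0)
# Each of the 10,000 rows is 50 draws from an exponential distribution,
# which is heavily skewed and looks nothing like a bell curve.
draws = np.random.exponential(scale=1.0, size=(10000, 50))
# But the 10,000 row-averages cluster symmetrically around 1.0,
# with a spread of roughly 1/sqrt(50) = 0.14, i.e. approximately normal.
averages = draws.mean(axis=1)
print(averages.mean(), averages.std())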

The second assumption we’ll make is that the returns of the Magic Formula and the S&P 500 are not independent. The CFA study guide advises that when comparing two investing strategies covering the same period of time, the returns of both depend on the same underlying market and economic forces present at that time, and therefore have some things in common. That’s why, in the steps below, we work with the year-by-year differences between the two strategies rather than treating them as two independent samples.

Step #1: Sample Mean Difference
The first step is to compute the average difference of the yearly returns between the two strategies. This turns out to be 19.04%.
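If you’d rather script it than use a spreadsheet, here’s a minimal Python sketch of this step. The names mf_returns and sp500_returns are just placeholders for the yearly return series from Greenblatt’s table, which I’m not reproducing here.

# Paired yearly returns as decimals (e.g. 0.25 for 25%), one entry per year, same order.
def mean_difference(mf_returns, sp500_returns):
    diffs = [mf - sp for mf, sp in zip(mf_returns, sp500_returns)]
    return sum(diffs) / len(diffs)

# With the 1988-2004 data this works out to about 0.1904, i.e. the 19.04% above.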


Step #2: Standard Error of the Mean Difference
Next we compute the sample standard deviation of the yearly differences (in Excel, use STDEV). This comes out to be 22.24%. We transform this into the standard error of the mean difference (SEMD) by dividing by the square root of the sample size; the years 1988-2004 give us 17 observations.
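Using just those two numbers, the SEMD works out to roughly 5.4%. A quick Python check:

import math

sd_of_diffs = 0.2224          # sample standard deviation of the yearly differences
n = 17                        # the years 1988-2004
semd = sd_of_diffs / math.sqrt(n)
print(round(semd, 4))         # about 0.0539, i.e. roughly 5.4%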


Step #3: Compute Test Statistic
For normal (or mostly normal) distributions and small sample sizes, we use the t-test to check for statistical significance. Basically this just gives us a number that we can compare against a t-table in order to determine whether the difference we’re seeing between the Magic Formula and the S&P 500 appears to be important given the sample size. The smaller the sample size, the higher the hurdle our data will have to clear before we can conclude that the two don’t have the same mean.

The test statistic is simply the sample mean difference minus the hypothesized mean difference (zero, since we hypothesized identical returns), divided by the SEMD.
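Plugging in the numbers from Steps #1 and #2, a quick sketch:

import math

mean_diff = 0.1904               # Step #1
semd = 0.2224 / math.sqrt(17)    # Step #2
hypothesized_diff = 0.0          # our "the means are identical" hypothesis
t_stat = (mean_diff - hypothesized_diff) / semd
print(round(t_stat, 2))          # about 3.53, matching the 3.526 below up to rounding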


Step #4: Pick a Significance Level and Compare
Finally we need to decide on a level of significance and do our table compare. Most of these sorts of tests in the CFA curriculum seem to use 5%. This means we’re accepting a 5% chance of a false alarm: concluding that the Magic Formula and the S&P 500 differ when in reality there is no difference between them.

In addition to level of significance, the only other parameter we require to do our table look-up is the degrees of freedom. But that’s easy as it’s simply the sample size minus 1.

Given these two parameters we could now find a t-test table to do a critical value look-up, but this can be a little tedious, not to mention the additional step of converting one-sided to two-sided. I prefer to just use Excel’s tinv function.

tinv(probability,deg_freedom) = tinv(.05, 17-1) = 2.120
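If Excel isn’t handy, scipy gives the same number. Note that Excel’s TINV is two-tailed, so its .05 corresponds to the 97.5th percentile of the t-distribution:

from scipy.stats import t

critical = t.ppf(1 - 0.05 / 2, df=17 - 1)   # two-tailed 5%, 16 degrees of freedom
print(round(critical, 3))                   # about 2.12, the 2.120 used above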

And since our t-value from Step #3 was 3.526, which is much greater than 2.120, we can easily reject the hypothesis that the means are equal at the 5% level of significance. Not only that, we could raise the hypothesized mean difference from zero all the way up to 7.59% before the test statistic falls to our critical value of 2.120; in other words, any claimed outperformance smaller than 7.59% per year would also be rejected.
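That 7.59% is just the lower end of the 95% confidence interval for the mean difference (the sample mean minus the critical value times the SEMD), which you can check quickly:

import math
from scipy.stats import t

semd = 0.2224 / math.sqrt(17)
lower_bound = 0.1904 - t.ppf(0.975, df=16) * semd
print(round(lower_bound, 4))   # about 0.076, i.e. the ~7.59% quoted above, give or take rounding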


So the Magic Formula’s outperformance appears not to just be an artifact of small sample size, and also appears to be of significant magnitude.

But What About Risk?
Things are truly looking rosy for the Magic Formula. But a seasoned finance student will also compare the standard deviations of yearly returns in our first table up top and notice that the Magic Formula’s is higher (24.26% versus 17.87%). Standard deviation is a common (though debatable) proxy for risk, so it would be natural to argue that the Magic Formula should have higher returns simply to compensate the investor for taking on more risk.

Well, just as we tested the hypothesis that the means were equal, we can do the same with the variances (the squares of the standard deviations)…

Step #5: Test Equality of Two Variances
I’ll cut to the chase here just to say that there’s a simple equivalent of the t-test when testing for the equality of two variances, and it’s called the F-test.

We come up with our F statistic by simply computing the ratio of the two variances, with the larger one in the numerator.
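Using the standard deviations quoted above, a one-liner in Python:

f_stat = (0.2426 ** 2) / (0.1787 ** 2)   # Magic Formula variance over S&P 500 variance
print(round(f_stat, 3))                  # about 1.843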


And again we could do a manual look-up using an F-table, but why not just let Excel compute it for us with finv….

finv(probability, deg_freedom1, deg_freedom2) = finv(.05, 16, 16) = 2.333
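Or the scipy equivalent, since Excel’s FINV(p, d1, d2) returns the value with probability p above it:

from scipy.stats import f

critical = f.ppf(1 - 0.05, dfn=16, dfd=16)   # same as finv(.05, 16, 16)
print(round(critical, 3))                    # about 2.33, the 2.333 used above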

Therefore at the 5% significance level, because 1.843 < 2.333 we cannot reject the hypothesis that the variances are the same.

Summary
In conclusion, past performance may be no indication of future results… blah blah blah… The important thing is that the Magic Formula points toward having the best of both worlds, a statistically significant higher annual rate of return versus the S&P 500 without a statistically significant higher level of risk. Win-win!

It is interesting to see how confidence intervals allow us to tread into gray areas. A newbie might stop at Step #1 and claim that the Magic Formula beats the S&P 500 by an average of 19.04% per year. His antagonist might point to the small sample size and say that it makes that 19.04% estimate of mean… meaningless! But a statistician can state that he’s 95% sure that the Magic Formula outperforms the S&P 500 by at least 7.59% per year…. assuming normality.

And much is indeed hinging upon our assumption of normality, which can and should be tested for. But that’s for another day…










Comments

Gauss August 1, 2009 at 9:58 pm

Nothing is normal. Read your Taleb

Dave August 20, 2009 at 3:56 pm
Lumilog August 21, 2009 at 6:05 pm

Dave – wow, you work with Haugen! Is there an article in particular you thought I should check out?
-lumi
