CFA Tools: Test Concerning Mean Difference

The CFA program is great for getting a wide survey of techniques you might want to add to your investing toolbox. Today’s material is based on Volume 1, Reading 11, Section 3.3 of the CFA Level I 2008 Curriculum.

So I was examining the CFA Institute’s table of historical pass rates for the exams, when something caught my attention.

CFA Level 1 Historical Pass Rates

With the exception of 2004, the Level 1 June pass rate has always equaled or exceeded December’s. Is this a fluke? Something new candidates should take into account when signing up?

I remembered examining problems like this while studying the Quantitative material of Level 1. They’d give you something like 10 years of a mutual fund’s returns vs. the S&P 500, where the mutual fund had outperformed by a small 1-2% per year on average, and you had to figure out whether the outperformance was statistically significant or just an artifact of the small sample size.

So brace yourself, we’re going to use Level 1 Quantitative techniques to interpret Level 1 pass rates. Just like feeding a snake its own tail!

Let’s fire up a spreadsheet and begin with the pass rates…

CFA Pass Rates for Level 1

…and then compute the individual yearly differences (line 5), average difference (line 7), and the standard deviation of the differences (line 8).

What we do next… depends. Basically there are a handful of ways to test the statistical significance of a mean difference, depending on whether you think the two sample sets have the same or different variances, and whether you believe the two sample sets are somewhat related or independent. In all cases, we assume that the sample sets are normally distributed around their means.

I’m going to pick the method for a test of mean differences with unknown population variances and some assumed dependence between the sample sets. The assumed dependence comes about from the fact that Level 1 June and Level 1 December are testing from the same body of material.
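If you’d rather let a library do the bookkeeping, the paired-difference test I just described is what scipy.stats.ttest_rel computes. Here’s a quick sketch; the pass-rate numbers in it are placeholders, not the actual figures from the table above:

```python
# Sketch of the paired (dependent-samples) t-test using SciPy.
# The pass rates below are placeholders, NOT the real CFA figures.
from scipy import stats

june     = [0.40, 0.41, 0.39, 0.42, 0.38, 0.40, 0.43, 0.39, 0.46]  # hypothetical
december = [0.38, 0.40, 0.39, 0.40, 0.36, 0.39, 0.41, 0.38, 0.45]  # hypothetical

t_stat, p_value = stats.ttest_rel(june, december)  # paired test on the yearly differences
print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.3f}")
```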

We already found our sample standard deviation of the differences (sigma) above. The next step is to compute the standard error of the mean difference. We have 9 sample points, so this is simply the sample standard deviation divided by the square root of the sample size: SEMD = sigma / √9 = sigma / 3.

Next, we use that SEMD to compute a test statistic (t) for the sample mean difference vs. the null hypothesis that the true difference is zero: t = (mean difference - 0) / SEMD.
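If you want to see the arithmetic spelled out rather than buried in spreadsheet cells, it’s only a few lines. The differences below are placeholders, not the real June-minus-December figures:

```python
# Hand-rolled version of the mean-difference test statistic.
# The yearly differences below are placeholders, NOT the real figures.
import math
import statistics

diffs = [0.02, 0.01, 0.00, 0.02, 0.02, 0.01, 0.02, 0.01, 0.01]  # hypothetical June - December differences

n = len(diffs)
mean_diff = statistics.mean(diffs)    # average difference (spreadsheet line 7)
s = statistics.stdev(diffs)           # sample std dev, divides by n - 1 (spreadsheet line 8)
semd = s / math.sqrt(n)               # standard error of the mean difference
t = (mean_diff - 0) / semd            # test statistic vs. H0: no difference

print(f"mean diff = {mean_diff:.4f}, s = {s:.4f}, SEMD = {semd:.4f}, t = {t:.2f}")
```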

Almost there. Now, just like a jury in a courtroom (grabbed that from the CFA text!), we have to select just how much evidence we’re going to require in order to throw out the conjecture that the average pass rates for June and December are the same. Let’s use statistics wording:

H0 = null hypothesis = pass rates are the same
Ha = alternative hypothesis = pass rates are different

The stricter we are, the more likely we are to hang on to the hypothesis that there’s no difference in June / December pass rates when in fact there is a difference (a Type II error). But if we’re too lenient, we risk concluding that the pass rates are different when in fact they’re not (a Type I error).

The level of significance is the probability of a Type I error, but which one should we select? 1%? 5%? 10%? Let’s get some help from the CFA text…

“If we can reject a null hypothesis at the 0.1 level of significance, we have some evidence that the null hypothesis is false.”

“If we can reject a null hypothesis at the 0.05 level of significance, we have strong evidence that the null hypothesis is false.”

“If we can reject a null hypothesis at the 0.01 level of significance, we have very strong evidence that the null hypothesis is false.”

So let’s go to a t-table and look up the row for 8 degrees of freedom (which is our sample size minus 1).

Remembering to convert from one-tailed to two-tailed probabilities, our critical t-values for the 10%, 5%, and 1% significance levels are 1.860, 2.306, and 3.355 respectively.
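If you don’t have a t-table handy, the same critical values can be pulled from SciPy; the single-tail-to-double-tail conversion is the 1 - alpha/2 in the quantile (this is just a sketch, not something from the CFA text):

```python
# Two-tailed critical t-values for 8 degrees of freedom.
# A two-tailed test at significance alpha uses the (1 - alpha/2) quantile,
# which is the single-tail/double-tail conversion mentioned above.
from scipy import stats

df = 8
for alpha in (0.10, 0.05, 0.01):
    crit = stats.t.ppf(1 - alpha / 2, df)
    print(f"{alpha:.0%} significance: critical t = {crit:.3f}")
# Prints roughly 1.860, 2.306, and 3.355, matching the values quoted above.
```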

So, is there very strong evidence that there is a difference in the pass rates? We grab our t from above (1.86) and see if it’s > 3.355. Nope.

Well, is there just strong evidence that there is a difference in the pass rates? Is 1.86 > 2.306? Nope.

Then, is there at least some blasted evidence that there is a difference in the pass rates? Is 1.86 > 1.860?

Ooooooh, doesn’t that just get your goat? What do we do if they’re equal?! According to the CFA text, our t officially has to be > 1.860 to reject the hypothesis that the pass rates are the same.

But there’s nothing magic about the exact 10% level of significance, and who knows if the pass rates really are normally distributed about the mean (kurtosis computation for extra credit).
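If you want to take a crack at that extra credit, SciPy will hand you sample skewness and excess kurtosis in one line apiece; just keep in mind that with only nine observations these estimates are extremely noisy. Placeholder numbers again:

```python
# Quick look at the shape of the yearly differences.
# scipy.stats.kurtosis returns EXCESS kurtosis by default (0 for a normal distribution),
# and with only 9 points both estimates carry a lot of sampling error.
from scipy import stats

diffs = [0.02, 0.01, 0.00, 0.02, 0.02, 0.01, 0.02, 0.01, 0.01]  # hypothetical June - December differences

print("skewness:       ", stats.skew(diffs))
print("excess kurtosis:", stats.kurtosis(diffs))
```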

If I boil all this down I think it’s safe to conclude that there’s a bit of evidence that June pass rates might be on average higher than December’s, but it’s a really close call. 2009 in particular might be skewing things. Looks like after the fear of a total collapse passed, people got comfortable and stopped studying!

2009 S&P 500

Unfortunately there’s only one thing you can do to simultaneously reduce both the probability of a Type I error and the probability of a Type II error, and that’s to increase the sample size. I’ll try to remember to update our statistics as data points from new years roll in.
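As a rough illustration of why more years help, here’s how the 10% two-tailed hurdle falls as the sample grows (a sketch only; with alpha held fixed, the shrinking critical value and the shrinking standard error are what chip away at the Type II error probability):

```python
# How the 10% two-tailed critical value falls as more years of data arrive.
from scipy import stats

for n in (9, 10, 15, 20, 30):
    crit = stats.t.ppf(0.95, n - 1)   # two-tailed 10% => 95th percentile of t with n-1 df
    print(f"n = {n:2d} years: critical t = {crit:.3f}")
# At n = 10 the hurdle is about 1.833, which is why the 2012 update below clears it.
```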

UPDATE: 2012 Level 1 Pass Rates: June 38%, December 37%
The t-statistic moves to 1.935, and June’s pass rate is now statistically different from December’s at the 10% level of significance (with 10 observations we have 9 degrees of freedom, so the two-tailed hurdle drops to about 1.833).

I wrote an eBook about the CFA Program. If you’re interested, click here.

2 thoughts on “CFA Tools: Test Concerning Mean Difference”

  1. Nikhil – that’s not what I’m seeing in the text, though they did have the occasional error. It says compute sample standard deviation first (dividing by n-1, which is what “stdev” does in the spreadsheet) and then divide that by square root of sample size, n. Let me know if this still looks wrong to you – and thanks for the second pair of eyes.
