Today I’m just posting some starter Matlab code for those wishing to dabble with Modern Portfolio Theory (MPT). MPT is an elegant, academic solution to the age-old asset allocation problem: *“what % of my money should I put into each stock or mutual fund available to me when building my portfolio?”*

And like most academic theories, MPT is a vast oversimplification of the real world. Asset class returns are assumed to follow stationary, time-independent, bell-curve distributions with a stable mean and variance. Correlations between asset classes are fixed. Risk simply equals volatility. All swans are white.

These pie-in-the-sky assumptions don’t necessarily make the results of MPT simulations useless. You just have to use them as a **starting place** for developing your own portfolio, not an exact formula.

The goal of an MPT simulation is to get the computer to essentially try **all possible weighted combinations** of the various asset classes in order to find the select few that give the biggest bang for the buck. That would be the set of “efficient” portfolios whose weighted combinations give you the highest expected return for a given variance or standard deviation (i.e. risk).
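To make that concrete, here is a minimal Python/NumPy sketch of how a single candidate weighting gets scored (the post's actual code is Matlab; the return numbers below are made up purely for illustration):

```python
import numpy as np

# Hypothetical annual returns (rows = years, columns = asset classes);
# illustration only, not real index data
returns = np.array([
    [0.10,  0.05,  0.12],
    [0.04,  0.06, -0.03],
    [0.15,  0.02,  0.20],
    [-0.08, 0.07, -0.10],
])

w = np.array([0.5, 0.3, 0.2])  # one candidate weighting (sums to 1)

port = returns @ w             # the portfolio's return in each year
port_return = port.mean()      # expected (average) return
port_std = port.std()          # volatility, i.e. "risk" in MPT terms
```

The simulation scores thousands of such weight vectors and keeps only those where no other candidate offers a higher `port_return` at equal or lower `port_std`.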

It was always a little confusing to me why MPT assumed that **standard deviation was risk**, because risk means different things to different people. I personally don’t get irked by yearly portfolio swings since I consider myself a long-term investor. So why should I care about volatility as long as my average return is good? Isn’t risk more about the probability of a big loss? Or probability of not meeting my retirement goals?

I never found the official MPT answer to this question, but I think I know what it is. Consider two portfolios that have the same average return. One has low volatility so its returns for two years might be 9% and 11%, which averages to be 10%. The second portfolio has high volatility and thus two years of its returns might be -10% and 30%, which also averages to 10%.

The reason that the lower volatility portfolio is “better” is that the returns we earn over time are **compounded** (geometric average, not arithmetic). The low volatility portfolio gives a compounded return of (1.09 x 1.11) = 1.21 or 21%. The high volatility portfolio gives a compounded return of (0.9 x 1.3) = 1.17 or 17%. So maybe it’s not so much the **emotional** aspect of how much volatility you can stomach, but rather the **ice cold logic** that more volatility for the same average return equals lower compounded return. Mystery solved?
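The arithmetic is easy to verify; a quick Python snippet re-deriving the example above with the same numbers:

```python
def arithmetic_mean(rets):
    return sum(rets) / len(rets)

def compounded_growth(rets):
    """Multiply out the yearly growth factors (geometric compounding)."""
    growth = 1.0
    for r in rets:
        growth *= 1.0 + r
    return growth

low_vol = [0.09, 0.11]    # averages 10%, low volatility
high_vol = [-0.10, 0.30]  # also averages 10%, high volatility

print(arithmetic_mean(low_vol), arithmetic_mean(high_vol))  # same average
print(compounded_growth(low_vol))   # ~1.21, i.e. 21% total growth
print(compounded_growth(high_vol))  # ~1.17, i.e. 17% total growth
```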

So here is a link to the Matlab code. Lines 11-16 give you a starter historical database of returns with 3 asset classes: a US stock index, a US bond index, and an international stock index (Pacific Rim).

Each time you run the simulation it will try various asset class weightings to see if it can find any more efficient portfolios than it has found so far. The sorted results are found in the array **stats_sort**.

Column 1 is the portfolio average return.

Column 2 is the portfolio standard deviation.

Columns 3 to end contain the asset class weights.
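In Python terms, each row of that array could be built like this (the name `stats_sort` comes from the Matlab code; everything else here, including the randomly generated returns, is illustrative):

```python
import numpy as np

# Hypothetical yearly returns for 3 asset classes (illustration only)
rng = np.random.default_rng(0)
returns = rng.normal(0.07, 0.12, size=(30, 3))

def stats_row(w, returns):
    """One row in the stats_sort layout:
    [avg return, std dev, weight_1, ..., weight_n]."""
    port = returns @ w
    return np.concatenate(([port.mean(), port.std()], w))

candidates = [
    np.array([1.0, 0.0, 0.0]),
    np.array([0.0, 1.0, 0.0]),
    np.array([1/3, 1/3, 1/3]),
]
stats = np.vstack([stats_row(w, returns) for w in candidates])
stats_sort = stats[np.argsort(stats[:, 1])]  # sorted by std dev (risk)
```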

The optimization on lines 40-48 has two parts. Half the time we try purely **random** weightings. The other half we make a small adjustment to one of our existing “best portfolios so far” to see if the **tweak** results in something even more efficient. The random part keeps us from converging to a local (but not global) optimum. The tweak part speeds convergence.
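That two-part search can be sketched in Python like this (a paraphrase of the strategy just described, not a port of the Matlab code; the `dominates` test is one way to decide what counts as "more efficient"):

```python
import numpy as np

rng = np.random.default_rng(42)

def random_weights(n):
    """Purely random weights that sum to 1."""
    w = rng.random(n)
    return w / w.sum()

def tweak(w, step=0.05):
    """Shift a small amount of weight from one random asset to another."""
    w = w.copy()
    i, j = rng.choice(len(w), size=2, replace=False)
    delta = min(step, w[i])
    w[i] -= delta
    w[j] += delta
    return w

def dominates(a, b):
    """(return, std) pair a dominates b: at least as good on both axes."""
    return a != b and a[0] >= b[0] and a[1] <= b[1]

def search(returns, iters=2000):
    best = []  # (avg_return, std, weights) tuples, none dominating another
    for k in range(iters):
        if best and k % 2:   # half the time: tweak a known-efficient portfolio
            w = tweak(best[rng.integers(len(best))][2])
        else:                # other half: random restart, to escape local optima
            w = random_weights(returns.shape[1])
        port = returns @ w
        cand = (port.mean(), port.std())
        if not any(dominates((r, s), cand) for r, s, _ in best):
            best = [(r, s, bw) for r, s, bw in best if not dominates(cand, (r, s))]
            best.append((cand[0], cand[1], w))
    return best
```

Sorting the result by its std-dev column recovers an approximation of the efficient frontier.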

Emotion does come into play after we find all of our efficient portfolios and then have to decide where our personal risk tolerance lies. Of course you have no real feeling for whether you’re a 5.4% or 14.9% on the volatility scale. But when you examine the portfolios for each stop along the efficient frontier, you should be able to get a sense for which portfolios “feel” too risky and which “feel” too conservative. And if the returns really are bell-curved, 2/3 of them should fall within +/- 1 standard deviation from the mean. So that helps to estimate the probable range of returns for a given portfolio.

The best part of the exercise, once you mix in historical data for many different asset classes, is examining the portfolios **in between** the corner cases. You know that the highest risk / return portfolio is going to be almost 100% emerging markets, and the lowest risk will be mostly money market and treasuries. But you might be surprised at which asset classes you thought were important end up consistently getting 0% weightings! And vice versa – in my own testing I was particularly shocked at how important commodities seemed to be.

The exercise left to the student is to expand my starter database to a larger set of asset class returns. Vanguard provides annual returns for their indexes going back about 15 years. Assetplay.net’s backtest engine has returns going back to 1972 for 23 different asset classes. Index Fund Advisers publishes historical returns as well. And MSCI Barra probably has the most exhaustive index list, complete with a link to download each index’s returns in Excel format.

One last thing: on line 20 I have a parameter called **max_alloc**, which is set to 1 (for 100%). This is the largest weighting allowed to any asset class. Mean-variance optimization can often yield highly concentrated portfolios, as it may over-emphasize asset classes with attractive historical returns. This parameter just gives you another knob to tweak in order to get a more diversified portfolio. You might want to set it to something like 0.25, for example, to make sure that no single asset class ends up comprising more than 25% of any efficient portfolio. It’s my understanding that the endowment manager of Yale (David Swensen) uses something like this.
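One simple way to honor such a cap when generating candidate weightings is rejection sampling, sketched below in Python (my own choice of technique for illustration, not how the Matlab code necessarily does it; `max_alloc` is the post's parameter name):

```python
import numpy as np

rng = np.random.default_rng(7)

def capped_random_weights(n, max_alloc=1.0):
    """Random weights summing to 1 with no single weight above max_alloc.
    Rejection sampling keeps the sketch simple; requires n * max_alloc >= 1."""
    if n * max_alloc < 1.0:
        raise ValueError("cap too tight for weights to sum to 1")
    while True:
        w = rng.random(n)
        w /= w.sum()
        if w.max() <= max_alloc:
            return w

w = capped_random_weights(5, max_alloc=0.25)  # no class above 25%
```

With `max_alloc` left at 1 the cap is a no-op and the first sample is always accepted.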

Have fun and let me know if you find anything interesting!