Today I’m just posting some starter Matlab code for those wishing to dabble with Modern Portfolio Theory (MPT). MPT is an elegant, academic solution to the age-old asset allocation problem: *“what % of my money should I put into each stock or mutual fund available to me when building my portfolio?”*

And like most academic theories, MPT is a vast oversimplification of the real world. Asset class returns are assumed to follow stationary, time-independent, bell-curve distributions with a stable mean and variance. Correlations between asset classes are fixed. Risk simply equals volatility. All swans are white.

These pie-in-the-sky assumptions don’t necessarily make the results of MPT simulations useless. You just have to use them as a **starting place** for developing your own portfolio, not an exact formula.

The goal of an MPT simulation is to get the computer to essentially try **all possible weighted combinations** of the various asset classes in order to find the select few that give the biggest bang for the buck. That would be the set of “efficient” portfolios whose weighted combinations give you the highest expected return for a given variance or standard deviation (i.e. risk).
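In code terms, “efficient” just means “not dominated”: no other portfolio offers at least as much return for no more risk. A minimal Python sketch of that filter (the function name and the `(mean, std)` pairs are illustrative, not taken from the Matlab code):

```python
def efficient_subset(portfolios):
    """Keep only portfolios not dominated by another.

    portfolios is a list of (mean_return, std_dev) pairs. A portfolio is
    dominated if some other one has >= return AND <= risk, with at least
    one of the two strictly better.
    """
    keep = []
    for i, (ri, si) in enumerate(portfolios):
        dominated = any(
            (rj >= ri and sj <= si) and (rj > ri or sj < si)
            for j, (rj, sj) in enumerate(portfolios) if j != i
        )
        if not dominated:
            keep.append((ri, si))
    return keep
```

For example, a portfolio with the same mean return as another but higher standard deviation gets dropped, while a lower-return but lower-risk portfolio survives as a different stop on the frontier.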

It was always a little confusing to me why MPT assumed that **standard deviation was risk**, because risk means different things to different people. I personally don’t get irked by yearly portfolio swings since I consider myself a long-term investor. So why should I care about volatility as long as my average return is good? Isn’t risk more about the probability of a big loss? Or probability of not meeting my retirement goals?

I never found the official MPT answer to this question, but I think I know what it is. Consider two portfolios that have the same average return. One has low volatility, so its returns for two years might be 9% and 11%, which average to 10%. The second has high volatility, so its returns for two years might be -10% and 30%, which also average to 10%.

The reason that the lower volatility portfolio is “better” is that the returns we earn over time are **compounded** (geometric average, not arithmetic). The low volatility portfolio gives a compounded return of (1.09 x 1.11) = 1.21 or 21%. The high volatility portfolio gives a compounded return of (0.9 x 1.3) = 1.17 or 17%. So maybe it’s not so much the **emotional** aspect of how much volatility you can stomach, but rather the **ice cold logic** that more volatility for the same average return equals lower compounded return. Mystery solved?
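The compounding argument is easy to verify yourself (a quick Python sketch, since the post’s code is in Matlab):

```python
# Two portfolios with the same arithmetic-average return but different
# volatility (the numbers from the paragraph above).
low_vol  = [0.09, 0.11]    # 9% and 11%   -> arithmetic mean 10%
high_vol = [-0.10, 0.30]   # -10% and 30% -> arithmetic mean 10%

def arithmetic_mean(returns):
    return sum(returns) / len(returns)

def compounded_growth(returns):
    """Total growth factor from compounding the yearly returns."""
    growth = 1.0
    for r in returns:
        growth *= 1.0 + r
    return growth

# Same arithmetic mean, but compounding favors the low-volatility portfolio:
# 1.09 * 1.11 = 1.2099 (~21% total) versus 0.90 * 1.30 = 1.17 (17% total).
```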

So here is a link to the Matlab code. Lines 11-16 give you a starter historical database of returns with 3 asset classes: a US stock index, a US bond index, and an international stock index (Pacific Rim).

Each time you run the simulation it will try various asset class weightings to see if it can find any more efficient portfolios than it has found so far. The sorted results are found in the array **stats_sort**.

Column 1 is the portfolio average return.

Column 2 is the portfolio standard deviation.

Columns 3 to end contain the asset class weights.
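The same bookkeeping can be sketched in a few lines of Python/NumPy. The returns below are made-up placeholder numbers standing in for the post’s `rors` database, and the variable names just mirror the Matlab conventions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annual returns, one row per asset class (placeholders for
# the post's rors array -- not real index data).
rors = np.array([
    [0.10, -0.05, 0.20, 0.08, 0.12],   # "US stock index"
    [0.04,  0.06, 0.03, 0.05, 0.04],   # "US bond index"
    [0.15, -0.12, 0.25, 0.02, 0.18],   # "international stock index"
])

n_assets = rors.shape[0]
n_trials = 5000

rows = []
for _ in range(n_trials):
    w = rng.random(n_assets)
    w /= w.sum()                       # weights must sum to 1
    port = w @ rors                    # portfolio return in each year
    rows.append([port.mean(), port.std(ddof=0), *w])

stats = np.array(rows)
# Sort by column 1 (average return), matching the post's layout:
# col 1 = mean, col 2 = std dev, cols 3..end = asset class weights.
stats_sort = stats[np.argsort(stats[:, 0])]
```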

The optimization on lines 40-48 has two parts. Half the time we try purely **random** weightings. The other half we make a small adjustment to one of our existing “best portfolios so far” to see if the **tweak** results in something even more efficient. The random part keeps us from converging to a local (but not global) optimum. The tweak part speeds convergence.
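That two-part search step can be sketched like this (a Python paraphrase of the idea, not a transcription of the Matlab; the function names and the 0.05 tweak size are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_weights(n):
    """Purely random weighting, normalized to sum to 1."""
    w = rng.random(n)
    return w / w.sum()

def tweak_weights(w, step=0.05):
    """Nudge one asset's weight up or down a little, then renormalize."""
    w = w.copy()
    i = rng.integers(len(w))
    w[i] = max(w[i] + rng.uniform(-step, step), 0.0)
    return w / w.sum()

def propose(best_so_far, n_assets):
    # Half the time explore at random (escapes local optima);
    # half the time tweak a known-good portfolio (speeds convergence).
    if len(best_so_far) == 0 or rng.random() < 0.5:
        return random_weights(n_assets)
    base = best_so_far[rng.integers(len(best_so_far))]
    return tweak_weights(base)
```

Each proposal is then scored (mean and standard deviation of the weighted returns) and kept only if it improves on the efficient set found so far.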

Emotion does come into play after we find all of our efficient portfolios and then have to decide where our personal risk tolerance lies. Of course you have no real feeling for whether you’re a 5.4% or 14.9% on the volatility scale. But when you examine the portfolios for each stop along the efficient frontier, you should be able to get a sense for which portfolios “feel” too risky and which “feel” too conservative. And if the returns really are bell-curved, 2/3 of them should fall within +/- 1 standard deviation from the mean. So that helps to estimate the probable range of returns for a given portfolio.

The best part of the exercise, if you mix in historical data for a bunch of different asset classes, is examining the portfolios **in between** the corner cases. You know the highest risk / return portfolio is going to be almost 100% emerging markets, and the lowest risk will be mostly money market and treasuries. But you might be surprised when asset classes you thought were important end up consistently getting 0% weightings! And vice versa: in my own testing I was particularly shocked at how important commodities seemed to be.

The exercise left to the student is to expand my starter database to a larger set of asset class returns. Vanguard provides annual returns for their indexes going back about 15 years. Assetplay.net’s backtest engine has returns going back to 1972 for 23 different asset classes. Index Fund Advisers has historical returns too. And MSCI Barra probably has the most exhaustive index list, complete with a link to download each index’s returns in Excel format.

One last thing: on line 20 there is a parameter called **max_alloc**, which is set to 1 (for 100%). This is the largest weighting allowed for any asset class. Mean-variance optimization can often yield highly concentrated portfolios, since it tends to over-emphasize asset classes with attractive historical returns. This parameter just gives you another knob to tweak to get a more diversified portfolio. You might set it to 0.25, for example, to make sure that no single asset class ends up comprising more than 25% of any efficient portfolio. It’s my understanding that Yale’s endowment manager (David Swensen) uses something like this.
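One way such a cap could be enforced is to clip any weight above the limit and hand the excess to the uncapped assets. This is only a sketch of the idea in Python; the Matlab code’s **max_alloc** knob may enforce it differently (e.g. by simply rejecting candidate portfolios that violate the cap), and `cap_weights` is a name I made up:

```python
import numpy as np

def cap_weights(w, max_alloc=0.25):
    """Clip weights above max_alloc and redistribute the excess among the
    uncapped assets, repeating until every weight respects the cap."""
    w = np.asarray(w, dtype=float).copy()
    n = len(w)
    if max_alloc * n < 1.0:
        raise ValueError("cap too tight for weights to sum to 1")
    capped = np.zeros(n, dtype=bool)
    for _ in range(n):
        over = (w > max_alloc + 1e-12) & ~capped
        if not over.any():
            break                                  # all weights under the cap
        capped |= over
        w[capped] = max_alloc
        remaining = 1.0 - max_alloc * capped.sum()
        free = ~capped
        if free.any():
            if w[free].sum() > 0:
                w[free] *= remaining / w[free].sum()   # scale proportionally
            else:
                w[free] = remaining / free.sum()       # split evenly
    return w
```

For instance, with `max_alloc=0.5` a candidate of [0.6, 0.3, 0.1] becomes [0.5, 0.375, 0.125]: the first asset is pinned at the cap and the other two keep their relative proportions.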

Have fun and let me know if you find anything interesting!

Sencha71 (? is that your name),

I’ve been following your blog for about a year plus. The peach of the postings was the one on your travails with Level I; I read it out to my mom and she burst out laughing. (Flunked L1 the first time, got it right the 2nd time, so I’m with you on this L2 in June 2010.)

By the way, I would also count myself as having some DSP war scars. Mine were the TI 6200 and, later on, the SHARC parts from AD. Some fond memories of the SHARC: if memory serves me right (quote from Chairman Kaga in Iron Chef), I remember setting up two DAG counters and using them to generate the memory addresses for the data. Well, it was a long time ago, back in 2001-2002.

And so I find myself in the same boat as you: ex-DSP (embedded variant) and now a CFA wannabe. Right now I’ve fully removed myself from the engineering side; I do a bit of political blogging for some up-and-coming Senator (sort of) types in Malaysia, and I use CFA material all the time to kick ass.

Just a note to wish you the best of luck, and if you ever find the time, do drop by at mine. Some of the postings are in my national language, but some are in English too.

Cheers

Wenger J. Khairy

Cybertrooper

Hi Wenger – thanks so much for taking the time to write. So glad you enjoyed the Level One posts – I wish my current workload didn’t preclude me from putting the same time & effort into blogging about Level Two.

Funny that we’re both DSP guys. I always thought that after a few months studying the stock market, I’d be making a killing using something fancy like neural networks or wavelet transforms over historical price data to identify and exploit inefficiencies. But as Einstein said, not everything that counts can be counted and not everything that can be counted counts. So I find myself in the CFA program and reading 10-K’s and 10-Q’s!

Your blog looks great and I’m adding you to my list of links. Wishing you lots of luck for Level 2 – let us know how you did come August.

-lumi

Hi !

I am just writing to thank you for your blog. I am currently studying MVP theory, and your blog as well as your m-file helped me a lot in understanding the outcomes of this theory.

Chuck – thanks so much for writing and glad it was insightful. Good luck in your studies! – lumi

you have two constraint equations. i.e. the weights sum to one and the return fractions sum to the expected portfolio return. you could remove the return equation and replace it with beta fractions summing to the portfolio beta …. then the weights aren’t dependent on the estimated equity returns and the portfolio remains balanced.

good luck on the tests man

Hi Lumi,

I’m currently playing around with your MPT code, and I tried to enlarge the sample by adding another index. Turns out it’s not as easy as one might think… Can you please tell me where I have to change the code so that it works for more asset classes as well?

Cheers & thanks a lot,

Nora

hi nora,

you only need to do 2 things. add the new row of returns under line 15 and then give the new asset class a name on the line under that.

for example, under line 15, I’ll add some random data (you would enter the asset class returns) like so:

rors(4,:) = rand(size(rors(3,:)));

And then on the asset class line, I added an extra entry to give it a name like so:

asset_classes = {'S&P 500 Index', 'Barclays US Aggregate Bond Index', 'Pacific Rim Index', 'Sample Asset Class'};

Hope that helps!

That indeed does help a lot!

I’ll try when I have time, thanks for your support!

Love,

Nora

Hi,

I tried to enlarge the sample and added the line rors(4,:) = rand(size(rors(3,:))); (I also tried adding a row with real returns) and added an asset to the asset_classes, as you advised.

But I’m getting this error:

??? Error using ==> times

Matrix dimensions must agree.

Error in ==> optimze at 58

for j=1:size(rors,2), comb_ror(j) = sum(weights .* rors(:,j)); end

What am I doing wrong? Or is there something else to do?

Greetings

Michael

Oops… my mistake. Loading an existing stats.dat after adding a new time series doesn’t work, obviously.

Hi Lumilog.

I possibly found a small bug.

I have changed line 71, from

idx_remove = [idx_rem1; idx_rem2];

to:

idx_remove = [idx_rem1, idx_rem2]’;

To prevent your beautiful code from crashing due to an error from “vertical cat”…

Again, beautiful code. You are great. Continue being competent.

The world needs competent people like you.

Greetings

A.

thanks alberto 🙂 interesting that it never caused me problems, but it could be due to different matlab versions. thanks for sharing your update.