Saturday, October 18, 2008

A Better Demonstration

One of these days I'll write about a topic other than dollar-cost averaging. Today is not that day.

I think I've come up with a better way to demonstrate the futility of dollar-cost averaging that sidesteps some of the complexity we got mired in back in my earlier posts. It relies on computer simulation to compare a hypothetical dollar-cost-averaging approach to a "lump sum investment" approach.

Let's say you have a portfolio that is at this moment 100% cash and 0% stocks. You want to use dollar-cost averaging to transition the portfolio gradually to 100% stocks over some initial period of time. After that, you are going to hold the all-stock portfolio for some further period of time. You can pick the initial duration (the length of the "ramp up" period) and the "steady state" duration to be whatever you like.

My claim is that whatever values you select, I can construct a superior portfolio that takes a "lump sum investment" approach. This portfolio will start with X% of its funds allocated to stock and maintain exactly that percentage for the full duration - i.e., we have a lump sum investment at the very beginning of the experiment. This portfolio will be superior to the DCA portfolio because it will have an identical expected return, but the volatility - measured as the standard deviation of the observed returns - will be lower.

As you might guess, X will be chosen so that the average allocation to stocks - across the entire time period - is the same for the lump sum investment approach as for the DCA approach.
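For instance, with the 10-step ramp and 100-step steady state used below, and assuming a linear ramp schedule (stock allocation k/10 at ramp step k — the post doesn't pin down the exact schedule, so this is a guess), X works out to about 96%:

```python
# Hypothetical linear ramp: allocation k/R at ramp step k, then 100% for H steps.
R, H = 10, 100  # ramp-up and steady-state lengths used in the simulation

allocations = [k / R for k in range(1, R + 1)] + [1.0] * H
x = sum(allocations) / len(allocations)  # time-averaged stock allocation
print(f"X = {x:.4f}")  # (5.5 + 100) / 110 ≈ 0.9591
```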

My main assumption is that the return of stocks over a single unit of time is drawn from a normal distribution and is independent of the return of stocks over any other unit of time.

I ran the simulation with these parameters, but, again, the assertion is that I could choose essentially any set of values:
  • Expected stock return: 0.01 (meaning expected appreciation of stocks over one unit of time is 1%)
  • Standard deviation of stock returns: 0.03
  • Return on cash: 0 (with no variability)
  • Length of "ramp up" period: 10 time units
  • Length of "steady state" period: 100 time units
  • Starting value of portfolio: $1000
For a given run, I simulated each portfolio 1,000,000 times and calculated the mean and standard deviation of the final portfolio values. Over thirty runs, I observed the following ranges of values:

Dollar-cost averaging:
Means: 2833.938-2836.723
Stddevs: 879.314-883.154
Lump-sum investment:
Means: 2834.745-2837.861
Stddevs: 863.836-866.589

As you can see, the means fall very close to each other, but the standard deviations of the lump-sum approach are lower. Perhaps not hugely lower, but clearly there is a difference that is not random noise.
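A sketch of this simulation in Python (the linear ramp schedule and the variable names are my assumptions, so exact numbers may differ slightly from those reported above; the sample count is also reduced from 1,000,000 for speed):

```python
import numpy as np

rng = np.random.default_rng(0)

MU, SIGMA = 0.01, 0.03   # per-period stock return: mean and std dev
RAMP, HOLD = 10, 100     # ramp-up and steady-state lengths
START = 1000.0           # starting portfolio value
N_SIMS = 50_000          # paths per portfolio (the post used 1,000,000)

T = RAMP + HOLD
# Assumed DCA schedule: linear ramp from 1/RAMP up to 1, then all stocks.
dca_alloc = np.concatenate([np.arange(1, RAMP + 1) / RAMP, np.ones(HOLD)])
# Lump-sum allocation X = the time average of the DCA allocation.
lump_alloc = np.full(T, dca_alloc.mean())

def final_values(alloc, n=N_SIMS):
    """Final values of n portfolios rebalanced to alloc[t] stocks each period."""
    stock_returns = rng.normal(MU, SIGMA, size=(n, T))
    # Cash returns 0, so per-period portfolio growth is 1 + alloc * return.
    return START * (1.0 + alloc * stock_returns).prod(axis=1)

dca = final_values(dca_alloc)
lump = final_values(lump_alloc)
print(f"DCA:  mean={dca.mean():8.1f}  std={dca.std():7.1f}")
print(f"Lump: mean={lump.mean():8.1f}  std={lump.std():7.1f}")
```

The means should come out nearly identical while the lump-sum standard deviation comes out lower, matching the ranges above.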

4 comments:

Unknown said...

"This portfolio will start with X% of its funds allocated to stock and maintain exactly that percentage for the full duration."

So you're constantly rebalancing, which will cause you to sell stock when it's high(er) and buy it when it's low(er). That is dollar cost averaging. No wonder it works so well. :)

You should keep the amount of cash constant, not the percentage of cash.

Dangerhorse said...

Rebalancing is not the same as dollar-cost averaging. Dollar-cost averaging is a gradual shift of money from one asset class to another. Over time, your allocation to the target asset class will increase, whether measured in absolute dollar terms or in percentage terms (barring something drastic like a stock market crash). Rebalancing, in general, means buying and selling so as to hold the percentage allocation to each asset class fixed.

"You should keep the amount of cash constant, not the percentage of cash."

That wouldn't make sense given what I'm trying to do: the comparison only works if the lump-sum portfolio's average percentage allocation to stocks matches the DCA portfolio's, which requires holding the percentage fixed, not the dollar amount.

Unknown said...

Great posts! It is really fun thinking about all the probabilities and implications.

I'm curious how you might alter the strategy and the simulations under the following circumstances:

A) The ramp-up period is much shorter than the hold period, e.g., ramp for a month, hold for 10 years.

B) Every transaction costs some fixed quantity.

I'd be more than willing to run some simulations...

Dangerhorse said...

I'm confident that the length of the ramp-up period relative to the holding period doesn't matter, although I haven't proven that here (or rerun the simulation), so take it with a grain of salt. You can also look at Constantinides' argument (my next post) for a more general refutation of DCA.

Adding in transaction costs would of course just strengthen the case against DCA.