Mean-Variance Optimization (MVO), as introduced by Markowitz (1952), is often presented as an elegant but impractical theory. MVO is "an unstable and error-maximizing" procedure (Michaud, 1989) and "is nearly always beaten by simple 1/N portfolios" (DeMiguel, 2007). And to quote Ang (2014): "Mean-variance weights perform horribly… The optimal mean-variance portfolio is a complex function of estimated means, volatilities, and correlations of asset returns. There are many parameters to estimate. Optimized mean-variance portfolios can blow up when there are tiny errors in any of these inputs…".

In our opinion, MVO is a great concept, but previous studies were doomed to fail because they allowed short sales and used poorly specified estimation horizons.

**For example, Ang used a 60-month formation period to estimate means and variances, while Asness (2012) clearly demonstrated that prices mean-revert at this time scale, so the best assets of the past often become the worst assets of the future**.

In this paper we apply short lookback periods (a maximum of 12 months) to estimate the MVO parameters in order to best harvest the momentum factor. In addition, we introduce common-sense constraints, such as long-only portfolio weights, to stabilize the optimization. We also introduce a public implementation of Markowitz's Critical Line Algorithm (CLA), programmed in R, to handle the case when the number of assets is much larger than the number of return observations in the lookback window.
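The core idea above — estimate means and covariances from a short (12-month) window, then maximize mean-variance utility subject to long-only, fully invested weights — can be sketched as follows. This is an illustrative Python sketch, not the paper's R/CLA implementation: the function name `caa_weights`, the quadratic-utility objective, and the use of a generic SLSQP solver in place of the Critical Line Algorithm are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def caa_weights(returns, risk_aversion=5.0):
    """Long-only mean-variance weights from a short lookback window.

    returns: T x N array of periodic (e.g. monthly) asset returns,
             where T is the lookback length (e.g. 12 months).
    risk_aversion: illustrative quadratic-utility penalty (assumed, not
                   a parameter from the paper).
    """
    mu = returns.mean(axis=0)               # estimated mean returns
    cov = np.cov(returns, rowvar=False)     # estimated covariance matrix
    n = returns.shape[1]

    # Maximize w'mu - (lambda/2) w'Cov w  <=>  minimize its negative.
    def neg_utility(w):
        return -(w @ mu - 0.5 * risk_aversion * (w @ cov @ w))

    constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]  # fully invested
    bounds = [(0.0, 1.0)] * n                                        # long-only

    res = minimize(neg_utility, np.full(n, 1.0 / n),
                   method="SLSQP", bounds=bounds, constraints=constraints)
    return res.x
```

Note that the long-only bounds do double duty: they rule out the extreme short positions that make unconstrained MVO "blow up", acting as an implicit shrinkage on the estimated inputs.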

We call our momentum-based, long-only MVO model Classical Asset Allocation (CAA) and compare its performance against the simple 1/N equal-weighted portfolio using various global multi-asset universes over a century of data (Jan 1915-Dec 2014). At the risk of spoiling the ending, we demonstrate that CAA always beats the simple 1/N model by a wide margin.
