Abstract
The performance of investment managers is predominantly evaluated against targeted benchmarks, such as stock, bond or commodity indices. However, most professional databases do not retain the time series of companies that have disappeared, and do not necessarily track the changing composition of these benchmarks. Consequently, standard tests of performance suffer from the “look-ahead benchmark bias”: a given strategy is naively back-tested against the assets constituting the reference benchmark at the end of the testing period (i.e., now), rather than at the beginning of that period.
We report that the “look-ahead benchmark bias” can exhibit a surprisingly large amplitude for portfolios of common stocks (up to 8% per annum when the S&P 500 is taken as the benchmark), whereas most previous studies have emphasized the related survivorship biases in the performance of mutual and hedge funds, for which the biases can be expected to be even larger. Using the CRSP database from 1926 to 2006 and the running top-500 US capitalizations, we demonstrate that this bias leads to a gross overestimation of performance metrics such as the Sharpe ratio, together with an underestimation of risk as measured, for instance, by peak-to-valley drawdowns. We also demonstrate the presence of a significant bias in the estimation of the survivorship and look-ahead biases studied in the literature. We propose a general methodology for testing the properties of investment strategies, based on random strategies subject to the same investment constraints.
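To make the mechanism concrete, the following minimal sketch illustrates the bias on synthetic data. It is not the paper's code: the paper uses the CRSP database and the running top-500 US capitalizations, whereas here the universe size, sample length, and drift/volatility parameters are all illustrative assumptions. The sketch back-tests the same equal-weight strategy against the top-500 universe selected at the start of the period (point-in-time) versus at the end of the period (look-ahead), and compares annualized return, Sharpe ratio, and maximum drawdown:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic market: all sizes and parameters below are assumptions.
n_stocks, n_days, n_top = 2000, 2520, 500   # ~10 years of daily data

# Heterogeneous drifts and volatilities: some stocks prosper, others decline.
mu = rng.normal(0.0, 6e-4, n_stocks)              # daily drift
sigma = rng.uniform(0.01, 0.03, n_stocks)         # daily volatility
rets = mu + sigma * rng.standard_normal((n_days, n_stocks))

caps0 = rng.lognormal(mean=0.0, sigma=1.0, size=n_stocks)  # initial caps
caps = caps0 * np.cumprod(1.0 + rets, axis=0)              # cap paths

def stats(port_rets):
    """Annualized return, Sharpe ratio (risk-free rate taken as 0), and
    peak-to-valley maximum drawdown of a daily return series."""
    ann = (1.0 + port_rets).prod() ** (252 / len(port_rets)) - 1.0
    sharpe = np.sqrt(252) * port_rets.mean() / port_rets.std()
    wealth = np.cumprod(1.0 + port_rets)
    max_dd = (1.0 - wealth / np.maximum.accumulate(wealth)).max()
    return ann, sharpe, max_dd

# Point-in-time universe: the top n_top capitalizations at the START of the
# test period, as they would have been known to an investor at that time.
start_members = np.argsort(caps[0])[-n_top:]
# Look-ahead universe: the top n_top capitalizations at the END of the test
# period ("now"), i.e. a set of ex-post winners and survivors.
end_members = np.argsort(caps[-1])[-n_top:]

for label, members in [("point-in-time", start_members),
                       ("look-ahead   ", end_members)]:
    port = rets[:, members].mean(axis=1)   # equal weight, rebalanced daily
    ann, sr, dd = stats(port)
    print(f"{label}: ann. return {ann:6.1%}  Sharpe {sr:5.2f}  max DD {dd:5.1%}")
```

In this toy setting, selecting the universe from end-of-period capitalizations conditions on ex-post winners, so the look-ahead portfolio mechanically shows a higher return and Sharpe ratio, and typically a shallower drawdown, than the point-in-time portfolio. This is the same qualitative effect that the paper quantifies at up to 8% per annum for the S&P 500.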