A new empirical paper compares methods for estimating “beta”, i.e. the sensitivity of individual asset prices to changes in a broad market benchmark. It analyzes a large range of stocks and more than 50 years of history. The findings point to a useful set of initial default rules for beta estimation: [i] use a lookback window of about one year, [ii] apply an exponential moving average to the observations in the lookback window, and [iii] adjust the statistical estimates by reasonable theoretical priors, such as the similarity of betas for assets with similar characteristics.

Hollstein, Fabian, Marcel Prokopczuk, and Chardin Wese Simen (2017), “How to Estimate Beta?”

The post ties in with SRSV’s lectures on information efficiency.
Below are excerpts from the paper. Emphasis and italics have been added.

The point of the analysis

“Researchers and practitioners face many choices when estimating an asset’s sensitivities toward risk factors, i.e., betas…We study the impact that these choices, e.g., different data sampling frequencies, estimation windows, forecast adjustments, and forecast combinations, have on estimates for beta.”

“We use a large cross-section of stocks and more than 50 years of data to comprehensively study the estimation of beta… We obtain daily data on stock returns, prices, and shares outstanding from the Center for Research in Security Prices (CRSP). We use all stocks traded on the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), and the National Association of Securities Dealers Automated Quotations (NASDAQ). We start our sample period in January 1963 and end it in December 2015.”

“To evaluate predictions for beta, we…use the realized beta…We evaluate the predictability for realized beta by computing the average root mean squared error (RMSE) of all approaches.”
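The evaluation setup described above can be sketched in a few lines. This is an illustrative reconstruction using numpy and simulated returns, not the paper's code; `realized_beta` and `rmse` are hypothetical helper names.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate one year of daily market returns and a stock with true beta = 1.2
n_days = 252
true_beta = 1.2
mkt = rng.normal(0.0, 0.01, n_days)
stock = true_beta * mkt + rng.normal(0.0, 0.01, n_days)

def realized_beta(stock_ret, mkt_ret):
    """Realized beta: sample covariance with the market over market variance."""
    cov = np.cov(stock_ret, mkt_ret, ddof=1)
    return cov[0, 1] / cov[1, 1]

def rmse(forecasts, realized):
    """Root mean squared error of beta forecasts against realized betas."""
    forecasts = np.asarray(forecasts, dtype=float)
    realized = np.asarray(realized, dtype=float)
    return np.sqrt(np.mean((forecasts - realized) ** 2))

beta_hat = realized_beta(stock, mkt)
```

In the paper, the RMSE is averaged across many stocks and forecast dates; the sketch shows the per-forecast building blocks only.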

The choice of lookback window

“An estimate based on a short historical window delivers a more timely conditional estimate. On the other hand, estimates based on a small sample are prone to measurement error. Starting with daily data, we find that the average value-weighted RMSE is highest for the 1-month horizon. It falls gradually up to the 12-month horizon and begins to rise again for longer estimation windows…We find that a historical window of 1 year typically yields the lowest average prediction errors.”
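The timeliness-versus-noise trade-off can be illustrated by estimating beta over trailing windows of different lengths. A minimal numpy sketch with simulated data; the window sizes (in trading days) and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ols_beta(stock_ret, mkt_ret):
    # Slope of regressing stock returns on market returns (with intercept)
    x = mkt_ret - mkt_ret.mean()
    y = stock_ret - stock_ret.mean()
    return (x @ y) / (x @ x)

def trailing_beta(stock_ret, mkt_ret, window):
    # Beta estimated from the most recent `window` observations only
    return ols_beta(stock_ret[-window:], mkt_ret[-window:])

rng = np.random.default_rng(1)
n_days = 1260  # about 5 years of daily data
mkt = rng.normal(0.0, 0.01, n_days)
stock = 0.9 * mkt + rng.normal(0.0, 0.01, n_days)

# Candidate lookbacks: ~1 month, ~1 year, ~3 years of trading days
betas = {w: trailing_beta(stock, mkt, w) for w in (21, 252, 756)}
```

With a constant true beta, longer windows give less noisy estimates; when beta drifts over time, short windows track it better but with more error, which is the trade-off the paper resolves at roughly one year.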

“We examine the impact of different weighting schemes. Conceptually, exponentially weighting past observations could deliver a possible solution to the conditionality versus sample size trade-off because one can…place higher weight on more recent observations to get a conditional estimate and use a long historical window to reduce measurement noise…To be precise, we estimate [the beta] with weighted least squares using the [exponential] weights…Indeed, we find that exponentially weighting the observations yields significantly more precise estimates for beta.”
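A weighted-least-squares beta with exponential weights can be sketched as follows. The halflife parameterization of the decay is an assumption for illustration, not necessarily the paper's exact weighting scheme.

```python
import numpy as np

def ewls_beta(stock_ret, mkt_ret, halflife):
    """Weighted least squares beta with exponentially decaying weights.

    More recent observations get higher weight; `halflife` (in observations)
    sets how fast older data is discounted.
    """
    n = len(mkt_ret)
    ages = np.arange(n - 1, -1, -1)      # age 0 = most recent observation
    w = 0.5 ** (ages / halflife)         # weight halves every `halflife` obs
    w /= w.sum()
    # Weighted means, then the weighted regression slope
    xm = np.average(mkt_ret, weights=w)
    ym = np.average(stock_ret, weights=w)
    x = mkt_ret - xm
    y = stock_ret - ym
    return np.sum(w * x * y) / np.sum(w * x * x)

rng = np.random.default_rng(2)
mkt = rng.normal(0.0, 0.01, 504)  # ~2 years of daily data
stock = 1.1 * mkt + rng.normal(0.0, 0.01, 504)

beta_ew = ewls_beta(stock, mkt, halflife=126)  # ~half-year halflife
```

The scheme keeps a long window for statistical power while letting the estimate respond to recent changes in beta, which is exactly the trade-off the quote describes.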

“Option-implied beta estimators [which need no lookback]…work particularly well in predicting future betas. While the intrinsically forward-looking nature of option-based estimators seems to be favorable, the estimators face one important shortcoming. They are only applicable for a subset of large stocks with active options markets.”

The choice of frequency

“We find that the data frequency should be as high as possible, i.e., estimators based on daily data outperform those based on monthly or quarterly data…We find that low-frequency estimators, i.e., those based on monthly and quarterly data, yield very high average RMSEs, which are each significantly higher…about 80% of the time.”
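The cost of low-frequency sampling is easy to see by estimating beta from the same simulated history at daily and "monthly" frequency. A numpy sketch under the assumption that monthly returns can be proxied by summing 21 daily returns:

```python
import numpy as np

def beta(mkt_ret, stock_ret):
    # Beta as covariance with the market over market variance
    c = np.cov(stock_ret, mkt_ret, ddof=1)
    return c[0, 1] / c[1, 1]

rng = np.random.default_rng(3)
n_days = 252 * 5  # 5 years of daily data
mkt_d = rng.normal(0.0, 0.01, n_days)
stock_d = 1.0 * mkt_d + rng.normal(0.0, 0.01, n_days)

# Aggregate into 21-day "monthly" returns by summing daily returns
mkt_m = mkt_d.reshape(-1, 21).sum(axis=1)
stock_m = stock_d.reshape(-1, 21).sum(axis=1)

beta_daily = beta(mkt_d, stock_d)    # 1,260 observations
beta_monthly = beta(mkt_m, stock_m)  # only 60 observations
```

Over the same 5-year span, the daily estimator uses 1,260 observations versus 60 for the monthly one, so its sampling error is far smaller; this is why the paper finds daily estimators dominate monthly and quarterly ones.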

“Using high-frequency data [of less than a day], betas can be estimated more precisely for the firms of the S&P 500. However, the same shortcoming as for option-implied estimators applies for estimators relying on high-frequency data: they are only reliable for the subset of the most liquid stocks.”

The choice of prior beliefs

“We examine the impact of imposing priors for the beta estimates. The idea behind this approach is that the beta estimate of a stock should not be too dissimilar to that of other stocks with similar characteristics. We find that the simple shrinkage adjustments [modification of statistical estimates by prior beliefs]…yield improvements for the simple historical estimator.”

“We obtain a posterior belief of beta by combining the historical estimate with a prior in the following way:… We use as priors (i) the cross-sectional average beta…(ii) the cross-sectional average beta of firms in the same… industry sector and (iii) the fundamentals-based prior of Cosemans et al. (2016)…The degree of shrinkage depends on the relative precisions of the historical estimate and the prior.”
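The precision-weighted shrinkage described above can be sketched in a few lines. The standard errors and prior values below are made-up illustrative inputs; the paper's priors (cross-sectional, industry, and fundamentals-based averages) would be plugged in the same way.

```python
def shrink_beta(beta_hist, se_hist, beta_prior, se_prior):
    """Shrink a historical beta estimate toward a prior.

    Each component is weighted by its precision (inverse variance),
    so a noisy historical estimate is pulled more strongly to the prior.
    """
    prec_hist = 1.0 / se_hist ** 2
    prec_prior = 1.0 / se_prior ** 2
    w = prec_hist / (prec_hist + prec_prior)
    return w * beta_hist + (1.0 - w) * beta_prior

# A noisy historical estimate of 1.6 shrunk toward a cross-sectional
# average beta of 1.0 (illustrative numbers)
post = shrink_beta(beta_hist=1.6, se_hist=0.4, beta_prior=1.0, se_prior=0.2)
```

The posterior always lies between the historical estimate and the prior, and moves toward whichever is measured more precisely.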

“We investigate forecast combinations. We examine simple, regression-based, and Bayesian combinations. We find that a simple forecast combination of an exponentially weighted and a prior-based historical estimator yields the lowest average prediction errors overall. However, more elaborated combination approaches perform considerably worse, especially if we combine many individual models.”
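The winning "simple forecast combination" is just an equal-weighted average of the component forecasts, sketched below (illustrative input values):

```python
def combine_forecasts(forecasts):
    """Equal-weight forecast combination: the plain average of the inputs.

    The paper finds this simple scheme beats more elaborate
    regression-based and Bayesian weightings out of sample.
    """
    return sum(forecasts) / len(forecasts)

# e.g. an exponentially weighted estimate and a prior-shrunk historical
# estimate (hypothetical values)
combined = combine_forecasts([1.05, 1.15])
```

Equal weights avoid estimating combination weights from noisy data, which is a common reason simple averages outperform fitted combinations.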

Ralph Sueppel is founder and director of SRSV Ltd, a research company dedicated to socially responsible macro trading strategies. He has worked in economics and finance for almost 25 years for investment banks, the European Central Bank and leading hedge funds. At present he is head of research and quantitative strategies at Macrosynergy Partners.