
Ten things investors should know about nowcasting


Nowcasting in financial markets is mainly about forecasting forthcoming data reports, particularly GDP releases. However, nowcasting models are more versatile and can be used for a range of market-relevant information, including inflation, sentiment, weather, and harvest conditions. Nowcasting is about information efficiency and is particularly suitable for dealing with big messy data. The underlying models typically condense large datasets into a few underlying factors. They also tackle mixed frequencies of time series and missing data. The most popular model class for nowcasting is factor models: there are different categories of these that produce different results. One size does not fit all purposes. Also, factor models have competitors in the nowcasting space, such as Bayesian vector autoregression, MIDAS models and bridge regressions. The reason why investors should understand their nowcasting models is that they can do a lot more than just nowcasting: most models allow tracking latent trends, spotting significant changes in market conditions, and quantifying the relevance of different data releases.

The summary below is based on a range of sources that are linked next to the quotes.

Italicized text and text in brackets have been added. Some mathematical formulas have been paraphrased in natural language.

This post ties in with this site’s summary on quantitative methods and macro information efficiency.

1. Nowcasting is actually forecasting

The main purpose of nowcasting is forecasting near-term information flow. Specifically, it is an automated process for predicting what forthcoming data reports may show, based on advance information and an appropriate dynamic model. The process is called nowcasting because the predicted report typically refers to a period that includes the present, the near future, or the near past. However, nowcasting does not generally track concurrent fundamental states of the economy or market expectations thereof.

“The basic principle of now-casting is the exploitation of the information which is published early and possibly at higher frequencies than the target variable of interest in order to obtain an ‘early estimate’ before the official figure becomes available.” [Bańbura, Giannone, Modugno and Reichlin]

2. Nowcasting has wide application in economics, finance, and beyond

The main application of nowcasting in financial markets is the prediction of GDP reports, based on previously released higher-frequency data. Many investors, therefore, use the term nowcasting synonymously with tracking current growth. However, nowcasting was originally developed in meteorology and can be applied to a range of conditions that are relevant for markets, such as labor markets, inflation, weather, and agricultural harvest conditions (including specifics such as plant-available soil water!).

“Now-casting is relevant in economics because key statistics on the present state of the economy are available with a significant delay. This is particularly true for those collected on a quarterly basis, with Gross Domestic Product (GDP) being a prominent example…[However,] now-casting can also be meaningfully applied to other target variables revealing particular aspects of the state of the economy and thereby followed closely by markets. An example is labor market variables.” [Bańbura, Giannone, Modugno and Reichlin]

“Our nowcasting model…aims to provide more timely information about measured unemployment. At monthly intervals during each quarter the…model predicts the headline unemployment rate prior to the release [using] high frequency administrative data on benefit receipt and filled jobs.” [Rea and Maloney]

“Our inflation nowcasts are produced with a model that uses a small number of available data series at different frequencies, including daily oil prices, weekly gasoline prices, and monthly CPI and PCE inflation readings. The model generates nowcasts of monthly inflation, and these are combined for nowcasting current-quarter inflation.” [Federal Reserve Bank of Cleveland]

3. Nowcasting is about information cost saving

In economics and finance, there are too many potentially relevant high-frequency indicators to keep track of. Almost every day reveals high-frequency data that are related to evidently important lower-frequency reports, such as corporate earnings, national accounts, or harvests. Nowcasting models allow settling on a methodology upfront, feeding many data series regularly into one pipeline, and then tracking only a few or even a single estimated indicator. This mimics the way in which financial markets have long evaluated the data flow but requires less ad-hoc, intuitive, or chart-based analysis.

“The Federal Reserve Bank of New York…nowcasting model extracts the latent factors that drive the movements in the data and produces a forecast of each economic series…When the actual release for that series differs from the model’s forecast, this ‘news’ impacts the nowcast of GDP growth. This approach formalizes key features of how market participants and policymakers have traditionally produced forecasts, a process that involves monitoring many data releases, forming expectations about them, and then revising the assessment of the state of the economy whenever facts differ from those expectations. The model combines in a unified framework a variety of approaches developed over time for monitoring economic conditions.” [Bok, Caratelli, Giannone, Sbordone, and Tambalotti]

4. Nowcasting relies on models that automate the handling of “big data”

The challenge for financial markets is handling large and increasing quantities of data. Effectively, nowcasting is just a name for a range of models that automate the handling of a broad range of data series and condense them into a small set of informative data points. The trick is to know what information to retain and what to discard.

“Big data was a challenge to macroeconomists well before the collection of more granular data became pervasive in other disciplines…[The] trickling of information over time is often referred to as the data flow, but it is actually less smooth than the term might suggest…In general, indicators released closer to their reference period are bound to be less accurate. Given the number of these releases, and the hundreds of statistics that they often include, designing an approach…of accurately tracking the evolution of the economy in real time… is a big data challenge….
Real-time monitoring of macroeconomic conditions has become the full-time job of dedicated economists at central banks, government agencies and the corporate world, who sift through big and complex data to distil all relevant information…New methodologies in time-series econometrics developed over the past two decades have made possible the construction of automated platforms for monitoring macroeconomic conditions in real time…Because of the emphasis on the present, they dubbed it ‘nowcasting,’ a term originally used in meteorology for forecasting the weather in the present and in the next few hours.” [Bok, Caratelli, Giannone, Sbordone, and Tambalotti]

“From an econometric perspective, estimation is challenging whenever the number of parameters is large relative to the number of observations…Modelling the interaction among a large number of variables leads to a proliferation of parameters: that implies large estimation uncertainty which makes the results from traditional tools unreliable and unstable. This fact is often referred to as the ‘curse of dimensionality’. The modeler faces a trade-off between excessive simplicity (leading to misspecification) and excessive complexity (leading to instabilities). The econometrics of big data aims at turning the curse of dimensionality into a blessing by capturing in a parsimonious manner the salient features of the interactions among many series.” [Bok, Caratelli, Giannone, Sbordone, and Tambalotti]

5. Most nowcast models clean up a big data mess through “dimensionality reduction”

Statistically, dimensionality reduction is simply the process of reducing the dimension of the feature (explanatory variables) set. The relevant state of the world is expressed in a few condensed series rather than in many original ones. This means that, rather than following a wide array of data series, the model focuses on a few underlying factors that explain most of the variation in all these data.
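
As a minimal illustration of the principle (not of any particular institution’s model), the sketch below simulates a panel of 50 correlated monthly series driven by three common factors and condenses it into three principal components via a singular value decomposition. The panel, the factor count, and the use of numpy are assumptions for the example only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated panel: 120 months x 50 indicators, driven by 3 common factors (toy data)
T, N, K = 120, 50, 3
factors = rng.standard_normal((T, K))
loadings = rng.standard_normal((K, N))
panel = factors @ loadings + 0.5 * rng.standard_normal((T, N))

# Standardize each series and extract principal components via a singular value decomposition
z = (panel - panel.mean(axis=0)) / panel.std(axis=0)
u, s, vt = np.linalg.svd(z, full_matrices=False)
explained = s**2 / np.sum(s**2)

pc_factors = u[:, :K] * s[:K]   # estimated common factors: a T x 3 condensation of 50 series
print("variance share captured by 3 components:", round(float(explained[:K].sum()), 2))
```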

“Indexes of economic indicators have been constructed using dynamic factor models, which…amount essentially to using model-based aggregation schemes…the use of factor models to monitor macroeconomic conditions stems from the basic insight that information about different aspects and sectors of the economy can be considered as imperfect measures of…a few common factors…Dynamic factor models build on this basic fact to provide a parsimonious and yet suitable representation for the macroeconomic series; they are one of the main tools that macroeconomists today use to handle big data” [Bok, Caratelli, Giannone, Sbordone, and Tambalotti]

“Predictors are summarized using a small number of indexes constructed by principal component analysis. An approximate dynamic factor model serves as the statistical framework for the estimation of the indexes and construction of the forecasts…The approximate dynamic factor model…relates the variable to be forecast to a set of predictors...Forecasting is carried out in a two-step process: first the factors are estimated by principal components using the predictors, then these estimated factors are used to forecast [the target variable]. Focusing on the forecasts implied by the factors rather than on the factors themselves permits sidestepping the difficult problem of identification inherent in factor models.” [Stock and Watson]

“Bridge equations are regressions of quarterly GDP growth on a small set of pre-selected key monthly indicators. This simple modelling strategy has been popular among policy institutions…An alternative way to exploit large information [sets] consists in combining predictors in few common factors which are then used as regressors in bridge [regressions] via the Kalman filter.” [Angelini, Camba-Méndez, Giannone, Rünstler and Reichlin]
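
For concreteness, here is a bare-bones sketch of a bridge equation in the spirit of the quote above: monthly indicators are averaged to quarterly frequency, quarterly GDP growth is regressed on them by OLS, and the current quarter is nowcast after filling the unpublished months. All data are simulated, and the carry-forward fill is a simplifying assumption; in practice the missing months would be forecast with auxiliary time-series models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 96 months (32 complete quarters) of two hypothetical monthly indicators
monthly = rng.standard_normal((96, 2))

# Bridge step 1: aggregate the monthly indicators to quarterly averages
quarterly_x = monthly.reshape(32, 3, 2).mean(axis=1)

# Toy quarterly GDP growth, loosely driven by the aggregated indicators
gdp = 0.5 * quarterly_x[:, 0] + 0.2 * quarterly_x[:, 1] + 0.1 * rng.standard_normal(32)

# Bridge step 2: OLS regression of GDP growth on the aggregated indicators
X = np.column_stack([np.ones(32), quarterly_x])
beta, *_ = np.linalg.lstsq(X, gdp, rcond=None)

# Nowcast of a new quarter with only its first month published: the two missing
# months are 'forecast' here by a naive carry-forward of the last observed value
observed = rng.standard_normal((1, 2))                        # first month of the current quarter
filled = np.vstack([observed, np.repeat(observed, 2, axis=0)])
nowcast = beta[0] + filled.mean(axis=0) @ beta[1:]
print("bridge nowcast of current-quarter GDP growth:", round(float(nowcast), 2))
```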

6. Nowcasts also deal with the mixed frequencies and “jagged edge” of data series

Underlying data series for nowcasters have different frequencies and come with different lags to the observed period. Abstractly, both result in missing data, whose handling depends on the nature of the data. There is a range of estimators for the missing observations. The most elegant (albeit not necessarily the most practical) is based on the Kalman filter.

“The need to use timely information from various sources has a number of implications regarding the features of the information set underlying now-casts.

  • First, it could contain data sampled at a wide range of frequencies, from daily to annual. It is important to understand the relation between the high frequency, which for key economic concepts is unobserved, and the corresponding observed low-frequency series. The relation depends on whether the corresponding indicator is a flow or a stock variable.
  • Second, as different types of data are released in a non-synchronous manner and with different degrees of delay, the time of the last available observation differs from series to series, which results in a so-called ‘ragged’ or ‘jagged’ edge.” [Bańbura, Giannone, Modugno and Reichlin]

“The Kalman filter and smoother provide conditional expectations of the state vector on the information set…Importantly, the Kalman filter and smoother can efficiently deal with any missing observations [in low-frequency target series] and provide the conditional expectation for those. Consequently, nowcasts or forecasts can be easily obtained for the target variable and for the predictors. As in this framework the problems of mixed frequency and ragged edge are essentially missing data problems, they are easily solved by Kalman filter and smoother apparatus.” [Bańbura, Giannone, Modugno and Reichlin]

“Consider a state-space representation of a factor model, treating quarterly series as monthly series with missing observations…We [can] fill in missing observations with independently identically distributed draws from the standard normal distribution independent of the model parameters and rewrite the state-space model accordingly, so that we can apply the Kalman filter to evaluate the likelihood function.” [Mariano and Murasawa]
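
To make the point that missing observations are handled naturally by the Kalman filter, below is a deliberately simple sketch: a univariate local-level model whose filter skips the update step whenever an observation is missing, so that the state prediction serves as the conditional expectation for that period. The model and parameter values are illustrative assumptions and far simpler than the multivariate factor models discussed in the quotes.

```python
import numpy as np

def local_level_filter(y, sigma_eps=1.0, sigma_eta=0.5):
    """Kalman filter for a local-level model y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t.
    NaN observations are skipped in the update step, so the filtered state at those
    dates is the model's conditional expectation of the missing value."""
    n = len(y)
    mu = np.zeros(n)             # filtered state (level)
    p = np.zeros(n)              # filtered state variance
    mu_pred, p_pred = 0.0, 1e6   # diffuse-ish initialization
    for t in range(n):
        # prediction step
        p_pred = p_pred + sigma_eta**2
        if np.isnan(y[t]):
            # missing observation: no update, carry the prediction forward
            mu[t], p[t] = mu_pred, p_pred
        else:
            # update step with the Kalman gain
            k = p_pred / (p_pred + sigma_eps**2)
            mu[t] = mu_pred + k * (y[t] - mu_pred)
            p[t] = (1 - k) * p_pred
        mu_pred, p_pred = mu[t], p[t]
    return mu

# A 'monthly' series with a jagged edge: the last two observations are not yet released
y = np.array([0.2, 0.5, 0.4, 0.8, 1.1, np.nan, np.nan])
print(local_level_filter(y))   # the last two entries are nowcasts of the unreleased values
```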

“In practice…one encounters various data irregularities, including occasionally missing observations, unbalanced panels, and mixed frequency (for example, monthly and quarterly) data. In this case, a modification of standard principal component estimation is necessary.

  • Suppose some observations [of the predictor data set] are missing. Then, during iteration, the elements of the estimated balanced panel are constructed as the available values if observed and as a prediction [based on the estimated principal components] otherwise. The estimate of principal components is then updated.
  • A series that is observed quarterly and is a stock variable would be the point-in-time level of a variable at the end of the quarter, say, the level of inventories at the end of the quarter…It is treated as a monthly series with missing observations in the first and second months of the quarter [which are estimated as above].
  • A quarterly flow variable is the average (or sum) of unobserved monthly values…It can be treated as follows: The unobserved monthly series is measured only as the time aggregate [i.e. sum or average] for the end-of-quarter months and missing for all other values. [Estimation of the missing variables is similar to above, albeit under consideration of the aggregation].” [Stock and Watson]
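
The modified principal components estimation described above can be sketched in a few lines: missing entries are initialized (here with column means), principal components are computed on the completed panel, the missing entries are replaced by the implied common-component predictions, and the loop repeats. The simulated panel, the share of missing values, the fixed number of factors, and the fixed number of iterations are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated panel with a 2-factor structure and scattered missing values
T, N, K = 100, 20, 2
panel = rng.standard_normal((T, K)) @ rng.standard_normal((K, N)) + 0.3 * rng.standard_normal((T, N))
mask = rng.random((T, N)) < 0.1          # ~10% of entries treated as unreleased
data = np.where(mask, np.nan, panel)

# EM-style iteration: impute, estimate principal components, re-impute
filled = np.where(mask, np.nanmean(data, axis=0), data)
for _ in range(20):
    u, s, vt = np.linalg.svd(filled - filled.mean(0), full_matrices=False)
    common = (u[:, :K] * s[:K]) @ vt[:K] + filled.mean(0)   # common-component prediction
    filled = np.where(mask, common, data)                   # keep observed values, update missing

print("imputation RMSE:", np.sqrt(np.mean((filled[mask] - panel[mask])**2)).round(2))
```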

7. The most popular model class for nowcasters is factor models

Factor models have been the standard dimensionality reduction technique for a century but have become more relevant with the rise of big data. The simple underlying idea is that a wide range of observed variables is governed by a small set of factors.

“Factor models are a dimension reduction technique that exists in the statistical literature since almost one hundred years and were originally introduced in the field of psychometrics. The recent availability of large datasets made them increasingly popular in the last twenty years. They are nowadays commonly used by public and private institutions as central banks and investment banks for analysing large panels of time series. Recently a class of models known as approximate dynamic factor models has been proposed in the literature [and] can be considered as a pioneer dimension reduction technique in ‘big data econometrics’.” [Barigozzi]

“Factor models have a long tradition in the statistical and econometric literature. However, the application to big data is relatively recent. The earliest contributions…introduced principal components estimators for large dynamic factor models in economics…A dynamic factor model assumes that many observed variables are driven by a few unobserved dynamic factors while the features that are specific to individual series, such as measurement errors, are captured by idiosyncratic errors.” [Bok, Caratelli, Giannone, Sbordone, and Tambalotti]

8. There are various types of factor models that are used for nowcasting

To understand a nowcaster that uses a factor model, one should read the ‘small print’. Different models can produce very different results. There are two key questions that help classify a nowcasting factor model. First, do factors only have a contemporaneous effect on observed data (features), as opposed to lagged relations? If the answer is yes, it is a static factor model; if no, it is a dynamic factor model. Second, do the factors explain all correlation between the observed features? If the answer is yes, one has an exact factor model; if it is no, one has an approximate factor model. For example, the popular factor model version with principal components analysis for dimensionality reduction and subsequent bridge regression for forecasting the target variable would be a static approximate factor model.

“In general, a factor model for a high-dimensional vector of time series is characterized by: [1] few latent factors, representing comovements, and [2] idiosyncratic terms, representing measurement errors or individual/local features…we always assume that common and idiosyncratic component are uncorrelated…

  • [The] static versus dynamic…distinction is based on the effect of the factors on the data. In the static factor model…the factors…have only a contemporaneous effect on the observed features. They are called static factors…In the dynamic factor model the factors have effect on the observed features through their lags too. They are called dynamic factors.
  • Another important distinction between classes of factor models is…exact versus approximate…related to the idiosyncratic component [of the observed features]. In the exact factor model, the vector of idiosyncratic components has no cross-sectional dependence…It has a diagonal covariance matrix…In the approximate factor model, the vector of idiosyncratic components is allowed to have mild cross-sectional dependence, thus a covariance matrix which is not necessarily diagonal…The approximate dynamic factor model is the most realistic but it is hard to deal with.” [Barigozzi]

“The exact factor model was introduced by Spearman (1904). The model assumes that the idiosyncratic components are not correlated. While the core assumption behind factor models is that the two processes, factor process and idiosyncratic component, are orthogonal to each other, the exact static factor model further assumes that the idiosyncratic components are also orthogonal to each other, so that any correlation between the observable variables is solely due to the common factors. Both orthogonality assumptions are necessary to ensure the identifiability of the model.
Approximate factor models…relax…[the] very strict assumption of no cross-correlation between the idiosyncratic component[s]. They allow the idiosyncratic components to be mildly cross-correlated and provide a set of conditions ensuring that approximate factor models are asymptotically identified…Although originally developed in the finance literature, the approximate static factor model made its way into macroeconometrics in the early 2000’s.” [Doz and Fuleky]

“[One can] extend approximate factor models by considering dynamic factor models of large dimensions and introduce different methods for the estimation of this type of model. These models are referred to as ‘generalized’ because they combine both dynamic and approximate structures, i.e., they generalize exact dynamic factor models by assuming that the number of variables tends to infinity and by allowing idiosyncratic processes to be mutually correlated.” [Barhoumi, Darne, and Ferrara]

“In a dynamic factor model, each series is modeled as the sum of two orthogonal components: the first, driven by a handful of unobserved factors captures the joint dynamics and the second is treated as an idiosyncratic residual…The most common version in the context of now-casting specifies that the high-frequency variables have a factor structure and that the factors follow a vector autoregressive process…The now-casts are then obtained via a regression…on temporally aggregated factor estimates.” [Bańbura, Giannone, Modugno and Reichlin]

“Dynamic factor models are particularly suitable for nowcasting and monitoring macroeconomic conditions in real time. This is because these models are naturally cast in a state-space form and hence inference can be performed using Kalman filtering techniques, which in turn provide a convenient and natural framework for handling the irregularities of the data in real time (i.e., mixed frequencies and non-synchronicity of the data releases) and updating the predictions. Indeed, the Kalman filter digests incoming data in a coherent and intuitive way: it updates the predictions of the model recursively by weighting the innovation components of incoming data on the basis of their timeliness and their quality. Moreover, as the model produces forecasts for all variables simultaneously, the analysis of the flow of data does not require piecing together many separate, unrelated models.” [Bok, Caratelli, Giannone, Sbordone, and Tambalotti]

“In dynamic factor models using likelihood based methods and Kalman filtering techniques, the common factors and the idiosyncratic components are modeled as Gaussian autoregressive processes, which account for their serial correlation and persistence…In practice, the estimates can be conveniently computed iteratively using the Kalman smoother and expectation-maximization algorithm. The algorithm is initialized by computing principal components, and the model parameters are estimated by OLS regression, treating the principal components as if they were the true common factors. This is a good initialization especially with big data given that principal components are reliable estimates of the common factors. In the second step, given the estimated parameters, an updated estimate of the common factors is obtained using the Kalman smoother. Stopping at the second step gives the two-step estimate of the common factors.” [Bok, Caratelli, Giannone, Sbordone, and Tambalotti]
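
For readers who prefer not to code the two-step or EM estimation from scratch, the statsmodels package implements a mixed-frequency dynamic factor model with EM estimation initialized by principal components. The sketch below assumes a recent statsmodels version (0.12 or later) and toy data; argument and attribute names should be checked against the installed version.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.dynamic_factor_mq import DynamicFactorMQ

rng = np.random.default_rng(5)

# Toy data: 4 monthly indicators and one quarterly target, with a jagged edge
m_idx = pd.period_range("2015-01", periods=108, freq="M")
q_idx = pd.period_range("2015Q1", periods=36, freq="Q")
common = rng.standard_normal(108).cumsum() * 0.1
monthly = pd.DataFrame({f"x{i}": common + 0.3 * rng.standard_normal(108) for i in range(4)},
                       index=m_idx)
quarterly = pd.Series(common.reshape(36, 3).mean(axis=1) + 0.2 * rng.standard_normal(36),
                      index=q_idx, name="gdp_growth")
monthly.iloc[-1, 2:] = np.nan            # some indicators not yet released (ragged edge)
quarterly.iloc[-1] = np.nan              # last quarter's GDP not yet published

# One common factor with AR(2) dynamics; EM estimation starts from principal components
model = DynamicFactorMQ(monthly, endog_quarterly=quarterly,
                        factors=1, factor_orders=2, idiosyncratic_ar1=True)
res = model.fit(disp=False)

print(res.factors.smoothed.tail())   # smoothed common factor (attribute name per recent versions)
print(res.forecast(steps=3).filter(like="gdp"))   # model-implied path for the quarterly target
```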

“Maximum likelihood has a number of advantages compared to the principal components and the two-step procedure. First it is more efficient for small systems. Second, it allows to deal flexibly with missing observations. Third, it is possible to impose restrictions on the parameters.” [Bańbura, Giannone, Modugno and Reichlin]

9. Factor models have rivals for nowcasting

Factor models are not the only model class used for nowcasting. Bayesian vector autoregression and partial methodologies, such as bridge regressions and MIDAS models, are used as well. While the latter two are computationally much less elegant, they can be quite practical if one wishes to exercise judgment and control over the process or has reason to be suspicious of a fully automated approach.

“Another type of model that can be cast in a state space representation is a vector autoregression (VAR). Different approaches have been considered to deal with the issue of mixed frequency. One solution…is to specify that the high frequency concepts follow a VAR…and to derive the measurement equation using the (approximate) temporal aggregation relationships between the observed variables and the target variable.” [Bańbura, Giannone, Modugno and Reichlin]

“Bayesian vector autoregression (BVAR) offers an alternative modeling framework for nowcasting and monitoring macroeconomic conditions with big data in real time. The basic idea consists in addressing the curse of dimensionality by using a parsimonious naive prior to discipline the estimation of a…densely parameterized and complex model. Vector autoregressions (VARs) are the most general linear model and are widely used in macroeconomics: every variable depends on its own past and on the past of each other variable, and the pattern of correlation of the forecast errors in different variables is left unconstrained. In BVARs, this high level of complexity is combined with a naive prior model that assumes that all the variables are independent white noise or random walks…BVARs are also suitable for nowcasting since they can be cast in a state-space form allowing for conveniently handling data in real time using filtering techniques, in the same way described for the factor model.” [Bok, Caratelli, Giannone, Sbordone, and Tambalotti]
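
The shrinkage idea behind BVARs can be illustrated with a heavily simplified sketch: each equation of a VAR(1) is estimated as the posterior mean under a normal prior centered on a random walk, with the error variance fixed at one for brevity. The tightness parameter and the toy data are assumptions, and a full BVAR implementation (Minnesota prior with lag decay, estimated error variances, state-space treatment of real-time data) is considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy monthly data: 3 persistent variables, modeled as a VAR(1) with shrinkage
T, N = 120, 3
data = np.cumsum(rng.standard_normal((T, N)) * 0.1, axis=0)

Y = data[1:]                                        # left-hand side
X = np.column_stack([np.ones(T - 1), data[:-1]])    # constant + first lag of all variables

# Prior: coefficients shrunk toward a random walk (own lag = 1, everything else = 0)
lam = 0.2                                           # overall tightness (assumption)
prior_mean = np.zeros((N + 1, N))
prior_mean[1:, :] = np.eye(N)
prior_prec = np.diag([1e-4] + [1 / lam**2] * N)     # loose prior on the constant

# Posterior mean, equation by equation (conjugate normal prior, unit error variance)
B = np.linalg.solve(X.T @ X + prior_prec, X.T @ Y + prior_prec @ prior_mean)

# One-step-ahead forecast of all three variables from the latest observation
x_last = np.concatenate([[1.0], data[-1]])
print("BVAR-style one-step forecasts:", np.round(x_last @ B, 2))
```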

“Methodologies that we label as ‘partial’ do not specify a joint model for the variable of interest and for the predictors…

  • [In] Bridge equations, the mixed frequency problem is solved by temporal aggregation of the predictors to the lower frequency. To handle the ragged edge, auxiliary models, such as ARMA or VAR, are used to forecast the predictors to close the target period of interest. This is the ‘traditional’ now-casting tool, popularly employed at central banks to obtain early estimates of GDP or its components…The now-cast and forecasts of predictors aggregated to the lower frequency…are obtained via [OLS] regression.
  • In a MIDAS type model the predictors are included in the regression at their original observation frequency…MIDAS-type regression implies that the temporal aggregation weights are data driven…Regarding the problem of ragged edge, the solution in this type of approach can be thought of as re-aligning each time series. The time series with missing observations at the end of the sample are shifted forward in order to obtain a balanced data-set with the most recent information…The MIDAS equations suffer from the curse of dimensionality problem and can include only a handful of variables.” [Bańbura, Giannone, Modugno and Reichlin]
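
A minimal unrestricted MIDAS-style regression, in the spirit of the last bullet, can be sketched as follows: each month of the quarter enters the regression separately, so the temporal aggregation weights are estimated from the data, and the ragged edge is handled by re-aligning the monthly series so that the latest three published months serve as the current quarter’s regressors. The single simulated indicator and the naive re-alignment are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy data: one monthly indicator and 32 quarters of a quarterly target
monthly = rng.standard_normal(96)
y = (0.6 * monthly.reshape(32, 3)[:, 2]
     + 0.3 * monthly.reshape(32, 3)[:, 1]
     + 0.1 * rng.standard_normal(32))

# Unrestricted MIDAS-style regression: each month of the quarter enters separately,
# so the 'temporal aggregation weights' are estimated rather than fixed in advance
X = np.column_stack([np.ones(32), monthly.reshape(32, 3)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated monthly weights:", np.round(beta[1:], 2))

# Ragged edge: if only part of the current quarter is published, one simple fix is to
# re-align (shift) the series so the regressors stay balanced, i.e. use the latest
# three published months as the current quarter's regressors
latest_three = monthly[-3:]                     # stand-in for the re-aligned regressors
nowcast = beta[0] + latest_three @ beta[1:]
print("MIDAS-style nowcast:", round(float(nowcast), 2))
```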

10. Nowcasting models can be used for a lot more than nowcasting

Various intermediate products of nowcasting models can be useful for investors. Most obviously, underlying latent factors, or smoothed versions thereof, are often related to fundamental dynamics, such as growth, inflation, earnings, and employment trends. The trends themselves and their changes are relevant across asset classes. Moreover, distinguishing between trend and transitory (distortion) factors can greatly help the interpretation of economic reports. Also, revisions of nowcasts can indicate changes in the short-term market environment. Finally, the changing relevance of high-frequency series can be used for calibrating data surprise models.

“The content of a data release includes revisions of the values of the variable in previous releases as well as the value for new reference points not previously available. The nowcast also changes when the model parameter is revised…[Hence] nowcast revision or change can be divided into two effects…[the] effect of new observations and data revisions [and the] effect of parameter revision…Each of these effects can be broken down by variable into contributions from individual variables.” [Hayashi and Tashi]
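
The decomposition of nowcast revisions into contributions from individual releases can be illustrated with a toy calculation: each release contributes its model-implied weight times its surprise (released value minus model forecast) to the overall revision. The series names, weights, and numbers below are purely hypothetical; in a real model the weights would come from the Kalman gain or regression coefficients.

```python
import numpy as np

# Hypothetical news decomposition for one nowcast update (all numbers are assumptions):
# each release contributes weight * (released value - model forecast) to the revision
releases = {"industrial_production": {"actual": 0.8, "forecast": 0.5, "weight": 0.30},
            "retail_sales":          {"actual": -0.2, "forecast": 0.1, "weight": 0.15},
            "business_survey":       {"actual": 1.1, "forecast": 1.0, "weight": 0.20}}

contributions = {name: r["weight"] * (r["actual"] - r["forecast"]) for name, r in releases.items()}
for name, c in contributions.items():
    print(f"{name:22s} news impact: {c:+.2f}")
print(f"{'total nowcast revision':22s}: {sum(contributions.values()):+.2f}")
```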

“We found that the factor structure of the US economy is different between the pre- and post-financial crisis periods. In more recent data the trend of the housing variables is the most important factor in terms of explained variance, while this factor plays a minor role before the financial crisis.” [Dieteren and van Es]

Ralph Sueppel is managing director for research and trading strategies at Macrosynergy. He has worked in economics and finance since the early 1990s for investment banks, the European Central Bank, and leading hedge funds.