Local projections (LP) are a popular methodology for the estimation of impulse responses (IRs). Compared with the traditional VAR approach, LP allow for more flexible IR estimation by imposing weaker assumptions on the dynamics of the data. The nonparametric nature of LP comes at an efficiency cost, however, and in practice the LP estimator may suffer from excessive variability. In this work, we propose an IR estimation methodology based on B-spline smoothing called smooth local projections (SLP). The SLP approach preserves the flexibility of standard LP, can substantially increase precision, and is straightforward to implement. A simulation study shows that SLP can deliver substantial gains in IR estimation over LP. We illustrate our technique by studying the effects of monetary shocks, where we highlight how SLP can easily incorporate commonly employed structural identification strategies.
IMPULSE response (IR) functions are a key tool for summarizing the dynamic effects of structural shocks on economic time series. While vector autoregressions (VARs) have traditionally been used to identify structural shocks and simultaneously recover the corresponding IRs, the rise of the narrative identification approach has popularized an alternative IR estimation method: the local projections (LP) of Jordà (2005).
In its basic formulation, the LP approach consists of running a sequence of predictive regressions of a variable of interest on a structural shock for different prediction horizons. The IR is then obtained from the sequence of regression coefficients of the structural shock. This approach has a number of advantages over VARs: LP does not impose specific dynamics on the variables in the system, does not suffer from the curse of dimensionality inherent to VARs, and can more easily accommodate nonlinearities (Auerbach & Gorodnichenko, 2012). However, in the LP framework, the IR is heavily parameterized and the IR estimator can have large variability (Ramey, 2012, 2016).
In this work, we introduce an IR estimation methodology, smooth local projections (SLP), that builds on penalized B-splines (Eilers & Marx, 1996). We model the sequence of IR coefficients as a linear combination of B-spline basis functions, and we estimate the coefficients of this linear combination with a shrinkage estimator that shrinks the IR toward a polynomial. SLP nest two important IR estimators: SLP coincide with LP when the degree of shrinkage is low and with an Almon (1965) polynomial distributed lag model when the degree of shrinkage is high. A cross-validation criterion is suggested to choose the degree of shrinkage between these two extremes.
SLP have a number of highlights. First, the methodology can substantially increase the estimation accuracy of LP while preserving flexibility. Second, SLP estimation boils down to standard ridge regression, which is straightforward to implement. Third, SLP, like standard LP, can be used to recover structural IRs in conjunction with a number of identification schemes (e.g., timing restrictions, instrumental variables).
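To make concrete the point that SLP estimation boils down to standard ridge regression, here is a minimal sketch, assuming NumPy and function names of our own choosing (this is not the authors' code). `slp_ridge` solves the generalized ridge problem min_b ||y - Xb||^2 + lam * b'Pb in closed form, and `diff_penalty` builds an Eilers-Marx-style difference penalty P = D_r'D_r whose null space consists of polynomials of degree r - 1, so heavy penalization shrinks the fitted IR toward such a polynomial:

```python
import numpy as np

def diff_penalty(k, r=2):
    """Penalty P = D_r' D_r, where D_r is the r-th difference matrix.

    The null space of P contains polynomials of degree r - 1, so heavy
    penalization shrinks the coefficients toward such a polynomial
    (r = 2 shrinks toward a line).
    """
    D = np.eye(k)
    for _ in range(r):
        D = np.diff(D, axis=0)  # each pass takes one more difference
    return D.T @ D

def slp_ridge(X, y, lam, P):
    """Generalized ridge: argmin_b ||y - X b||^2 + lam * b' P b."""
    return np.linalg.solve(X.T @ X + lam * P, X.T @ y)
```

With `lam = 0` the estimator reduces to ordinary least squares (standard LP); as `lam` grows, the fit approaches the polynomial implied by the penalty, mirroring the Almon polynomial limit discussed below.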
A simulation study is used to assess the finite sample performance of our proposed methodology. Results show that SLP delivers substantial improvements over LP or VARs for a range of DGPs calibrated to real data.
In this work, we focus on using SLP for IR point estimation. In empirical applications, IR confidence intervals are also a natural object of interest, and we propose a procedure for constructing IR confidence intervals using the SLP estimator. We do not study the theoretical properties of this procedure, but the simulation study shows that our SLP confidence intervals perform similarly to LP confidence intervals.
Finally, we illustrate our methodology by studying the effects of monetary shocks on GDP growth and inflation. For identification, we use both timing restrictions and an instrumental variable approach with the Romer and Romer (2004) narrative shocks as an instrument. While the LP-based IRs can be erratic, the SLP-based IRs are more regular and easier to interpret.
Our paper contributes to a rapidly growing macroeconomic literature that relies on LP to estimate structural impulse responses (Ramey, 2016). LP can be seen as a modern offshoot of the distributed lag (DL) literature (Sims, 1974), and SLP can be seen as a modern version of Shiller's (1973) smoothness priors for DL models. More recently, a number of working papers have proposed related strategies to obtain smoother or regularized estimates of the IR (among others, Barnichon & Matthes, 2018, and Miranda-Agrippino & Ricco, 2017), but our approach has the advantage of being as straightforward to implement as LP. Although not based on B-spline smoothing, a complementary paper is Plagborg-Møller (2016), which provides methods for optimally selecting the degree of smoothing and constructing confidence bands. Finally, our paper can be cast into the broader context of a rich and growing literature on shrinkage estimation in macroeconometrics (Ingram & Whiteman, 1994; Del Negro & Schorfheide, 2004; Hansen, 2016a).
A. Smooth Local Projections
Let y_t, x_t, and w_{i,t} for i from 1 to q be stationary time series observed from t = 1 to T. Note that the set of variables may include lagged values of y_t and x_t. We are interested in the estimation of the dynamic multiplier of y at horizon h with respect to a change in x, for h ranging from h_min to H, keeping all other variables constant. Typically, h_min is set to either 0 or 1. Also, we define w_t as the q-dimensional vector (w_{1,t}, ..., w_{q,t})'.
Note that one may also choose to apply the B-spline basis approximation to a subset of the coefficients of equation (1) rather than all of them.
Figure 1 shows the set of B-spline basis functions used throughout this work. B-splines are a basis of hump-shaped functions indexed by a set of knots. A B-spline basis function is made up of polynomial pieces of order q. The polynomial pieces join on a set of inner knots and are calibrated in such a way that derivatives up to order q - 1 are continuous at the inner knots. The B-spline basis function is nonzero over the domain spanned by q + 2 consecutive knots and zero elsewhere. The left-most inner knot is used to index the B-spline basis function, and the order of the polynomial pieces determines the order of the B-spline basis (i.e., if the polynomial pieces are of order q, the B-spline basis is said to be of order q). For illustration purposes, figure 1 highlights the B-spline basis function of knot 6, together with the inner knots used to construct this function. In this work, we use a cubic B-spline basis with equidistant knots ranging from h_min to H with unitary increments.
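For readers who want to reproduce such a basis, the Cox-de Boor recursion below is a minimal sketch (our own illustration; the function name and the NumPy dependency are assumptions, not part of the paper's code). `degree=3` gives the cubic basis, corresponding to order-3 polynomial pieces in the paper's convention:

```python
import numpy as np

def bspline_basis(x, knots, degree=3):
    """B-spline basis of the given polynomial degree on a knot sequence.

    Cox-de Boor recursion. Returns an array of shape
    (len(x), len(knots) - degree - 1); column j is the basis function
    indexed by knot j. Knots must be non-decreasing.
    """
    x = np.asarray(x, float)
    t = np.asarray(knots, float)
    # degree-0 pieces: indicator of each knot interval
    B = np.array([(t[j] <= x) & (x < t[j + 1]) for j in range(len(t) - 1)],
                 dtype=float).T
    for d in range(1, degree + 1):
        nb = len(t) - d - 1
        Bnew = np.zeros((len(x), nb))
        for j in range(nb):
            den1 = t[j + d] - t[j]
            den2 = t[j + d + 1] - t[j + 1]
            left = (x - t[j]) / den1 * B[:, j] if den1 > 0 else 0.0
            right = (t[j + d + 1] - x) / den2 * B[:, j + 1] if den2 > 0 else 0.0
            Bnew[:, j] = left + right
        B = Bnew
    return B
```

On the interior of an equidistant integer knot range, the resulting cubic basis functions are hump-shaped, nonnegative, and sum to one, as in figure 1.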
A number of comments on our proposed methodology are in order. Continuing the parallel with the DL literature, consider the case when x_t is mean zero and serially uncorrelated and the set of controls is empty. When the degree of shrinkage is negligible, the SLP estimator of the dynamic multiplier is asymptotically equivalent to the one produced by the unrestricted DL model and by standard LP. When the degree of shrinkage is large, the SLP estimator is asymptotically equivalent to the one produced by the polynomial DL model of Almon (1965). By appropriately choosing the amount of penalization, SLP may achieve an optimal balance between these two extremes. SLP can thus be seen as a modern version of Shiller's (1973) smoothness prior, which was introduced to find a suitable compromise between the unrestricted DL model and Almon's polynomial DL model. The appealing feature of our framework is that it retains linearity in the parameters, so closed-form estimators are readily available.
While we do not derive formal results on the MSE dominance of the SLP estimator over standard LP, it is important to give some insight into the limitations of shrinkage estimation. The discussion draws largely on recent results in Hansen (2016b), where the maximum likelihood estimator (MLE) is compared with a class of shrinkage estimators. First, shrinkage may improve the average MSE across several parameters, but it will rarely uniformly improve the MSE for a single parameter. Second, shrinkage works best when the individual parameter estimators are nearly uncorrelated; the scope for variance reduction through smoothing is smaller when the estimators are highly correlated, as can be the case in applications with persistent data. Finally, even under ideal conditions, subtle requirements, such as the famous James-Stein dimension condition, are often needed for shrinkage estimators to MSE-dominate the MLE.
B. Estimating Structural Impulse Responses
Identification through controls.
In the identification-through-controls case, the IR can be estimated by running regression (1) with the appropriate set of controls (see Angrist, Jordà, & Kuersteiner, 2018; Jordà & Taylor, 2016). More precisely, if the shock is observed, then the IR can be estimated simply by running equation (1), setting x_t equal to the structural shock (with w_t empty). If the shock is identified as the residual of the regression of an endogenous variable on a set of control variables, then the IR can be estimated by running equation (1), setting x_t equal to the endogenous variable and w_t equal to the set of controls. Note that additional regressors may be included in equation (1) to "mop up" the residual variance. In both settings, the coefficient on x_t captures the causal effect of the structural shock, and the IR at horizon h is given by that coefficient, which can be estimated by its sample counterpart.
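As an unsmoothed baseline, the identification-through-controls recipe amounts to a loop of per-horizon OLS regressions. The sketch below is our own minimal illustration (hypothetical names, NumPy assumed): for each horizon h it regresses y_{t+h} on the shock measure and the controls and records the shock coefficient as the IR at h.

```python
import numpy as np

def local_projection_irf(y, x, W, H):
    """Standard LP: for h = 0..H, OLS of y_{t+h} on [1, x_t, W_t].

    y, x are 1-D series of equal length; W holds the controls, one
    column per control. Returns the H+1 shock coefficients (the IR).
    """
    T = len(y)
    irf = np.zeros(H + 1)
    for h in range(H + 1):
        Z = np.column_stack([np.ones(T - h), x[:T - h], W[:T - h]])
        beta, *_ = np.linalg.lstsq(Z, y[h:], rcond=None)
        irf[h] = beta[1]  # coefficient on the shock at horizon h
    return irf
```

When x_t is the observed structural shock and W_t contains the appropriate controls, the coefficient at each horizon estimates the causal response described above.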
Interestingly, the recursive identification scheme put forward by Sims (1980) can be seen as a special case of identification through controls. Sims (1980) proposes timing restrictions on the responses of the variables of the VAR to disentangle the causal chain of events and identify the structural shocks of interest. In the LP setting, such timing restrictions can be imposed with a specific choice of the regressor x_t and the control variables w_t. Although known at least since Shapiro and Watson (1988), this point has been relatively underappreciated among LP practitioners.
As an illustration, consider a system comprising output g_t, inflation π_t, and the Fed funds rate i_t. The objective is to estimate the IR of output to a monetary shock to the Fed funds rate. Assuming that the system evolves according to a VAR of order 1 and that the monetary shocks do not affect the other variables on impact, one can recover the IR of output from the LP by setting y_{t+h} = g_{t+h}, x_t = i_t, and w_t = (g_t, π_t, g_{t-1}, π_{t-1}, i_{t-1})' for h ranging from 1 until H. Intuitively, we achieve identification by controlling for the contemporaneous values of the variables ordered before the shock of interest (in this case, output and inflation).
Identification through instruments.
Even when a shock or an appropriate set of control variables is not available, it may still be possible to recover the IR through an instrument by running a two-stage least squares regression (Stock & Watson, 2018; Plagborg-Møller & Wolf, 2018). For instance, the macroeconometrician may observe only a noisy measurement of the structural shock, x_t = s_t + ν_t, where s_t is the structural shock and ν_t is a conditionally unpredictable measurement error. In these cases, an instrument can allow recovering the effect of interest. We define an instrument to be a time series z_t that is correlated with the structural shock (relevance) and uncorrelated with the measurement error and the other structural shocks (exogeneity). Then the IR can be estimated using two-stage least squares. More precisely, in a first-stage regression, we regress x_t on the instrument z_t. In the second-stage regression, we run equation (1), setting x_t equal to the fitted value of the first-stage regression. Again, the coefficient on the fitted value captures the structural effect of the shock.
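A minimal sketch of the two-stage procedure follows (again our own illustration, with hypothetical names and NumPy assumed; `z` plays the role of the instrument and `x` the noisy shock measure):

```python
import numpy as np

def lp_iv_irf(y, x, z, W, H):
    """LP-IV impulse response via two-stage least squares.

    First stage: regress x_t on the instrument z_t and controls W_t.
    Second stage: per-horizon OLS of y_{t+h} on the fitted value and
    W_t; the IR at h is the coefficient on the fitted value.
    """
    T = len(y)
    Z1 = np.column_stack([np.ones(T), z, W])
    g, *_ = np.linalg.lstsq(Z1, x, rcond=None)
    xhat = Z1 @ g  # fitted first-stage values
    irf = np.zeros(H + 1)
    for h in range(H + 1):
        Z2 = np.column_stack([np.ones(T - h), xhat[:T - h], W[:T - h]])
        b, *_ = np.linalg.lstsq(Z2, y[h:], rcond=None)
        irf[h] = b[1]
    return irf
```

Note that the sketch covers point estimation only; second-stage standard errors would additionally need to account for the generated regressor.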
To illustrate the instrumental approach, we return to our monetary example and consider the series of monetary shocks narratively identified by Romer and Romer (2004). A reasonable assumption is that the Romer and Romer (2004) shocks are a proxy for the true monetary shocks rather than an exact measure, in that they are correlated with the true monetary shocks and uncorrelated with the other structural shocks. In that case, the Romer and Romer (2004) shock series satisfies the instrumental variable conditions, and we can recover the IR to monetary shocks from (S)LP and two-stage least squares, where the Fed funds rate is instrumented with the Romer and Romer (2004) shock series. Specifically, in the first stage, we regress the Fed funds rate on the Romer and Romer shocks, and in the second stage, we estimate the SLP with the Fed funds rate replaced by the fitted value of the first-stage regression.
III. Simulation Study
In order to entail realistic dynamics for the simulations, the parameters of equation (7) are based on the coefficients of the nine structural IRs estimated with LP over 1959Q1-2007Q4. Specifically, we identify the IRs of the structural shocks in equation (7) through controls (see section IIB) by including in the LP regression the appropriate subset of contemporaneous series as well as four lags of all variables in the system. For example, the IRs associated with inflation shocks are identified by setting x_t equal to inflation and including contemporaneous GDP growth among the controls.
To assess how the performance of SLP varies with the degree of smoothness of the IR, we consider four sets of simulations in which the multiplier of interest, the response of GDP growth to a Fed funds rate shock, is made increasingly jagged, whereas the other multipliers are kept unchanged at their LP point estimates. To simulate plausible degrees of noise in the IR of interest, we proceed as follows. We construct a "smooth IR" by smoothing the estimates from LP, to which we add Gaussian noise at each horizon. As a benchmark, a baseline value for the noise variance is the variance of the difference between the LP IR estimate and its smoothed counterpart. In the first DGP, labeled (A), the IR of GDP growth to a Fed funds rate shock is the smooth (noiseless) IR. In DGP (B), the IR is the smooth IR used in DGP (A) plus Gaussian noise with a standard deviation set at one-half its benchmark level. In DGP (C), the IR is set equal to its LP point estimate, so that the noise variance is at its benchmark level. In DGP (D), the IR is the smooth IR used in DGP (A) plus Gaussian noise with a standard deviation set at twice its benchmark level. This set of simulations allows us to study the performance of SLP for IRs with different degrees of smoothness, from a smoother IR in DGP (A) to a noisier IR in DGP (D).
We estimate the IR of GDP growth to a monetary shock with SLP using timing restrictions consistent with our DGP. A number of details on the implementation of the SLP estimator used in this study are in order. First, we use smooth regularization only on the coefficients associated with the IR of interest, and we do not smooth the coefficients of the control variables. Regarding the choice of the penalty matrix, we opt for a naive approach and shrink toward a line (i.e., we penalize second differences of the coefficients), which is roughly consistent with the IR estimated by the standard LP. Finally, the shrinkage parameter λ is chosen by five-fold cross-validation. For comparison purposes, we also report the estimation results of the Oracle SLP estimator, that is, the SLP estimator computed using the shrinkage parameter that minimizes the MSE of the IR estimator. The Oracle shrinkage level is determined by simulation. We benchmark our methodology against standard LP estimated by least squares, a VAR(4), a VAR(12), and a VAR with order chosen via the AIC.
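The cross-validation step can be sketched as follows (our own minimal illustration: the function name is hypothetical, NumPy is assumed, and folds are assigned at random, whereas with dependent data one may prefer contiguous blocks):

```python
import numpy as np

def cv_lambda(X, y, P, lambdas, k=5, seed=0):
    """Pick the shrinkage parameter by k-fold cross-validation.

    For each candidate lambda, fit the generalized ridge estimator on
    k-1 folds and accumulate squared prediction error on the held-out
    fold; return the lambda with the smallest total error.
    """
    rng = np.random.default_rng(seed)
    folds = rng.integers(0, k, size=len(y))
    errors = []
    for lam in lambdas:
        err = 0.0
        for f in range(k):
            tr, te = folds != f, folds == f
            b = np.linalg.solve(X[tr].T @ X[tr] + lam * P,
                                X[tr].T @ y[tr])
            err += np.sum((y[te] - X[te] @ b) ** 2)
        errors.append(err)
    return lambdas[int(np.argmin(errors))]
```

The selected value balances the LP-like fit obtained at small lambda against the polynomial fit obtained at large lambda.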
We replicate our simulation exercise for each parameter setting using sample sizes equal to 50, 100, 200, and 400. The simulation is replicated 1,000 times for each parameter setting and sample size. The performance of each IR estimator is measured by its integrated MSE, defined as the sum over horizons of the expected squared deviations of the estimated IR from the true IR, which is approximated using the Monte Carlo average across replications.
We use one replication for illustration purposes. The left panel of figure 2 shows the IR estimates based on SLP (with cross-validation), LP, and the VAR(4). Note that despite the population IR being smooth, LP delivers IR estimates that are quite rough, a well-known feature of LP (Ramey, 2016). We can see that SLP essentially smooths the LP, and in this particular replication, it delivers a more precise estimate of the IR. The right panel of figure 2 shows how the SLP IR estimates change with the degree of shrinkage λ. When λ is small, the SLP estimate is practically indistinguishable from the regular LP estimate, but as λ increases, the estimated IR becomes progressively smoother and closer to the target polynomial implied by the choice of the penalty matrix (in this case, a line).
Table 1 reports summary results for the simulation study. The first column contains the MSE of the standard LP, whereas the remaining columns contain the percentage improvements of the alternative estimation methods. Standard LP is typically outperformed by the majority of alternative IR estimation methods. The gains from SLP can be quite substantial, especially when the sample size is small. In addition, and not surprisingly, gains are larger when the population IR is smoother (i.e., for DGPs (A) and (B)). Comparing the performance of the Oracle versus the cross-validated SLP, we see that there are no large differences between using the optimal λ and selecting λ by cross-validation, indicating that, for the class of DGPs considered in this study, cross-validation performs satisfactorily. While VAR-based IR estimators can perform well at times, their performance is sensitive to the choice of the number of lags. The VAR(4) does remarkably well when the sample size is small, but its gains relative to SLP deteriorate as the sample size increases. On the contrary, the VAR(12) and the VAR with AIC lag selection perform better when the sample size is larger.
[Table 1: columns T, LP, Ridge/CV, Ridge/Oracle, VAR(4), VAR(12), AIC]
The first column reports the MSE of the IR of GDP growth to a monetary policy shock estimated via LP (based on least squares), while the remaining columns report the percentage improvement (relative to LP) from SLP (based on cross-validated generalized ridge and Oracle generalized ridge) and VARs (using a lag length of 4, 12, and the one determined by the AIC). A positive entry denotes improvement over LP.
Finally, we investigate the properties of the confidence interval procedure we propose. We simulate DGP (A) and construct the LP and SLP 90% confidence intervals. The LP confidence intervals are constructed using Newey-West standard errors with a number of lags equal to the horizon h, whereas the SLP confidence intervals are based on the procedure previously described. Table 2 reports the average length of the confidence intervals, as well as their coverage, over 1,000 replications. The simulations show that the LP and SLP confidence interval procedures have similar performance. While the SLP confidence intervals are narrower, they also have slightly smaller coverage. There can be pronounced size distortions for smaller samples, but they become less severe as the sample size increases.
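For reference, a Newey-West (HAC) variance of the kind used for the LP intervals can be sketched as follows (our own minimal illustration; the function name and NumPy are assumptions, and in the setting above the truncation lag grows with the horizon):

```python
import numpy as np

def newey_west_se(Z, resid, L):
    """Newey-West (HAC) standard errors for OLS coefficients.

    Z: regressor matrix; resid: OLS residuals; L: truncation lag.
    Uses Bartlett-kernel weights w_l = 1 - l / (L + 1).
    """
    T, k = Z.shape
    U = Z * resid[:, None]          # scores u_t = z_t * e_t
    S = U.T @ U / T                 # lag-0 term
    for l in range(1, L + 1):
        w = 1.0 - l / (L + 1.0)
        G = U[l:].T @ U[:-l] / T    # lag-l autocovariance of scores
        S += w * (G + G.T)
    ZZinv = np.linalg.inv(Z.T @ Z / T)
    V = ZZinv @ S @ ZZinv / T       # sandwich covariance of beta-hat
    return np.sqrt(np.diag(V))
```

With serially uncorrelated errors, the HAC standard errors are close to the classical OLS ones; with the overlapping multistep forecast errors of LP, the lag terms matter.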
[Table 2: average length and coverage for LP and SLP, by sample size T and horizons 2, 4, ..., 18]
The table reports the average length and average coverage of the 90% confidence intervals for LP and SLP for sample sizes T = 50, 100, 200, and 400 and horizons ranging from 2 to 18 (in increments of 2).
IV. Empirical Illustration
In this section, we use our proposed methodology to study the effects of monetary shocks on output, which have been the subject of extensive research (see Ramey, 2016, for a review). Here we apply our SLP approach using identification through timing restrictions and through an instrumental variable (IV). In the timing restrictions case, we assume that we can identify the IR of GDP growth to a monetary shock from an SLP of GDP growth on the Fed funds rate, using as controls the contemporaneous values of GDP growth and inflation as well as four lags of GDP growth, inflation, and the Fed funds rate. In the IV case, we use the Romer and Romer monetary shock series as an instrument for movements in the Fed funds rate (Romer & Romer, 2004; Coibion, Gorodnichenko, & Silvia, 2012). As controls, we include four lags of GDP growth, inflation, and the Fed funds rate. The sample spans 1966Q1 to 2007Q4.
Figure 3 plots the IRs of GDP growth and inflation to a 1 standard deviation monetary shock. The left panel plots the impulse responses obtained from LPs, while the right panel plots the IRs obtained from SLP. Following a contractionary shock, GDP growth declines, as previously found in numerous studies. However, the IRs obtained by regular LP can be erratic, with sometimes sharp fluctuations from quarter to quarter. This makes the interpretation of certain features of the IR difficult, since it is not clear whether these movements are real features of the IR or just artifacts of noisy measurements (e.g., Ramey, 2012). In contrast, thanks to smoothing, the SLP IRs are easier to interpret.
This paper proposes a novel IR estimation approach based on penalized B-splines called smooth local projections (SLP). The SLP approach preserves the flexibility of standard LP but can substantially increase precision. Moreover, SLP estimation boils down to standard ridge regression. A simulation study is used to illustrate the performance of SLP for IR estimation, and we find that SLP can deliver substantial improvements over LP. As with LP, SLP can be easily used with common identification schemes to directly estimate structural IRs. We illustrate our approach by studying the effects of monetary shocks with different identification schemes.
Note that the size of the vector is not fixed, and it ranges from 1 (for ) to (assuming ).
Note that the errors of equation (4) are overlapping multistep forecast errors that typically exhibit substantial serial correlation. A GLS-type shrinkage estimator may improve the MSE performance, but we leave this for future research.
The matrix D is the difference matrix such that, for a vector a, Da is the vector of first differences (a_2 - a_1, ..., a_k - a_{k-1})'.
Note that further shape constraints can also be implemented. Notably, for stationary series, one may additionally impose that the IR is close to 0 at large enough horizons. This can be easily implemented by shrinking the IR coefficient toward 0 (instead of its rth difference) for horizons large enough.
In fact, our shrinkage estimation approach has a Bayesian interpretation. The sum of squared residuals term in equation (5) can be interpreted as a log likelihood, whereas the penalty term can be thought of as the log density of a Gaussian prior. Thus, the SLP estimator can be thought of as the maximizer of the posterior of the model parameters. Note that from this perspective, we can interpret the penalty matrix as a shape prior.
One of the challenges in the construction of confidence intervals in this context lies in the fact that the distribution of shrinkage estimators typically has a nonnegligible bias, which is a function of the shrinkage parameter. Constructing the confidence interval using an undersmoothed estimator of the IR reduces the extent of such bias. See also Härdle (1990).
We follow the definition of Ramey (2016) and define a structural shock as a variable that (a) is exogenous with respect to the other current and lagged endogenous variables in the system, (b) is uncorrelated with the other exogenous shocks, and (c) represents either unanticipated movements in exogenous variables or news about future movements in exogenous variables (see also Blanchard & Watson, 1986; Bernanke, 1986; and Stock & Watson, 2016).
This strategy effectively amounts to identifying monetary shocks from the residuals of a Taylor rule with output growth and inflation (and their lags).
Let us emphasize that we assume the measurement error to be unpredictable given past information. If the measurement error is correlated with past shocks, then identification becomes more involved.
The smooth IR is obtained by regressing the LP IR estimates on a sine/cosine basis, and then using the fitted values of the regression as the smooth IR. We use a different smoothing method than B-splines in order not to mechanically bias results in our favor.
This allows us to more easily compare the LP and SLP estimators but makes the exercise more disadvantageous for our SLP methodology as further efficiency gains may be attained by using regularization more extensively.
On average the AIC tends to select large VAR orders across the different parameter settings.
We thank Majid Al Sadoon, Mila Cheng, Jordi Galí, Felix Geiger, Oscar Jordà, Dennis Kristensen, Jaime Martínez-Martín, Barbara Rossi, Arthur Taburet, Andrea Tamoni, and seminar participants for helpful comments. C.B. acknowledges financial support from the Spanish Ministry of Science and Technology (grant MTM2015-67304-P); the Spanish Ministry of Economy and Competitiveness, through the Severo Ochoa Programme for Centres of Excellence in R&D (SEV-2011-0075); and a Fundación BBVA scientific research grant (PR16_DAT_0043) on the analysis of big data in economics and financial applications. The views expressed here do not necessarily reflect those of the Federal Reserve Bank of San Francisco or the Federal Reserve System. Any errors are our own. Matlab and R implementations of the procedures presented in this paper are available from the authors upon request.