Pre-Testing in a DiD Setup using the did Package

Brantly Callaway and Pedro H.C. Sant'Anna

2022-07-19

Introduction

This vignette discusses how to conduct pre-tests of the parallel trends assumption in DiD setups using the did package.

Common Approaches to Pre-Testing in Applications

By far the most common approach to pre-testing in applications is to run an event-study regression. Here, the idea is to run a regression that includes leads and lags of the treatment dummy variable such as \[ Y_{it} = \theta_t + \eta_i + \sum_{l=-\mathcal{T}}^{\mathcal{T}-1} D_{it}^l \mu_l + v_{it} \]

where \(D_{it}^l = 1\) if individual \(i\) has been exposed to the treatment for \(l\) periods in period \(t\), and \(D_{it}^l = 0\) otherwise. To be clear here, it is helpful to give some examples. Suppose individual \(i\) becomes treated in period 3. Then,

  • In period 3, \(D_{i3}^0 = 1\) because individual \(i\) has been exposed to the treatment for 0 periods, and \(D_{i3}^l = 0\) for all \(l \neq 0\).
  • In period 4, \(D_{i4}^1 = 1\) because individual \(i\) has been exposed to the treatment for 1 period.
  • In period 2, \(D_{i2}^{-1} = 1\) because individual \(i\) is 1 period away from becoming treated.
  • In period 1, \(D_{i1}^{-2} = 1\) because individual \(i\) is 2 periods away from becoming treated.

The \(\mu_l\) are interpreted as the effect of the treatment for different lengths of exposure. Typically, \(\mu_{-1}\) is normalized to be equal to 0, and we follow that convention here. It is then common to interpret the estimated \(\mu_l\)’s with \(l < 0\) as a pre-test of the parallel trends assumption.

Pitfalls with Event Study Regressions

Best Case Scenario for Pre-Testing

First, let’s start with a case where an event study regression is going to work well for pre-testing the parallel trends assumption.
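The simulated data for this case can be generated with the did package’s built-in simulation functions. The code below is a minimal sketch, assuming the reset.sim() and build_sim_dataset() helpers from did (reset.sim() also appears later in this vignette); it does not reproduce the exact parameter settings behind the output shown below, so the numbers will differ slightly.

#-----------------------------------------------------------------------------
# generate simulated data where an event study regression works well
#-----------------------------------------------------------------------------

library(did)

# 4 time periods, as in the output below
time.periods <- 4

# reset the simulation parameters and build a simulated panel dataset
# (columns include G, X, id, period, Y, and treat)
set.seed(1814)
sp <- reset.sim()
data <- build_sim_dataset(sp)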

The main thing to notice here:

  • The dynamics are common across all groups. This is the case where an event-study regression will work.

Next, a bit more code to construct the leads and lags of the treatment and to run the event study regression:

#-----------------------------------------------------------------------------
# modify the dataset a bit so that we can run an event study
#-----------------------------------------------------------------------------

# generate leads and lags of the treatment
Dtl <- sapply(-(time.periods-1):(time.periods-2), function(l) {
    dtl <- 1*( (data$period == data$G + l) & (data$G > 0) )
    dtl
})
Dtl <- as.data.frame(Dtl)
cnames1 <- paste0("Dtmin",(time.periods-1):1)
colnames(Dtl) <- c(cnames1, paste0("Dt",0:(time.periods-2)))
data <- cbind.data.frame(data, Dtl)
row.names(data) <- NULL

head(data)
#>   G        X id period         Y treat Dtmin3 Dtmin2 Dtmin1 Dt0 Dt1 Dt2
#> 1 2 0.530541  1      1  4.264584     1      0      0      1   0   0   0
#> 2 2 0.530541  1      2 10.159789     1      0      0      0   1   0   0
#> 3 2 0.530541  1      3  9.482624     1      0      0      0   0   1   0
#> 4 2 0.530541  1      4 10.288278     1      0      0      0   0   0   1
#> 5 4 1.123550  2      1  3.252148     1      1      0      0   0   0   0
#> 6 4 1.123550  2      2  7.038382     1      0      1      0   0   0   0

#-----------------------------------------------------------------------------
# run the event study regression
#-----------------------------------------------------------------------------

# load plm package
library(plm)

# run event study regression
# the dummy for event time -1 (Dtmin1) is omitted, which normalizes that effect to 0
es <- plm(Y ~ Dtmin3 + Dtmin2 + Dt0 + Dt1 + Dt2, 
          data=data, model="within", effect="twoways",
          index=c("id","period"))

summary(es)
#> Twoways effects Within Model
#> 
#> Call:
#> plm(formula = Y ~ Dtmin3 + Dtmin2 + Dt0 + Dt1 + Dt2, data = data, 
#>     effect = "twoways", model = "within", index = c("id", "period"))
#> 
#> Balanced Panel: n = 6988, T = 4, N = 27952
#> 
#> Residuals:
#>       Min.    1st Qu.     Median    3rd Qu.       Max. 
#> -9.8577453 -0.7594999  0.0066022  0.7639995 11.6570179 
#> 
#> Coefficients:
#>        Estimate Std. Error t-value Pr(>|t|)    
#> Dtmin3 0.041851   0.070134  0.5967   0.5507    
#> Dtmin2 0.023539   0.050044  0.4704   0.6381    
#> Dt0    4.012515   0.043824 91.5602   <2e-16 ***
#> Dt1    3.005615   0.054473 55.1767   <2e-16 ***
#> Dt2    2.098672   0.077281 27.1565   <2e-16 ***
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 
#> Total Sum of Squares:    82617
#> Residual Sum of Squares: 55795
#> R-Squared:      0.32466
#> Adj. R-Squared: 0.099232
#> F-statistic: 2014.84 on 5 and 20956 DF, p-value: < 2.22e-16

#-----------------------------------------------------------------------------
# make an event study plot
#-----------------------------------------------------------------------------

# some housekeeping for making the plot
# add 0 at event time -1
coefs1 <- coef(es)
ses1 <- sqrt(diag(summary(es)$vcov))
idx.pre <- 1:(time.periods-2)
idx.post <- (time.periods-1):length(coefs1)
coefs <- c(coefs1[idx.pre], 0, coefs1[idx.post])
ses <- c(ses1[idx.pre], 0, ses1[idx.post])
exposure <- -(time.periods-1):(time.periods-2)

cmat <- data.frame(coefs=coefs, ses=ses, exposure=exposure)

library(ggplot2)

ggplot(data=cmat, mapping=aes(y=coefs, x=exposure)) +
  geom_line(linetype="dashed") +
  geom_point() + 
  geom_errorbar(aes(ymin=(coefs-1.96*ses), ymax=(coefs+1.96*ses)), width=0.2) +
  ylim(c(-2,5)) +
  theme_bw()

You will notice that everything looks good here: the pre-test performs well. (The caveat is that the reported standard errors are “pointwise”; uniform confidence bands would be better, though they do not seem to be standard practice in applications.)

We can compare this to what happens using the did package:
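A minimal sketch of this comparison, using the did package’s att_gt function and its ggdid plotting method (the particular call shown here is illustrative):

# estimate group-time average treatment effects with the did package
did.att.gt <- att_gt(yname = "Y",
                     tname = "period",
                     idname = "id",
                     gname = "G",
                     data = data)

# summarize the results and plot them; the estimates for pre-treatment
# periods serve as the pre-test of parallel trends
summary(did.att.gt)
ggdid(did.att.gt)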

Overall, everything looks good using either approach. (Just to keep things fair, we report pointwise confidence intervals for the group-time average treatment effects, but it is easy to get uniform confidence bands by passing the options bstrap=TRUE and cband=TRUE in the call to att_gt.)
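For example, a sketch of the same call with those options:

# same call as above, but request simultaneous (uniform) confidence bands
did.att.gt.cband <- att_gt(yname = "Y", tname = "period", idname = "id",
                           gname = "G", data = data,
                           bstrap = TRUE, cband = TRUE)
ggdid(did.att.gt.cband)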

Pitfall: Selective Treatment Timing

Sun and Abraham (2021) point out a major limitation of event study regressions: when there is selective treatment timing, the estimated \(\mu_l\)’s end up being weighted averages of treatment effects across different lengths of exposure.

Selective treatment timing means that individuals in different groups experience systematically different effects of participating in the treatment. For example, there would be selective treatment timing if individuals who tend to experience larger benefits from the treatment choose to become treated in earlier periods. This sort of selective treatment timing is likely to be present in many applications in economics and policy evaluation.

In contrast to event study regressions, pre-tests based on group-time average treatment effects (or on group-time average treatment effects aggregated into an event study plot) remain valid even in the presence of selective treatment timing.

To see this in action, let’s keep the same example as before, but add selective treatment timing.
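The code below sketches one way to do this. Rather than reproducing the vignette’s exact data-generating process, it adds a hypothetical group-specific shift to post-treatment outcomes in the dataset generated above, so that earlier-treated groups experience systematically larger treatment effects while pre-treatment outcomes (and hence parallel trends) are left unchanged; it then re-runs the same event study regression.

#-----------------------------------------------------------------------------
# add selective treatment timing by hand (illustrative)
#-----------------------------------------------------------------------------

# earlier-treated groups get systematically larger treatment effects
data.sel <- data
post <- (data.sel$G > 0) & (data.sel$period >= data.sel$G)
data.sel$Y <- data.sel$Y + post * (time.periods - data.sel$G)

# re-run the event study regression on the modified data
es.sel <- plm(Y ~ Dtmin3 + Dtmin2 + Dt0 + Dt1 + Dt2,
              data = data.sel, model = "within", effect = "twoways",
              index = c("id", "period"))
summary(es.sel)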

In contrast to the previous case, it is clear that things have gone wrong here. Parallel trends holds in all time periods and for all groups, but the event study regression incorrectly rejects that parallel trends holds; this is due to the selective treatment timing.

We can compare this to what happens using the did package:
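Continuing the illustrative setup above, a sketch of the comparison with att_gt:

# group-time average treatment effects on the data with selective treatment timing
did.att.gt.sel <- att_gt(yname = "Y", tname = "period", idname = "id",
                         gname = "G", data = data.sel)
summary(did.att.gt.sel)
ggdid(did.att.gt.sel)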

This is the correct performance (up to the aforementioned caveats about multiple hypothesis testing).

Conditional Moment Tests

Another main use case for the did package is when the parallel trends assumption holds after conditioning on some covariates. This is likely to be important in many applications. For example, to evaluate the effect of participating in a job training program on earnings, it is likely to be important to condition on an individual’s education. This would be true if (i) the distribution of education is different for individuals that participate in job training relative to those that don’t (this is very likely to hold, as people who participate in job training tend to have less education than those who do not), and (ii) the path of earnings (absent participating in job training) depends on an individual’s education. See Heckman, Ichimura, and Todd (1998) and Abadie (2005) for more discussion.
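With the did package, conditioning on covariates amounts to passing a formula through the xformla argument of att_gt. A minimal sketch, using the covariate X from the simulated data above:

# group-time average treatment effects under conditional parallel trends,
# conditioning on the covariate X
did.att.gt.X <- att_gt(yname = "Y", tname = "period", idname = "id",
                       gname = "G", xformla = ~X, data = data)
summary(did.att.gt.X)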

Even when one includes covariates to estimate group-time average treatment effects, pre-tests based only on group-time average treatment effects can fail to detect some violations of the conditional parallel trends assumption. To give an example, suppose that the only covariate is a binary variable for an individual’s sex. Pre-tests based on group-time average treatment effects could fail to detect violations of the conditional parallel trends assumption when it is violated in one direction for men and in the other direction for women, since the two violations can offset each other on average.

The did package contains an additional pre-test for the conditional parallel trends assumption in the conditional_did_pretest function.

# not run (this code can be substantially slower)
reset.sim()    # reset the simulation parameters
set.seed(1814)
nt <- 1000     # number of treated units in the simulation
nu <- 1000     # number of untreated units in the simulation
cdp <- conditional_did_pretest(yname = "Y", tname = "period", idname = "id",
                               gname = "G", xformla = ~X, data = data)
cdp

References

Abadie, Alberto. 2005. “Semiparametric Difference-in-Differences Estimators.” The Review of Economic Studies 72 (1): 1–19.

Heckman, James J., Hidehiko Ichimura, and Petra Todd. 1998. “Matching as an Econometric Evaluation Estimator.” The Review of Economic Studies 65 (2): 261–94.

Sun, Liyang, and Sarah Abraham. 2021. “Estimating Dynamic Treatment Effects in Event Studies with Heterogeneous Treatment Effects.” Journal of Econometrics 225 (2): 175–99.