What works best and when: Accounting for multiple sources of pure selection bias in program evaluations

Haeil Jung, Maureen A. Pirog

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

Most evaluations are still quasi-experimental, and most recent quasi-experimental methodological research has focused on various types of propensity score matching to minimize conventional selection bias on observables. Although these methods create better-matched treatment and comparison groups on observables, the issue of selection on unobservables still looms large. Thus, in the absence of being able to run randomized controlled trials (RCTs) or natural experiments, it is important to understand how well different regression-based estimators perform in terms of minimizing pure selection bias, that is, selection on unobservables. We examine the relative magnitudes of three sources of pure selection bias: heterogeneous response bias, time-invariant individual heterogeneity (fixed effects [FEs]), and intertemporal dependence (an autoregressive process of order one [AR(1)]). Because the relative magnitude of each source of pure selection bias may vary across policy contexts, it is important to understand how well different regression-based estimators handle each source of selection bias. Expanding simulations that have their origins in the work of Heckman, LaLonde, and Smith, we find that difference-in-differences (DID) estimators using equidistant pre- and post-periods and FE estimators are less biased and have smaller standard errors in estimating the Treatment on the Treated (TT) than other regression-based estimators. Our data analysis using the Job Training Partnership Act (JTPA) program replicates our simulation findings in estimating the TT.
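To fix ideas, the following is a minimal simulation sketch (Python with NumPy) of the kind of comparison the abstract describes; it is not the authors' code or design. It assumes a hypothetical data-generating process with an individual fixed effect, a stationary AR(1) transitory error, a heterogeneous treatment response, and selection into treatment driven by those unobservables, and then computes a cross-sectional contrast, a naive adjacent-period DID, an equidistant DID, and a two-way fixed-effects estimate of the TT. All variable names, parameter values, and the selection rule are illustrative assumptions, and the printed numbers are not the paper's results.

# A minimal illustrative sketch, NOT the authors' code or simulation design:
# panel data with an individual fixed effect, a stationary AR(1) transitory error,
# heterogeneous treatment response, and selection into treatment on unobservables,
# followed by several regression-style estimates of the Treatment on the Treated (TT).
# All parameter values and the selection rule below are hypothetical assumptions.
import numpy as np

rng = np.random.default_rng(0)

N, T = 5000, 6            # individuals and periods (assumed sizes)
t_treat = 3               # first post-treatment period; selection occurs at t_treat - 1
rho, sigma_u = 0.7, 1.0   # AR(1) persistence and innovation s.d. (assumed)

alpha = rng.normal(0.0, 1.0, size=N)         # time-invariant heterogeneity (FE)
theta = 1.0 + rng.normal(0.0, 0.5, size=N)   # heterogeneous treatment response

# Stationary AR(1) transitory error: e_t = rho * e_{t-1} + u_t
e = np.zeros((N, T))
e[:, 0] = rng.normal(0.0, sigma_u / np.sqrt(1.0 - rho**2), size=N)
for t in range(1, T):
    e[:, t] = rho * e[:, t - 1] + rng.normal(0.0, sigma_u, size=N)

# Selection on unobservables: a low latent outcome just before treatment
# (a low fixed effect and/or a transitory dip) makes treatment more likely.
D = (alpha + e[:, t_treat - 1] + rng.normal(0.0, 1.0, size=N) < 0).astype(float)
treated, control = D == 1, D == 0

# Outcomes: common time trend + fixed effect + AR(1) error + treatment effect after t_treat
Y = 0.2 * np.arange(T) + alpha[:, None] + e
Y[:, t_treat:] += (D * theta)[:, None]

true_TT = theta[treated].mean()

# (1) Cross-sectional post-period contrast: biased by both the fixed effect and the AR(1) dip.
cs = Y[treated, t_treat].mean() - Y[control, t_treat].mean()

# (2a) Naive DID using the periods adjacent to treatment (t_treat - 1 vs. t_treat):
#      removes the fixed effect but not the mean reversion of the AR(1) dip.
naive_did = ((Y[treated, t_treat] - Y[treated, t_treat - 1]).mean()
             - (Y[control, t_treat] - Y[control, t_treat - 1]).mean())

# (2b) Equidistant DID: pre- and post-periods placed symmetrically around the
#      selection period, so the AR(1) component also differences out in expectation.
pre, post = t_treat - 3, t_treat + 1   # periods 0 and 4, both two periods from period 2
eq_did = ((Y[treated, post] - Y[treated, pre]).mean()
          - (Y[control, post] - Y[control, pre]).mean())

# (3) Two-way fixed-effects (within) estimator with period effects.
D_it = np.zeros((N, T))
D_it[:, t_treat:] = D[:, None]

def two_way_demean(M):
    # Sweep out individual means and period means, add back the grand mean.
    return M - M.mean(axis=1, keepdims=True) - M.mean(axis=0, keepdims=True) + M.mean()

y_dm, d_dm = two_way_demean(Y).ravel(), two_way_demean(D_it).ravel()
fe = (d_dm @ y_dm) / (d_dm @ d_dm)

print(f"true TT        : {true_TT:6.3f}")
print(f"cross-section  : {cs:6.3f}")
print(f"naive DID      : {naive_did:6.3f}")
print(f"equidistant DID: {eq_did:6.3f}")
print(f"two-way FE     : {fe:6.3f}")

The design choice behind the equidistant DID is that placing the pre- and post-periods symmetrically around the period on which selection operates lets the AR(1) component, like the fixed effect, difference out in expectation; this is the intuition behind the abstract's finding that equidistant DID and FE estimators tend to be less biased than other regression-based estimators.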

Original language: English
Pages (from-to): 752-777
Number of pages: 26
Journal: Journal of Policy Analysis and Management
Volume: 33
Issue number: 3
DOI: 10.1002/pam.21764
ISSN: 0276-8739
Publication status: Published - 2014 Jan 1
Externally published: Yes


ASJC Scopus subject areas

  • Business, Management and Accounting (all)
  • Sociology and Political Science
  • Public Administration

Cite this

What works best and when: Accounting for multiple sources of pure selection bias in program evaluations. / Jung, Haeil; Pirog, Maureen A.

In: Journal of Policy Analysis and Management, Vol. 33, No. 3, 01.01.2014, p. 752-777.

