Presentation Transcript

Slide1

Difference-in-Difference Evaluation Methods

MAAC 2015 Fall Conference, Turf Valley, Ellicott City, Maryland, November 5, 2015

Todd Caldis, J.D., Ph.D.

Senior Economist

CMS/OACT

Slide2

Evaluation problems arise from interventions that are expected to have a desired effect on the outcome of an activity or process

An identifiable status quo: how things operated pre-intervention

An identifiable intervention, a departure from the status quo: a new surgical procedure, a new drug therapy, a new insurance reimbursement policy, an adjustment to an existing social policy like the minimum wage

In all cases, a need to evaluate quantitatively the causal impact of the intervention

The Evaluation Problem

Slide3

Snow’s 1855 study of cholera causation in London

In 1849 London was served by two water companies that both drew water from the Thames river in central London

In 1852 one of the companies moved its water works upriver to an area freer of sewage

Researchers showed that districts supplied by the upriver company post-intervention had sharply lower death rates from cholera relative to districts that continued to be supplied by the remaining downriver company

An Early Pioneering Example

Slide4

Evaluation Methods Today

Basic methods developed by econometricians, statisticians, and biostatisticians

Typically involve:

Collection of data for an intervention on a pre- and post-intervention basis

Specifying and estimating a regression equation with coefficients intended to measure intervention effects

Analysis of model results

Slide5

Session Goals

Specification of regression equations to measure intervention/treatment effects

Simple linear regression and why it fails to get at the problem

Fixed effects and dummy variable models that introduce important building blocks for an evaluation model

True difference-in-difference regression models

Random assignment models

Along the way we will look at the results of a couple of estimated models

Brief consideration of some methodological problems and possible fixes

Slide6

The model: y = Xβ + ε, or observation by observation y_i = x_i′β + ε_i, where:

y is a vector of measured outcomes;

X is a matrix of measurements on explanatory variables for each member of the sample population;

β is a vector of coefficients to be estimated;

ε is a random error term

Estimated easily and efficiently by OLS in many statistical packages, with the ability to analyze the statistical significance of estimates and make forecasts

Danger of omitted variable and confounding variable bias, in principle fixable

Simple Linear Regression
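As a hedged sketch (not from the original slides), the model above can be estimated by OLS in a few lines with a standard statistical package; the variable names and data below are invented for illustration.

```python
# Minimal OLS sketch on simulated data (illustrative only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.uniform(20, 65, n),
    "income": rng.normal(50, 10, n),
})
# Outcome generated from a known linear model plus random error,
# so the OLS estimates can be compared against the true coefficients.
df["y"] = 2.0 + 0.3 * df["age"] + 0.5 * df["income"] + rng.normal(0, 5, n)

model = smf.ols("y ~ age + income", data=df).fit()
print(model.summary())           # estimates, standard errors, significance tests
print(model.predict(df.head()))  # forecasts for the first few observations
```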

Slide7

d is a dummy or indicator variable, coded 0 or 1 (to indicate ‘M’ or ‘F’, ‘Union’ or ‘Non-Union’, etc.)

Allows modeling of effects of non-continuous, discrete variables; can include many such variables in a model

Estimated coefficient on an indicator variable is interpreted as an ‘average effect’

A building block to get where we want to go, but still not a model to measure intervention effects.

Dummy/Indicator Variables in Linear Regression
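A minimal sketch of the ‘average effect’ interpretation, using an invented union-membership dummy and simulated data (not from the original slides):

```python
# Sketch: a 0/1 indicator variable in an OLS regression (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
union = rng.integers(0, 2, n)       # d = 1 if union member, 0 otherwise
experience = rng.uniform(0, 30, n)
wage = 12 + 2.5 * union + 0.4 * experience + rng.normal(0, 3, n)
df = pd.DataFrame({"wage": wage, "union": union, "experience": experience})

fit = smf.ols("wage ~ union + experience", data=df).fit()
# The coefficient on 'union' is read as the average wage difference between
# union and non-union workers, holding experience constant.
print(fit.params["union"])
```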

Slide8

Why can't a dummy variable alone generally get at treatment effects?

A single dummy variable in isolation does not address two key factors critical to determining whether there is a policy effect or not:

1. The trend in the variable of interest prior to the policy intervention

2. The trend in the variable of interest among those NOT subject to the policy intervention

Slide9

Estimates average invariant effects for each observational unit or each time period or both

Requires repeated measurements on same observational units either within the same time period or in different time periods; panel or TSCS data sets

Interpretation of subscript notation: i indexes the person or observational unit and t indexes the time period

Coding of variables to estimate the fixed effects; LSDV estimation methods when not interested in the fixed effects themselves

Fixed Effects Models I: Specification
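A minimal LSDV sketch of the two-way specification implied above, y_it = α_i + γ_t + x_it′β + ε_it, on invented panel data; the variable names and setup are assumptions for illustration, not the presenter's code.

```python
# LSDV sketch of a two-way fixed effects model on simulated panel data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
units, periods = 50, 6
df = pd.DataFrame([(i, t) for i in range(units) for t in range(periods)],
                  columns=["unit", "year"])
unit_effect = rng.normal(0, 2, units)    # alpha_i, unique to each unit
year_effect = rng.normal(0, 1, periods)  # gamma_t, unique to each period
df["x"] = rng.normal(0, 1, len(df))
df["y"] = (unit_effect[df["unit"].to_numpy()]
           + year_effect[df["year"].to_numpy()]
           + 1.5 * df["x"] + rng.normal(0, 1, len(df)))

# C(unit) and C(year) add one dummy per unit and per period (LSDV).
fe = smf.ols("y ~ x + C(unit) + C(year)", data=df).fit()
print(fe.params["x"])  # slope on x, net of the unit and period fixed effects
```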

Slide10

Fixed Effects Models II: Their Uses

Allow researchers to account for unobserved average effects attributable to unobserved factors unique to each observational unit or unique to each time period

Partial answer to the ‘implicit’ omitted variable problem inherent in even the simplest linear regression model

As Mundlak (1978) shows in a famous paper, a partial fix for endogeneity bias when IV estimation is infeasible

A building block for evaluation modeling, but still not an evaluation model. Why?

Slide11

Card and Krueger’s 1994 study of change in fast food employment after NJ raised its minimum wage from $4.25 to $5.05 in April 1992

Used the adjacent state of PA, whose minimum wage remained unchanged, as the control group

Only computed means (regression unneeded) with no explanatory variables:

Footnote: How Snow was able to implement his pioneering DD model

Difference-in-Difference: A simple empirical example

Slide12

DD Simple Minimum Wage Model

Variable                                          NJ       PA       Difference: NJ - PA
Average FTE employment per restaurant, after      21.03    21.17    -0.14
Average FTE employment per restaurant, before     20.44    23.33    -2.89
Change in average FTE per restaurant               0.59    -2.16     2.76
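The DD estimate is just arithmetic on the four group means above; a quick check using those values (the slide's 2.76 presumably reflects less-rounded underlying means):

```python
# Difference-in-differences from the four group means in the table above.
nj_before, nj_after = 20.44, 21.03
pa_before, pa_after = 23.33, 21.17

change_nj = nj_after - nj_before   # about  0.59
change_pa = pa_after - pa_before   # about -2.16
dd = change_nj - change_pa         # about 2.75 (2.76 on the slide, from rounding)
print(change_nj, change_pa, dd)
```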

Slide13

DD in Regression: The More Typical Situation
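The equation from the original slide is not reproduced in this transcript. A standard DD specification with explanatory variables is y_it = β0 + β1·Treat_i + β2·Post_t + β3·(Treat_i·Post_t) + x_it′γ + ε_it, where β3 is the DD effect; the two bullets below refer to a model of this form. As a hedged sketch on simulated data (the variable names treat, post, and x are invented):

```python
# Hedged sketch: DD regression with a covariate on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000
treat = rng.integers(0, 2, n)   # 1 = treatment group, 0 = comparison group
post = rng.integers(0, 2, n)    # 1 = post-intervention period
x = rng.normal(0, 1, n)         # an explanatory variable (covariate)
# True DD effect built into the simulated outcome is 2.5.
y = 20 + 1.0 * treat - 2.0 * post + 2.5 * treat * post + 0.8 * x + rng.normal(0, 2, n)
df = pd.DataFrame({"y": y, "treat": treat, "post": post, "x": x})

dd = smf.ols("y ~ treat + post + treat:post + x", data=df).fit()
# The interaction coefficient is the DD estimate; the covariate
# coefficient on x is ignored when reporting the intervention effect.
print(dd.params["treat:post"])
```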

Quite similar to the model without explanatory variables

Compute effects as before using the coefficients, ignoring the coefficients on the explanatory variables

Slide14

Think back to the DD result of a 2.76 per restaurant increase in average FTE employment. Does it make sense?

In terms of economic theory, no!

Omission of explanatory variables may be a factor

More importantly, trends in PA and NJ may not truly be parallel universes: the trends being differenced (the deltas) may not actually be the same across the 2 states

Whenever you construct a DD model, the similarity of the comparison groups in every respect except treatment is key.

Picking Comparison Groups

Slide15

The models we have examined are often called quasi-experiments because they are not true experiments, but an attempt to get at the effects we might ideally study with experimental methods, which are generally too costly and time consuming

Experimental methods randomly assign study populations into treatment and control groups in order to assure that there are no systematic differences between the two groups other than the treatment

Treatment effects can then be estimated using what are essentially simple dummy variable models (a sketch follows below)

Sidenote: Random Assignment/Experimental Models
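A minimal sketch (simulated data, not from the original slides) of how random assignment reduces effect estimation to a simple dummy variable regression:

```python
# Sketch: under random assignment, a regression on the treatment dummy
# recovers the average treatment effect (simulated data, true effect = 1.5).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 5000
assigned = rng.integers(0, 2, n)   # coin-flip assignment to treatment
outcome = 10 + 1.5 * assigned + rng.normal(0, 3, n)
df = pd.DataFrame({"outcome": outcome, "assigned": assigned})

rct = smf.ols("outcome ~ assigned", data=df).fit()
# With randomization, the dummy coefficient equals the treatment-control
# difference in means, an unbiased estimate of the treatment effect.
print(rct.params["assigned"])
```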

Slide16

Evaluation of Effects of Medicare Payment Cuts on Hospital Performance

Large-Scale DD Example

Slide17

Economic theory predicts that payment cuts should have ‘real world’ effects on quantity or quality of what firms provide to consumers

The Balanced Budget Act of 1997 (BBA) imposed a variety of cuts in Medicare payment on acute care hospitals

DSH, medical education, outliers

Likely to differentially affect hospitals depending on the nature and scope of their operations and their relative dependence on the Medicare program as a source of patients

A “natural experiment” for evaluating the effects of payment cuts on Medicare patients

The ACA mandates reduction of annual updates for most Medicare FFS payment systems below market basket

Theoretically the equivalent of an annual payment cut

The Evaluation Issue and Its Relevance

Slide18

1. Develop a method for identifying ‘large-cut,’ ‘moderate-cut,’ and ‘small-cut’ hospitals.

---A key step because of issues of endogeneity bias

---A story for another day

2. Use the hospital identifiers along with other relevant variables in a DD regression equation to predict patient outcomes such as mortality.

Evaluation Strategy

(OACT Contractors Vivian Wu of USC and Yu-Chu Shen of the Naval Postgraduate School)

Slide19

Illustrating the DD Evaluation ‘Goal’ (A ‘made-up’ example)

Changes in 30-day mortality

                        Small-Cut Hospitals    Large-Cut Hospitals    Difference-in-Differences (DD)
                        (Control Group)        (Treatment Group)
1995-1997 (pre-BBA)     -3%                    -3%                    -3 - (-3) = 0%
2001-2005 (post-BBA)    -7%                    -4%                    -4 - (-7) = 3%

Difference-in-difference-in-differences (DDD): (3 - 0) = +3%

Slide20

The DD Regression Equation

(Simplified)
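The equation itself was not captured in this transcript. Purely as a hedged sketch, one simplified form consistent with the result tables that follow (moderate-cut and large-cut indicators interacted with the three BBA periods, plus controls) might be written as:

```latex
% Hedged sketch only -- not the exact equation from the original slide.
% ModCut_h, LgCut_h, and Period_p(t) are invented names for illustration.
y_{ht} = \alpha_h + \tau_t
       + \sum_{p} \bigl( \beta^{m}_{p}\, \text{ModCut}_h \cdot \text{Period}_p(t)
                       + \beta^{L}_{p}\, \text{LgCut}_h \cdot \text{Period}_p(t) \bigr)
       + x_{ht}'\gamma + \varepsilon_{ht}
```

Here h indexes hospitals, t years, and p the three periods (1998-2000, 2001-2005, 2006-2009); the coefficients on the interaction terms correspond to the moderate-cut and large-cut effects reported in the next two slides. The notation is an assumption, not the presenter's.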

 

Slide21

Estimation Results I

One-year mortality

                                  AMI                   CHF                   Stroke
Initial BBA period (1998-2000)
  moderate cut                    -0.19 [-0.71, 0.34]   0.35 [-0.09, 0.78]    0.48+ [-0.01, 0.97]
  large cut                       -0.04 [-0.64, 0.55]   0.48+ [-0.02, 0.99]   0.55+ [-0.04, 1.14]
Post-BBA period (2001-2005)
  moderate cut                    0.43 [-0.14, 1.00]    0.43+ [-0.04, 0.89]   0.31 [-0.19, 0.81]
  large cut                       1.22** [0.54, 1.91]   0.74** [0.20, 1.28]   0.55+ [-0.05, 1.15]
Extended post-BBA (2006-2009)
  moderate cut                    0.63+ [-0.01, 1.27]   0.82** [0.25, 1.38]   0.55+ [-0.05, 1.14]
  large cut                       1.17** [0.38, 1.95]   0.68* [0.01, 1.35]    0.59+ [-0.10, 1.28]
N (discharges)                    2,131,042             3,351,128             2,272,315

Slide22

Estimation Results II

1-year mortality

                                  Pneumonia             Hip Fracture
Initial BBA period (1998-2000)
  moderate cut                    0.21 [-0.24, 0.65]    0.10 [-0.38, 0.59]
  large cut                       0.56* [0.03, 1.09]    0.04 [-0.52, 0.59]
Post-BBA period (2001-2005)
  moderate cut                    0.43+ [-0.05, 0.91]   0.39 [-0.09, 0.87]
  large cut                       1.10** [0.52, 1.67]   0.56+ [-0.02, 1.15]
Extended post-BBA (2006-2009)
  moderate cut                    0.43 [-0.12, 0.99]    0.56* [0.02, 1.10]
  large cut                       0.07 [-0.59, 0.74]    0.81* [0.18, 1.44]
N (discharges)                    3,388,688             1,688,165

Slide23

Charted Results (An Example---Much More Available)

Slide24

Takeaways from Hospital Price Cut Study

DD is a versatile technique for uncovering ‘treatment’ effects buried in large data sets

Regression coefficient estimates and their significance are the core of the story

Presentational options exist for making DD effects easier to view

Slide25

Where we have come

Why and how the logit regression model is interpreted as a probability model about discrete choices (logit as odds ratio)

Where coefficient estimates come from: maximization of a nonlinear likelihood function

Kinds of analytics that can be done with logit models

Looked at a simple example to illustrate extension of the method to more than 2 choices in multinomial logit models

Ready to learn more!

Slide26

References

Joshua D. Angrist and Jörn-Steffen Pischke, Mostly Harmless Econometrics, Princeton University Press, 2009.

Colin Cameron and Pravin K. Trivedi, Microeconometrics: Methods and Applications, Cambridge University Press, 2006.

Vivian Y. Wu and Yu-Chu Shen, “Reductions in Medicare Payments and Patient Outcomes,” Medical Care, Vol. 51, No. 11, pp. 970-977 (Nov. 2013).

Jeffrey M. Wooldridge, Econometric Analysis of Cross Section and Panel Data, The MIT Press, 2002.

Slide27

Thank You to Colleagues

This presentation also drew upon an earlier presentation about DD in the work of CMS/OACT, presented jointly with Andrea Sisko and Matt Rader of CMS/OACT in Spring 2015.

Slide28

Contact Information

todd.caldis@cms.hhs.gov