CHPR290/Stat266-- Course Files, Readings, Examples


Week 1--Course Introduction, Potential Outcomes and Study Design
In the news
Fish-- makes your babies smart, but fat (regression analysis says).   Music: Fishin' Blues
1.   High fish consumption in pregnancy tied to brain benefits for kids   Publication: Maternal Consumption of Seafood in Pregnancy and Child Neuropsychological Development: A Longitudinal Study Based on a Population With High Consumption Levels. American Journal of Epidemiology (2016) Vol 183(3), 169-182.
2.  Eating lots of fish in pregnancy linked to obesity risk for kids    Publication: Fish Intake in Pregnancy and Child Growth: A Pooled Analysis of 15 European and US Birth Cohorts.  JAMA Pediatrics. Published online February 15, 2016. doi:10.1001/jamapediatrics.2015.4430
    There's more. From the publication:
     "To select the confounders for adjustment in multivariable models, we used a directed acyclic graph approach based on prior knowledge about parental and child covariates that may be related to child adiposity and/or fish intake in pregnancy. We constructed the graph using DAGitty version 2.1 (DAGitty) to identify minimally sufficient adjustment sets of covariates and chose the set on which we had the best available information"
   DAGitty resources. Drawing and Analyzing Causal DAGs with DAGitty   Main website: DAGitty -- drawing and analyzing causal diagrams (DAGs)

Lecture Topics             Lecture 1 slide deck
1. Course outline and logistics
2. Potential outcomes framework (DOS 2.2)
3. Study design versus inference
4. Fisher's sharp null; permutation test (DOS 2.3)

Text Readings
Rosenbaum DOS: Chapter 2 (secs 2.1 - 2.3);  Chapter 1 (esp secs 1.1, 1.2, 1.7)
Additional Resources
1. Donald B. Rubin
   For objective causal inference, design trumps analysis. Annals of Applied Statistics, Volume 2, Number 3 (2008), 808-840.    Rubin talk.   Another Rubin overview of matching: Stuart, E.A. and Rubin, D.B. (2007). Best Practices in Quasi-Experimental Designs: Matching Methods for Causal Inference. Chapter 11 (pp. 155-176) in Best Practices in Quantitative Social Science. J. Osborne (Ed.). Thousand Oaks, CA: Sage Publications.
2. Paul Holland
Statistics and Causal Inference, Paul W. Holland, JASA Vol 81 Dec. 1986, pp.945-960,    another link
Commentaries Donald Rubin, David Cox

Computing Corner: Extended Data Analysis Examples
Lalonde NSW data (DOS sec 2.1). Subclassification/Stratification and Full matching.
   Week 1 handout       Rogosa R-session        pdf slides shown in class


Week 1 Review Questions
From Computing Corner
1. In Week 1 Computing Corner with the Lalonde data (effect of job training on earnings), we started out (see R-session) by imitating the Week 1 in-the-news (fish) analyses: analysis of covariance, tossing the treatment variable and all the confounders into a regression equation predicting the outcome and hoping for the best. Compare that ancova with an ancova that uses just the significant predictors. Also compare with an ancova that uses the single available covariate/confounder having the highest correlation with the outcome. Are these analyses consistent?       
 Solution for Review Question 1
2. In DOS Sec 2.1, Rosenbaum works with the randomized experiment data from NSW. In Week 1 Computing Corner we used the constructed observational study version of these data. Use the observational study data to do a version of the 1:1 matching in DOS section 2.1. Compare the balance improvement achieved from nearest neighbor matching with the full matching results in Computing Corner Week 1.       
 Solution for Review Question 2
3. For the fullmatch analysis done in the class presentation, the outcome comparison was carried out using lmer to average the treatment effects over the 104 subclasses. A hand-wavy analogy to the paired t-test here would be to use the mean difference within each subclass. Show that (because some of the subclasses are large) this simplified analysis does not replicate the lmer results well.       
 Solution for Review Question 3
From Week 1 Lecture
4.  Modify Fisher's Sharp Null to reflect the null hypothesis that the treatment adds five units to the outcome under control. Build a small simulation (e.g., 10 observations) and construct a table that summarizes the potential outcomes. Randomize using a fair coin flip to assign treatment or control for each observational unit. Use the permutation test to assess your data set using (i) Fisher's Sharp Null and (ii) the null hypothesis that the treatment adds five units to the outcome under control.       
 Solution for Review Question 4
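
One way to set up the RQ#4 simulation in base R; a rough sketch with illustrative values, not the posted solution:
set.seed(266)
n  <- 10
yc <- round(rnorm(n, mean = 20, sd = 3), 1)       # hypothetical outcomes under control
yt <- yc + 5                                      # under the modified null, treatment adds 5 units
potential <- data.frame(unit = 1:n, Y_control = yc, Y_treat = yt)   # the potential-outcomes table
z    <- rbinom(n, 1, 0.5)                         # fair coin flip for each unit
yobs <- ifelse(z == 1, yt, yc)                    # observed outcomes
# one-sided permutation p-value for the treated-minus-control mean difference,
# re-randomizing with independent coin flips (degenerate all-one-group draws dropped via na.rm)
perm_p <- function(y, z, reps = 10000) {
  obs <- mean(y[z == 1]) - mean(y[z == 0])
  sim <- replicate(reps, {
    zp <- rbinom(length(y), 1, 0.5)
    mean(y[zp == 1]) - mean(y[zp == 0])
  })
  mean(sim >= obs, na.rm = TRUE)
}
perm_p(yobs, z)           # (i) Fisher's sharp null of no effect at all
perm_p(yobs - 5 * z, z)   # (ii) shift-by-5 null: remove the hypothesized effect, then permute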

5.   Building off of RQ#4 above, sort your observations so they are in ascending order based on the outcome under control. Randomize two at a time: one fair coin flip now assigns either the first or second observation to treatment (and the other to control). A second fair coin flip assigns either the third or fourth observation to treatment (and the other to control), and so on. Use the appropriate permutation test to assess your data set using (i) Fisher's Sharp Null and (ii) the null hypothesis that the treatment adds five units to the outcome under control. Contrast the results here with the results from RQ#4.       
 Solution for Review Question 5
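
A compact follow-on sketch for RQ#5, where the coin flips now pick one member of each adjacent pair (same illustrative potential outcomes as the RQ#4 sketch):
set.seed(266)
n  <- 10
yc <- sort(round(rnorm(n, mean = 20, sd = 3), 1))    # sorted by outcome under control
yt <- yc + 5
flip <- rep(rbinom(n / 2, 1, 0.5), each = 2)         # one fair coin flip per adjacent pair
z    <- ifelse(seq_len(n) %% 2 == 1, flip, 1 - flip) # flip picks the first or second member of the pair
yobs <- ifelse(z == 1, yt, yc)
# permutation test that respects the within-pair randomization
perm_p_pair <- function(y, z, reps = 10000) {
  obs <- mean(y[z == 1]) - mean(y[z == 0])
  sim <- replicate(reps, {
    f  <- rep(rbinom(length(y) / 2, 1, 0.5), each = 2)
    zp <- ifelse(seq_along(y) %% 2 == 1, f, 1 - f)
    mean(y[zp == 1]) - mean(y[zp == 0])
  })
  mean(sim >= obs)
}
perm_p_pair(yobs, z)           # (i) sharp null
perm_p_pair(yobs - 5 * z, z)   # (ii) additive-5 null; compare with the RQ#4 p-values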



Week 2-- Causal Inference in Randomized Experiments and Models for Observational Studies

In the news      Statistics is the only friend you need; OR moderating variables can be your friend           music: I've got friends in low places
Wash Post: Why smart people are better off with fewer friends . Publication: Country roads, take me home... to my friends: How intelligence, population density, and friendship affect modern happiness.   British Journal of Psychology 2016

Lecture Topics                         Lecture 2  slide deck
Why randomized controlled studies produce high-quality data (DOS, Sections 15.1 & 15.4 [skim the sections in between] also Holland paper)
Techniques for analysis (DOS, Sections 2.3-2.4)
Why randomized controlled studies do not produce high-quality data (DOS, Section 2.6)
First model for observational studies (DOS, Sections 3.1-3.3)

Computing Corner: Extended Data Analysis Examples
Lindner data, Percutaneous Coronary Intervention with 'evidence based medicine'.
Percutaneous coronary intervention (PCI), commonly known as coronary angioplasty or simply angioplasty, is a non-surgical procedure used to treat the stenotic (narrowed) coronary arteries of the heart found in coronary heart disease.
In package PSAgraphics Vignette JSS   PSAgraphics: An R Package to Support Propensity Score Analysis  Journal of Statistical Software February 2009, Volume 29, Issue 6. http://www.jstatsoft.org/
   Week 2 handout       Rogosa R-session        pdf slides shown in class

Week 2 Review Questions
From Week 2 Lecture
1. Morton et al. (1982) studied lead in the blood of children whose parents worked in a factory where lead was used in making batteries. They were concerned that children were exposed to lead inadvertently brought home by their parents. Their study included 33 such children from different families -- they are the exposed or treated children. The outcome R was the level of lead found in a child's blood in mcg/dl of whole blood. The covariate x was two-dimensional, recording age and neighborhood of residence. They matched each exposed child to one control child of the same age and neighborhood whose parents were employed in other industries not using lead. The data file data_lead.csv in the class directory contains the blood lead levels for the matched pairs. If this study were free of hidden bias, which may or may not be the case, we would be justified in analyzing these data using methods for a uniform randomized experiment with 33 matched pairs. Perform the following: (i) a one-sided test of the null of no treatment effect using the Wilcoxon signed rank test (do not use a shortcut function), (ii) the Hodges-Lehmann estimate of the size of an additive effect, (iii) the associated 95% confidence interval, and (iv) confirm i-iii using wilcox.test() or a similar function. Be sure to interpret your results.
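
A hedged sketch of the mechanics for RQ#1; the column names exposed and control in data_lead.csv are assumptions (adjust to the actual file):
lead <- read.csv("data_lead.csv")          # assumed columns: exposed, control (one row per matched pair)
d <- lead$exposed - lead$control           # within-pair differences, exposed minus control
# (i) Wilcoxon signed rank statistic by hand: sum of the ranks of |d| over positive differences
r <- rank(abs(d))
W <- sum(r[d > 0])
n <- length(d)
muW <- n * (n + 1) / 4                               # null mean of W
sdW <- sqrt(n * (n + 1) * (2 * n + 1) / 24)          # null SD of W (ties/zeros ignored for simplicity)
pnorm(W, mean = muW, sd = sdW, lower.tail = FALSE)   # one-sided normal-approximation p-value
# (ii) Hodges-Lehmann estimate: median of the Walsh averages (d_i + d_j)/2, i <= j
walsh <- outer(d, d, "+") / 2
median(walsh[upper.tri(walsh, diag = TRUE)])
# (iv) confirm with wilcox.test(); conf.int = TRUE returns the HL estimate and the 95% CI for (iii)
wilcox.test(d, alternative = "greater")
wilcox.test(d, conf.int = TRUE, conf.level = 0.95)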

2. Using the same setup as in RQ#1, perform a one-sided test of the null of no treatment effect using a permutation test with the mean of the within-pair differences as the test statistic. Contrast with RQ#1. Give (ii) and (iii) from RQ#1 a try as well.       
 Solution for Week 2 Review Questions
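
For RQ#2, a minimal sign-flip randomization test using the mean pair difference (same assumed columns as above):
lead <- read.csv("data_lead.csv")
d <- lead$exposed - lead$control
# under the sharp null each pair difference is equally likely to be +d or -d
obs <- mean(d)
sim <- replicate(10000, mean(d * sample(c(-1, 1), length(d), replace = TRUE)))
mean(sim >= obs)   # one-sided p-value; compare with the signed rank result from RQ#1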



Week 3-- Matching Methods and Implementations

In the news               Music: Weed, whites, and wine (aka Willin')
    Daily Mail: The more cannabis you smoke, the more likely you are to be a loser, finds international study      U.C. Davis: Is cannabis "safer" than alcohol? Not from an economic or social point-of-view, UC Davis study finds          Cannabis vs. alcohol: economic and social impacts
Publication: Persistent Cannabis Dependence and Alcohol Dependence Represent Risks for Midlife Economic and Social Problems A Longitudinal Cohort Study. Magdalena Cerda, Clinical Psychological Science March 22, 2016
But ........ from Week 3 Stat209
Wash Post:   Scientists have found that smoking weed does not make you stupid after all    Publication:   Are IQ and educational outcomes in teenagers related to their cannabis use? A prospective cohort study  J Psychopharmacology January 6, 2016 .

Lecture Topics                          Lecture 3  slide deck   
A matched observational study (DOS, Chap 7)
Basic tools of multivariate matching (DOS, Secs 8.1-8.4)
Various practical issues in matching (DOS, Chap 9)

Computing Corner: Extended Data Analysis Examples
Lindner data, Percutaneous Coronary Intervention with 'evidence based medicine' deferred from week 2 (see links).
Additional topic: Alternative propensity score analyses. Propensity score weighting: Inverse Probability of Treatment Weighting (IPTW).
Rogosa session with Lindner data
Also, a thorough R exposition using the Lalonde data   A Practical Guide for Using Propensity Score Weighting in R Practical Assessment, Research & Evaluation, v20 n13 Jun 2015.
    and    another exposition, comparison with full matching
     Boosted regression estimation of propensity:   twang package from RAND, tutorials and resources.

Week 3 Review Questions
From Weeks 2 and 3 Computing Corner
1. The JSS vignette for PSAgraphics (linked week 2 ComCo) does subclassification matching for Lindner data. Repeat their subclassification analyses and try out their balance displays and tests.       
Lindner data  package PSAgraphics Vignette JSS           outcome analysis, Rogosa session

2. The Week 3 presentation (part of the Week 2 session) introduced an alternative analysis: analysis of covariance with the propensity score as covariate. A rough analogy is ancova vs blocking. Try out the basic ancova approach (here logistic regression) for the dichotomous outcome lifepres.       
 Solution for Review Question 2
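
A hedged sketch of that propensity-score ancova (variable names as in the PSAgraphics lindner data; a basic version, not the posted analysis):
library(PSAgraphics)
data(lindner)     # treatment indicator abcix; covariates as in the JSS vignette (names assumed)
# propensity score from logistic regression on the pre-treatment covariates
ps_mod <- glm(abcix ~ stent + height + female + diabetic + acutemi + ejecfrac + ves1proc,
              family = binomial, data = lindner)
lindner$pscore <- fitted(ps_mod)
lindner$surv   <- as.numeric(lindner$lifepres > 0)   # dichotomized lifepres
# ancova with the propensity score as the covariate: logistic regression for the binary outcome
anc <- glm(surv ~ abcix + pscore, family = binomial, data = lindner)
summary(anc)
exp(coef(anc)["abcix"])   # treatment odds ratio adjusted for the propensity score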

3. Do the full matching analysis with the Lindner data using (raw) optmatch rather than the matchit wrapper that calls optmatch. Extra: Try out some of the diagnostics in RItools package for optmatch objects.
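
A short sketch of the raw-optmatch route for RQ#3 (these calls mirror what the matchit wrapper does internally; a starting point, not the posted solution):
library(optmatch)
library(PSAgraphics)
data(lindner)
# propensity model as in the sketch above, then a propensity-score distance and full matching
ps_mod <- glm(abcix ~ stent + height + female + diabetic + acutemi + ejecfrac + ves1proc,
              family = binomial, data = lindner)
fm <- fullmatch(match_on(ps_mod, data = lindner), data = lindner)
summary(fm)            # structure of the matched sets
table(table(fm))       # distribution of matched-set sizes
# RItools balance diagnostics can then be run on the fm strata, per the extra part of the question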



Week 4-- Further issues: Design Sensitivity

In the news                   Music: Smoke Smoke Smoke (That Cigarette)
Cigarette smoking could burn your job prospects     Smokers struggle to find jobs, get paid less    Publication: Likelihood of Unemployed Smokers vs Nonsmokers Attaining Reemployment in a One-Year Observational Study  JAMA Internal Medicine April 2016.
Technical reference: Dealing with limited overlap in estimation of average treatment effects    Biometrika (2009) 96 (1): 187-199.

Lecture Topics                          Lecture 4  slide deck   
Finish up: Various practical issues in matching (DOS Chap 9)
Sensitivity analysis (DOS Sections 3.4-3.7 and 3.9)
Designs to strengthen your analysis: multiple control groups, "known" effects (DOS Chap 6)

Computing Corner: Extended Data Analysis Examples
          Propensity score analyses without matching: ancova with the propensity score as covariate, and propensity score weighting, i.e. Inverse Probability of Treatment Weighting (IPTW)
Alternative estimation of propensity scores (boosted regression, recursive partitioning and regression trees).
                Rogosa session with Lindner data (warts and all)
Resources:
Package rpart .     An Introduction to Recursive Partitioning Using the RPART Routines
Package gbm .     Generalized Boosted Models: A guide to the gbm package


Week 4 Review Questions
From Weeks (3 and) 4 Computing Corner
1. Try out the ATE IPTW analysis (done in Week 3 Computing Corner) for the dichotomous outcome lifepres in the Lindner data. Compare with the full matching results shown in class.       
 Solution for Review Question 1
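
A minimal IPTW (ATE) sketch for RQ#1; the propensity model and variable names follow the Lindner sketches above, and the model-based standard errors are naive (survey-weighted or sandwich variances would be better):
library(PSAgraphics)
data(lindner)
ps <- fitted(glm(abcix ~ stent + height + female + diabetic + acutemi + ejecfrac + ves1proc,
                 family = binomial, data = lindner))
lindner$w_ate <- ifelse(lindner$abcix == 1, 1 / ps, 1 / (1 - ps))   # ATE weights
lindner$surv  <- as.numeric(lindner$lifepres > 0)                   # dichotomized lifepres
# weighted comparison of survival; quasibinomial avoids non-integer "successes" warnings
summary(glm(surv ~ abcix, family = quasibinomial, weights = w_ate, data = lindner))
with(lindner, weighted.mean(surv[abcix == 1], w_ate[abcix == 1]) -
              weighted.mean(surv[abcix == 0], w_ate[abcix == 0]))   # weighted risk difference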

2. Try out, using the Lalonde data (Week 1), the boosted regression approach to computing propensity scores using Ridgeway's (via Friedman) gbm package. Is this more successful than the Lindner data attempt shown in Week 4 Computing Corner? Are the balance and overlap results improved compared to the logistic regression estimation shown in Week 1?       
 Solution for Review Question 2
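
A hedged sketch for RQ#2 using gbm directly (column names assumed to match the Dehejia-Wahba lalonde data shipped with twang; twang's ps() adds balance-based selection of the number of trees):
library(gbm)
library(twang)                 # used here only for its copy of the lalonde data (column names assumed)
data(lalonde)
set.seed(266)
gbm_fit <- gbm(treat ~ age + educ + black + hispan + married + nodegree + re74 + re75,
               distribution = "bernoulli", data = lalonde,
               n.trees = 5000, interaction.depth = 3, shrinkage = 0.01, verbose = FALSE)
# boosted-regression propensity scores (all 5000 trees used here, for simplicity)
lalonde$ps_gbm <- predict(gbm_fit, newdata = lalonde, n.trees = 5000, type = "response")
# quick look at overlap by treatment group; compare balance against the Week 1 logistic-regression scores
by(lalonde$ps_gbm, lalonde$treat, summary)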




Week 5-- Further issues: Planning Analysis

In the news                   Music: Love and Marriage
Marriage and Cancer survival.    Marriage may help fight cancer     Marriage is good for cancer patients
Publication: Martinez, M. E., Anderson, K., Murphy, J. D., Hurley, S., Canchola, A. J., Keegan, T. H. M., Cheng, I., Clarke, C. A., Glaser, S. L. and Gomez, S. L. (2016),   Differences in marital status and mortality by race/ethnicity and nativity among California cancer patients.   Cancer. doi: 10.1002/cncr.29886

Lecture Topics                         Lecture 5  slide deck   
Example of matching using the smoking and jobs data set from JAMA (in-the-news Week 4).
Using a second control group (mitigating bias)
Using multiple outcomes (coherence and known null effects)

Computing Corner: Extended Data Analysis Examples
           Toolkit for Weighting and Analysis of Nonequivalent Groups: A tutorial for the twang package   Lalonde data, yet again.
                Rogosa twang and ATT session with Lalonde data         Week 5 slides
To come, sensitivity analysis computations: package rbounds, Rosenbaum packages sensitivitymv and sensitivitymw   vignette:   Two R Packages for Sensitivity Analysis in Observational Studies

Week 5 Review Questions
From Week 5 Computing Corner
1. Try an ATT IPTW analysis for log(cardbill) outcome in the Lindner data.       
 Solution for Review Question 1
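
A minimal ATT IPTW sketch for RQ#1, reusing the Lindner propensity model from the earlier sketches (naive SEs; see the survey or sandwich approaches for better ones):
library(PSAgraphics)
data(lindner)
ps <- fitted(glm(abcix ~ stent + height + female + diabetic + acutemi + ejecfrac + ves1proc,
                 family = binomial, data = lindner))
# ATT weights: treated count as themselves; controls are reweighted to resemble the treated
lindner$w_att <- ifelse(lindner$abcix == 1, 1, ps / (1 - ps))
summary(lm(log(cardbill) ~ abcix, weights = w_att, data = lindner))   # coefficient on abcix ~ ATT on the log scale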




Week 6-- Instrumental Variable Methods

In the news                   Music: Don't fence me in   and    Green Acres
Living Near Green Spaces Helps You Live Longer, New Study Shows     Why living around nature could make you live longer      Publication: Exposure to Greenness and Mortality in a Nationwide Prospective Cohort Study of Women  Environ Health Perspect; DOI:10.1289/ehp.1510363
Mediation analysis (cf. Stat209 weeks 2, 3). R packages mediation    powerMediation     Gelman on mediation
slides for itn_6


Lecture Topics                                             Lecture 6  slide deck   
Instrumental variables:
Encouragement design (Holland 1988 - link)
Randomness in the real world ( Baiocchi, Cheng and Small - link)


Computing Corner: Extended Data Analysis Examples
Main event, sensitivity analysis computations:
package rbounds,
Rosenbaum packages sensitivitymv and sensitivitymw   vignette:   Two R Packages for Sensitivity Analysis in Observational Studies (examples from sections 2 and 3)
                Rogosa sensitivity session                   CC_6 slides
To come, IV calculations: packages ivpack and ivmodel

Week 6 Review Questions
From Week 6 Computing Corner
1. Mercury example (2 controls) from sections 3 and 6 of the Rosenbaum vignette (linked in CC_6)
Fish often contains mercury. Does eating large quantities of fish increase levels of mercury in the blood? Data set mercury in the sensitivitymw package is from the 2009-2010 National Health and Nutrition Examination Survey (NHANES) and is the example in Rosenbaum (2014). There are 397 rows or matched triples and three columns, one treated with two controls. The values are methylmercury levels in blood. Column 1, Treated, describes an individual who had at least 15 servings of fish or shellfish in the previous month. Column 2, Zero, describes an individual who had 0 servings of fish or shellfish in the previous month. Column 3, One, describes an individual who had 1 serving of fish or shellfish in the previous month. In the comparison here, Zero and One are not distinguished; both are controls. Sets were matched for gender, age, education, household income, black race, Hispanic, and cigarette consumption.
a. Describe the apparent effect of fish consumption and try out sensitivity analyses (for both tests and CIs) for that apparent effect. cf. Rosenbaum vignette sec 3.2
b. Look at the effects of weighting (method w in the sensitivitymw manual), as theory and simulations suggest that a sensitivity analysis will be more powerful if matched sets with little variability are given little weight. cf. Rosenbaum vignette sec 6.3.       
 Solution for Review Question 1
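
A sketch of the sensitivity calls for RQ#1, assuming the senmw()/senmwCI() functions and the mercury data as described in the linked sensitivitymw vignette (gamma values below are only illustrative):
library(sensitivitymw)
data(mercury)               # columns Treated, Zero, One; first column treated, others controls
y <- as.matrix(mercury)
# (a) randomization (gamma = 1) inference, then increase gamma to probe sensitivity to hidden bias
senmw(y, gamma = 1)
senmw(y, gamma = 10)                         # illustrative gamma; raise it until the p-value crosses 0.05
senmwCI(y, gamma = 1, one.sided = TRUE)      # point estimate and confidence interval
senmwCI(y, gamma = 10, one.sided = TRUE)
# (b) weighted statistic, downweighting high-variability matched sets
senmw(y, gamma = 10, method = "w")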

2. As an exercise in the mechanics of setting up a data set for the sensitivity functions, use the smoking data from problem 2 in the take-home: conduct 1:2 matching for the 30 smokers to 60 controls and create a data set analogous to the mercury data set (controls within a row interchangeable).       
 Solution for Review Question 2
It is easier to create the data set for the more common 1:1 matching situation (merge works without needing much thought); a solution for 1:1 matching with these data is below.       
 Review Question 2 with 1:1 matching

3. Compliance as a measured variable (week 6 lecture). In Stat209 week 7 we also examine compliance adjustments, both those based on a dichotomous compliance variable (as in the AIR paper linked week 6) and the much more common measured compliance (often unwisely dichotomized to match the Rubin formulation). The Efron-Feldman study ( handout description) used a continuous compliance measure. An artificial data set (a data frame containing Compliance, Group, and Outcome, from Stat209 HW7) is constructed so that ITT for cholesterol reduction is about 20 (compliance .6) and the effect of cholestyramine under perfect compliance is about 35. Try out some IV estimators for CACE. Obtain the ITT estimate of the group (treatment) effect with a confidence interval. Try using G as an instrument for the Y ~ comp regression. What does that produce? Alternatively, use the Rubin formulation with a dichotomous compliance indicator defined as TRUE for compliance > .8 in these data. What is your CACE estimate? What assumptions did you make? Compare with the ITT estimate. In this problem the ivreg function from the AER package is used for IV estimation.       
 Solution for Review Question 3, see problem 3 in these solutions
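
A hedged sketch of the IV mechanics for RQ#3, assuming the artificial data frame (called ef here, with a placeholder file name) has columns Outcome, Compliance, and Group coded 0/1:
library(AER)                                   # provides ivreg()
ef <- read.csv("stat209_hw7_compliance.csv")   # placeholder name for the artificial data
# ITT: randomized-groups comparison with a confidence interval
itt <- lm(Outcome ~ Group, data = ef)
confint(itt)                                   # ITT estimate should be near 20
# IV: Group (random assignment) as an instrument for measured compliance
summary(ivreg(Outcome ~ Compliance | Group, data = ef))   # slope ~ effect per unit of compliance (~35 at full compliance)
# Rubin-style dichotomous compliance: complier if Compliance > .8
ef$comp80 <- as.numeric(ef$Compliance > 0.8)
summary(ivreg(Outcome ~ comp80 | Group, data = ef))       # Wald-type CACE under exclusion and monotonicity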



Week 7-- Instrumental Variable Methods continued

Collect PROBLEM SET papers

In the news                   Music combo:  (a) Swedish probability   and (b)  soundtrack for diabetes (AstraZeneca farxiga advert)
How your neighborhood can affect your health: A real-life experiment from Sweden        Exposure to Poor Neighborhoods Raises Refugees' Risk of Diabetes       5 Questions: Rita Hamad on why living in poor neighborhoods could be bad for your health
Publication:  Long-term effects of neighbourhood deprivation on diabetes risk: quasi-experimental evidence from a refugee dispersal policy in Sweden
    simple calculations: diabetes data
But maybe more bicycles would solve this: Cyclist Teaches Kids To Use Fun To Prevent Type 2 Diabetes

Lecture Topics                                             Lecture 7  slide deck   
Instrumental variables:
Inference and sensitivity ( Baiocchi, Cheng and Small - link)
Assumptions ( Angrist, Imbens and Rubin 1996 - link)


Computing Corner:
Main event, IV calculations: packages ivpack and ivmodel
Distinguished Guest Presenter: Hyunseung Kang, co-author and maintainer of package ivmodel
vignette (to appear, Journal of Statistical Software): ivmodel: An R Package for Inference and Sensitivity Analysis of Instrumental Variables Models with One Endogenous Variable
Blog posting praising Mr. Kang's presentation   Bay Area useR Group on youtube
Also package sisVIVE: Some Invalid Some Valid Instrumental Variables Estimator

To come. Natural experiments: Regression Discontinuity and Interrupted Time-series

Week 7 Review Questions
From Week 7 Computing Corner
1. Compliance data, IV analysis. Week 6 RQ3 used some artificial data from Stat209, imitating the Efron-Feldman cholestyramine trial. That solution showed the widely used ivreg function from the AER package. Redo the ivreg analyses using functions from the ivmodel package (described in CC week 7).       
 Solution for Review Question 1
2. Use the Card data, described in the ivmodel presentation and vignette, to carry out some basic IV analyses. Compare ivreg with some analyses using the ivmodel package.       
 Solution for Review Question 2
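
A hedged sketch of the mechanics for RQ#2. The dataset name (card.data) and the column names (lwage, educ, nearc4) are assumptions based on the usual Card (1995) extract described in the ivmodel vignette; adjust as needed:
library(AER)       # ivreg()
library(ivmodel)
data(card.data)    # Card proximity-to-college data shipped with ivmodel (name assumed)
# two-stage least squares via ivreg: log wage on schooling, college proximity as the instrument
summary(ivreg(lwage ~ educ | nearc4, data = card.data))
# in practice, add the vignette's covariates (experience, region, race, ...) to both parts of the formula
# the same just-identified model through ivmodel's Y/D/Z interface (assumed call, per the vignette)
fit <- ivmodel(Y = card.data$lwage, D = card.data$educ, Z = card.data$nearc4)
fit   # prints OLS, TSLS, k-class/LIML/Fuller, and weak-instrument robust intervals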


Week 8 -- Non-experimental designs

In the news                       Music: Sugar Sugar      Gilligan's Island version
Bench Science Rules?
  UCLA press release: Fructose alters hundreds of brain genes, which can lead to a wide range of diseases. UCLA scientists report that diet rich in omega-3 fatty acids can reverse the damage
Popular press: Sugar can cause brain damage, claim scientists (but salmon reverses it)
Research Paper: Systems Nutrigenomics Reveals Brain Gene Networks Linking Metabolic and Brain Disorders
Yet FDA hearts sugar: e.g. The FDA still thinks that cookies can be healthier than fish
   Extra   For Interrupted time series in Computing Corner: Proposition 47  California ballot measure blamed for shoplifting jump

Lecture Topics                                             Lecture 8  slide deck   
lecture 08: non-experimental designs
difference-in-differences - Imbens and Wooldridge
discontinuity - Lee and Lemieux

Computing Corner:
                      Natural experiments: Regression Discontinuity and Interrupted Time-series
A.  Interrupted Time-series
Overviews:
Interrupted Time Series Quasi-Experiments Gene V Glass Arizona State University
Interrupted Time Series Designs in Health Technology Assessment: Lessons from Two Systematic Reviews of Behavior Change Strategies. Craig R. Ramsay, University of Aberdeen. International Journal of Technology Assessment in Health Care, 19:4 (2003), 613-623.
Original publication (ozone data):
Box, G. E. P. and G. C. Tiao. 1975. "Intervention Analysis with Applications to Economic and Environmental Problems." Journal of the American Statistical Association. 70:70-79. SAS example for ozone data     
Class example: Closing time (glm kludge)
  Time Series Analysis with R section 4.6
    Rogosa R-session
Applications:
Did fertility go up after the Oklahoma City bombing? An analysis of births in metropolitan counties in Oklahoma, 1990-1999. Demography, 2005.
Box-Tiao time series models for impact assessment. Evaluation Quarterly 1979
Interrupted time-series analysis and its application to behavioral data Donald P. Hartmann, John M. Gottman, Richard R. Jones, William Gardner, Alan E. Kazdin, and Russell S. Vaught J Appl Behav Anal. 1980 Winter; 13(4): 543-559.
Segmented regression analysis of interrupted time series studies in medication use research. By: Wagner, A. K.; Soumerai, S. B.; Zhang, F.; Ross-Degnan, D.. Journal of Clinical Pharmacy & Therapeutics, Aug2002, Vol. 27 Issue 4, p299-309,
R-packages:
tscount vignette       BayesSingleSub: Computation of Bayes factors for interrupted time-series designs

B.  Regression Discontinuity Designs     (bumped to week 9)
    Example from rdd manual (Stat209 handout)     ascii version
Angrist-Lavy Maimonides (class size) data     sections 1.3, 3.2, 5.2.3, 5.3 DOS text
              read data ang = read.dta("http://www.ats.ucla.edu/stat/stata/examples/methods_matter/chapter9/angrist.dta")
R-package rdd:   Regression Discontinuity Estimation, author Drew Dimmery
Also package rdrobust: Robust data-driven statistical inference in Regression-Discontinuity designs       
 Slides for Regression Discontinuity CC
Regression Discontinuity Resources
       Stat209, Regression Discontinuity handout
Trochim W.M. & Cappelleri J.C. (1992). "Cutoff assignment strategies for enhancing randomized clinical trials." Controlled Clinical Trials, 13, 190-212.  pubmed link
Journal of Econometrics (special issue) Volume 142, Issue 2, February 2008, The regression discontinuity design: Theory and applications      Regression discontinuity designs: A guide to practice, Guido W. Imbens, Thomas Lemieux
    Another Econometric treatment
    Also from Journal of Econometrics (special issue) Volume 142, Issue 2, February 2008, The regression discontinuity design: Theory and applications  Waiting for Life to Arrive: A history of the regression-discontinuity design in Psychology, Statistics and Economics, Thomas D Cook
the original paper: Thistlethwaite, D., and D. Campbell (1960): "Regression-Discontinuity Analysis: An Alternative to the Ex Post Facto Experiment," Journal of Educational Psychology, 51, 309-317.
Capitalizing on Nonrandom Assignment to Treatments: A Regression-Discontinuity Evaluation of a Crime-Control Program Richard A. Berk; David Rauma Journal of the American Statistical Association, Vol. 78, No. 381. (Mar., 1983), pp. 21-27. Jstor
Berk, R.A. & de Leeuw, J. (1999). "An evaluation of California's inmate classification system using a generalized regression discontinuity design." Journal of the American Statistical Association, 94(448), 1045-1052.  Jstor

To come: Dose-response functions: package causaldrf

Week 8 Review Questions
Computing Exercises
1. Interrupted Time Series example, redux
Create a version of the ITS 'closing time' example presented in class (example linked above) with the 50 months before intervention having mean fatality = 1 and after intervention mean fatality = 2.
Carry out the glm approximation to the time series analysis.       
 Solution for Review Question 1
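
A minimal sketch for RQ#1 (the number of post-intervention months is assumed to also be 50):
set.seed(266)
month <- 1:100
post  <- as.numeric(month > 50)                          # intervention after month 50
fatal <- rpois(100, lambda = ifelse(post == 1, 2, 1))    # mean 1 before, mean 2 after
# the glm "kludge" for the interrupted series: Poisson regression with a level shift
its <- glm(fatal ~ post, family = poisson)
summary(its)
exp(coef(its)["post"])    # estimated rate ratio after vs before, should be near 2
# adding a month trend term, or checking residual autocorrelation, would be natural extensions
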
2. Time 1 Time 2 observational data, Differences in Differences analysis.
We reuse some time-1, time-2 observational data generated to illustrate Lord's paradox (week 9, Stat209) -- gender differences in weight gain. (The 'paradox' is resolved by Holland, Wainer, and Rubin using potential outcomes.) The setup for these artificial data: females gain, males show no change;
  correlation .7 within gender, equal variances at time 1 and time 2 within gender.
means
                M               F
X (t1)         170            120
Y (t2)         170            130
comparison of "gains": (170 - 170) - (130 - 120) = -10, a negative male effect (females gain more).
ancova: (170 - 130) - .7*(170 - 120) = 5, a positive male effect.
So: does being male cause a student to gain weight or lose weight?   Illustrate forms of diffs-in-diffs analyses.
wide form for these data      long form for these data       
 Solution for Review Question 2
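
A hedged sketch for RQ#2; the file and column names here (lords_long.csv with id, gender, time, weight; lords_wide.csv with gender, X_t1, Y_t2) are placeholders for the linked long- and wide-form files:
# difference-in-differences from the long form: the gender x post interaction is the gains contrast
long <- read.csv("lords_long.csv")            # placeholder name; columns assumed: id, gender, time, weight
long$post <- as.numeric(long$time == 2)
summary(lm(weight ~ gender * post, data = long))   # interaction ~ 10 in magnitude (sign depends on the reference gender)
# the competing ancova answer from the wide form
wide <- read.csv("lords_wide.csv")            # placeholder name; columns assumed: gender, X_t1, Y_t2
summary(lm(Y_t2 ~ X_t1 + gender, data = wide))     # gender effect ~ +5 for males: Lord's paradox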


Week 9 -- Graphical models and other approaches

In the news   Grand finale (trio)                    Music: My church
1. Going To Church Could Help You Live Longer   Publication: JAMA Internal Medicine
           Tyler VanderWeele, Harvard, 2015 book   Explanation in Causal Inference: Methods for Mediation and Interaction
2. Air Rage.    First-class cabin fuels 'air rage' among passengers flying coach   Publication: PNAS.
3. Retirement??    Retirement really COULD kill you: Researchers find those who work past 65 live longer                    bonus music   Only The Good Die Young
 Publication:       Association of retirement age with mortality: a population-based longitudinal study among older adults in the USA.   Journal of Epidemiology and Community Health.


Lecture Topics                                             Lecture 9  slide deck   
(i) directed acyclic graphs (link to paper with discussions)
      - related: single world intervention graphs (link)
(ii) inverse probability weighting (link)


Computing Corner:                       Dose response functions (and multiple groups): Beyond Binary Treatments
package causaldrf     vignette:   Estimating Average Dose Response Functions Using the R Package causaldrf      Rnw file for vignette      dot-R file for vignette
           Rogosa session, causaldrf examples
also covariate balancing propensity score, package CBPS
Background publications:
The Propensity Score with Continuous Treatments
Causal Inference With General Treatment Regimes: Generalizing the Propensity Score, Journal of the American Statistical Association, Vol. 99, No. 467 (September), pp. 854-866.       
 Slides for Dose-Response, CC_9

    bumped: graphical model (and do-calculus) applications: one resource,   Identifying Causal Effects with the R Package causaleffect


Week 9 Review Questions
Computing Exercises
1. Regression Discontinuity, classic "Sharp" design. Replicate the package rdd toy example: cutpoint = 0, sharp design, with a treatment effect of 3 units (instead of 10). Try out the analysis of covariance (Rubin 1977) estimate and compare with the rdd output and plot. Pick off the observations used in the half-bandwidth estimate and verify using a t-test or Wilcoxon test.
Extra: try out also the rdrobust package for this sharp design.       
 Solution for Review Question 1
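
A sketch for RQ#1, patterned on the rdd manual's toy example (RDestimate() as documented there) with the effect changed to 3:
library(rdd)
set.seed(266)
x <- runif(1000, -1, 1)
y <- 3 + 2 * x + 3 * (x >= 0) + rnorm(1000)   # sharp design at cutpoint 0, treatment effect 3
rd <- RDestimate(y ~ x, cutpoint = 0)
summary(rd)                                   # LATE plus half- and double-bandwidth estimates
plot(rd)
# Rubin (1977) style analysis of covariance for comparison
summary(lm(y ~ x + I(x >= 0)))
# for the half-bandwidth check, keep observations within the half bandwidth reported by summary(rd)
# and compare groups with t.test() or wilcox.test()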

2. Systematic Assignment, "fuzzy design". Probabilistic assignment on the basis of the covariate.
i. Create artificial data with the following specification: 10,000 observations; premeasure (Y_uc in my session) Gaussian with mean 10 and variance 1. The effect of intervention (rho) for those in the treatment group is 2 (or close to 2) and uncorrelated with Y_uc. The probability of being in the treatment group depends on Y_uc but is not a deterministic step function ("sharp design"): Pr(treatment|Y_uc) = pnorm(Y_uc, 10, 1). Plot that function.
ii. Try out analysis of covariance with Y_uc as covariate. Obtain a confidence interval for the effect of the treatment.
iii. Try out the fancy econometric estimators (using finite support) as in the rdd package. See if you find that they work poorly in this very basic fuzzy design example.
Extra: try out also the rdrobust package for this fuzzy design.       
 Solution for Review Question 2
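
A base-R sketch for parts (i) and (ii) of RQ#2; the outcome noise term is this sketch's own assumption:
set.seed(266)
n    <- 10000
y_uc <- rnorm(n, 10, 1)                        # premeasure, N(10, 1)
curve(pnorm(x, 10, 1), from = 7, to = 13,
      xlab = "Y_uc", ylab = "Pr(treatment | Y_uc)")   # (i) the assignment probability function
tx   <- rbinom(n, 1, pnorm(y_uc, 10, 1))       # probabilistic (fuzzy) assignment
y    <- y_uc + 2 * tx + rnorm(n, 0, 0.5)       # treatment effect 2; the added noise is an assumption of this sketch
# (ii) analysis of covariance with Y_uc as the covariate
anc <- lm(y ~ y_uc + tx)
confint(anc)["tx", ]                           # interval should cover the effect of 2
# (iii) compare with the rdd package's fuzzy-design estimators, e.g. RDestimate(y ~ y_uc + tx, cutpoint = 10)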

3. Dose-response functions. IPW (aka importance sampling) can't hit the curve? Can't hit anything??
In week 9 Computing Corner we showed results for ADRF (average dose-response function) estimates using Imbens' very clever artificial data example from the linked causaldrf vignette (see also the CC_9 slides).
IPW results (see Weeks 3 and 4 Computing Corner for examples with binary treatments) were notable for apparently bad performance (all the other estimates did pretty well). Keep in mind this artificial-data test is not even a "phase 2" hurdle, as we are given the selection variables (X_1, X_2) that are responsible for individuals selecting dose (here denoted by T), apart from randomness.
I was asked in class why IPW appears to flop so badly, and gave a very brief response given the time, but I should have added that I am a skeptic about the value of IPW even in the binary treatment scenario.
As IPW is dominant in applications like long-term occupational exposures (to bad stuff), the dose-response setting is quite relevant. The artificial-data ADRF has an important feature, a non-monotonic dip, reminiscent of alcohol or even salt (a bit above zero is better than zero) for health outcomes. So, for another look at IPW, I tried to make a much easier example, with a basically straight-line ADRF (just with a little wiggle), by limiting dose (T) to > .5.
So try out the comparison of the hi_estimate (shown in class) and the iptw_estimate, both from the causaldrf package, with the true ADRF from the artificial data construction, using values T > .5 (about half the data).
Are we any happier with the value of IPW (importance sampling)? The solution indicates to me: "no", YMMV.       
 Solution for Review Question 3
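
A generic sketch of IPW for a continuous dose using a normal generalized propensity score and stabilized weights. This is not the causaldrf iptw_est call itself; it assumes a data frame dat with dose T, outcome Y, and selection covariates X1 and X2 generated as in the vignette, restricted to T > .5:
sub <- subset(dat, T > 0.5)                    # keep the roughly straight-line part of the ADRF
# generalized propensity score: normal model for dose given the selection covariates
t_mod <- lm(T ~ X1 + X2, data = sub)
gps   <- dnorm(sub$T, mean = fitted(t_mod), sd = summary(t_mod)$sigma)
sub$w_st <- dnorm(sub$T, mean = mean(sub$T), sd = sd(sub$T)) / gps   # stabilized IPW weights
# weighted dose-response fit, to set against the true ADRF and the hi_est curve
ipw_fit <- lm(Y ~ poly(T, 2), data = sub, weights = w_st)
grid <- data.frame(T = seq(0.5, max(sub$T), length.out = 50))
plot(grid$T, predict(ipw_fit, newdata = grid), type = "l",
     xlab = "dose T", ylab = "estimated ADRF (IPW)")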