EBookClubs

Read Books & Download eBooks Full Online

Book Semiparametric Estimation of Treatment Effects in Randomized Experiments

Download or read book Semiparametric Estimation of Treatment Effects in Randomized Experiments written by Susan Athey and published by . This book was released on 2021 with total page pages. Available in PDF, EPUB and Kindle. Book excerpt: We develop new semiparametric methods for estimating treatment effects. We focus on a setting where the outcome distributions may be thick-tailed, where treatment effects are small, where sample sizes are large, and where assignment is completely random. This setting is of particular interest in recent experimentation in tech companies. We propose using parametric models for the treatment effects, as opposed to parametric models for the full outcome distributions. This leads to semiparametric models for the outcome distributions. We derive the semiparametric efficiency bound for this setting and propose efficient estimators. In the case with a constant treatment effect, one of the proposed estimators has an interesting interpretation as a weighted average of quantile treatment effects, with the weights proportional to (minus) the second derivative of the log of the density of the potential outcomes. Our analysis also results in an extension of Huber's model and the trimmed mean to include asymmetry and a simplified condition on linear combinations of order statistics, which may be of independent interest.
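
The weighted-quantile interpretation mentioned in the excerpt can be written schematically as below; the normalization of the weights and the precise density entering them are illustrative assumptions made here, not details taken from the book.

```latex
% Schematic: under a constant treatment effect, the estimator reads as a
% weighted average of quantile treatment effects, with weights proportional
% to minus the second derivative of the log potential-outcome density.
\[
  \hat{\tau} \;=\; \int_0^1 w(u)\,\bigl(F_1^{-1}(u) - F_0^{-1}(u)\bigr)\,du,
  \qquad
  w(u) \;\propto\; -\,\frac{d^2}{dy^2}\,\log f(y)\Big|_{y = F^{-1}(u)},
\]
% where F_1^{-1} and F_0^{-1} are the treated and control quantile functions,
% f is the potential-outcome density, and the weights are scaled to integrate
% to one (an illustrative normalization).
```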

Book Semiparametric Estimation of Treatment Effects Parameters

Download or read book Semiparametric Estimation of Treatment Effects Parameters written by Sergio Pinheiro Firpo and published by . This book was released on 2003 with total page 264 pages. Available in PDF, EPUB and Kindle. Book excerpt:

Book Online Testing and Semiparametric Estimation of Complex Treatment Effects

Download or read book Online Testing and Semiparametric Estimation of Complex Treatment Effects written by Miao Yu and published by . This book was released on 2021 with total page 88 pages. Available in PDF, EPUB and Kindle. Book excerpt:

Book Semiparametric Efficient Estimation of Treatment Effect in a Pretest Posttest Study with Missing Data

Download or read book Semiparametric Efficient Estimation of Treatment Effect in a Pretest Posttest Study with Missing Data written by and published by . This book was released on 2004 with total page pages. Available in PDF, EPUB and Kindle. Book excerpt: Inference on treatment effect in a pretest-posttest study is a routine objective in medicine, public health, and other fields, and a number of approaches have been advocated. Typically, subjects are randomized to two treatments, the response is measured at baseline and a prespecified follow-up time, and interest focuses on the effect of treatment on follow-up mean response. Covariate information at baseline and in the intervening period until follow-up may also be collected. Missing posttest response for some subjects is routine, and disregarding these missing cases can lead to biased and inefficient inference. Despite the widespread popularity of this design, a consensus on an appropriate method of analysis when no data are missing, let alone on an accepted practice for taking account of missing follow-up response, does not exist. We take a semiparametric perspective, making no assumptions about the distributions of baseline and posttest responses. Exploiting the work of Robins et al. (1994), we characterize the class of all consistent estimators for treatment effect, identify the efficient member of this class, and propose practical procedures for implementation. The result is a unified framework for handling pretest-posttest inferences when follow-up response may be missing at random that allows the analyst to incorporate baseline and intervening information so as to improve efficiency of inference. Simulation studies and application to data from an HIV clinical trial illustrate the utility of the approach.
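
To make the Robins et al. (1994) machinery the excerpt builds on concrete, here is a minimal sketch of an augmented inverse-probability-weighted estimate of the follow-up mean within one treatment arm under missingness at random. The efficient estimator characterized in the work itself incorporates more information; the variable names and the logistic/linear working models below are illustrative assumptions.

```python
# Sketch: AIPW-style estimate of the follow-up mean in one treatment arm when
# some posttest responses are missing at random given baseline covariates X.
# R = 1 if the posttest Y is observed (Y may be NaN where R = 0).
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_followup_mean(X, R, Y):
    # P(R = 1 | X): probability the follow-up response is observed.
    pi = np.clip(
        LogisticRegression(max_iter=1000).fit(X, R).predict_proba(X)[:, 1],
        0.01, 0.99)
    # Working regression E[Y | X], fitted on the observed cases only.
    m = LinearRegression().fit(X[R == 1], Y[R == 1]).predict(X)
    # Augmented IPW estimator: weighted observed outcomes plus a
    # regression-based augmentation term.
    Y_filled = np.where(R == 1, Y, 0.0)  # unobserved Y never enters the weighted term
    return np.mean(R * Y_filled / pi - (R / pi - 1.0) * m)

# The treatment effect on follow-up mean response would then be the difference
# of this quantity computed separately in the treated and control arms.
```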

Book Vetus latina

    Book Details:
  • Author :
  • Publisher :
  • Release : 1953
  • ISBN :
  • Pages : pages

Download or read book Vetus latina written by and published by . This book was released on 1953 with total page pages. Available in PDF, EPUB and Kindle. Book excerpt:

Book Targeted Learning

    Book Details:
  • Author : Mark J. van der Laan
  • Publisher : Springer Science & Business Media
  • Release : 2011-06-17
  • ISBN : 1441997822
  • Pages : 628 pages

Download or read book Targeted Learning written by Mark J. van der Laan and published by Springer Science & Business Media. This book was released on 2011-06-17 with total page 628 pages. Available in PDF, EPUB and Kindle. Book excerpt: The statistics profession is at a unique point in history. The need for valid statistical tools is greater than ever; data sets are massive, often containing hundreds of thousands of measurements for a single subject. The field is ready to move towards clear, objective benchmarks under which tools can be evaluated. Targeted learning allows (1) the full generalization and utilization of cross-validation as an estimator selection tool so that the subjective choices made by humans are now made by the machine, and (2) targeting the fitting of the probability distribution of the data toward the target parameter representing the scientific question of interest. This book is aimed at both statisticians and applied researchers interested in causal inference and general effect estimation for observational and experimental data. Part I is an accessible introduction to super learning and the targeted maximum likelihood estimator, including related concepts necessary to understand and apply these methods. Parts II-IX handle complex data structures and topics that applied researchers will immediately recognize from their own research, including time-to-event outcomes, direct and indirect effects, positivity violations, case-control studies, censored data, longitudinal data, and genomic studies.
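
As a rough illustration of the targeting step described above, here is a minimal sketch of a targeted maximum likelihood estimate of the average treatment effect for a binary outcome. Plain logistic regressions stand in for the super-learning fits the book recommends, cross-validation is omitted, and the two-covariate fluctuation shown is one common variant rather than the book's prescribed implementation.

```python
# Minimal TMLE sketch for the ATE with binary outcome Y, binary treatment A,
# and covariate matrix X (all NumPy arrays).
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

def tmle_ate(X, A, Y):
    # Step 1: initial outcome regression Qbar(a, x) ~ P(Y = 1 | A = a, X = x).
    q_fit = LogisticRegression(max_iter=1000).fit(np.column_stack([A, X]), Y)
    Q1 = np.clip(q_fit.predict_proba(np.column_stack([np.ones_like(A), X]))[:, 1], 1e-4, 1 - 1e-4)
    Q0 = np.clip(q_fit.predict_proba(np.column_stack([np.zeros_like(A), X]))[:, 1], 1e-4, 1 - 1e-4)
    QA = np.where(A == 1, Q1, Q0)

    # Step 2: propensity score g(x) = P(A = 1 | X = x), truncated for stability.
    g = np.clip(LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1], 0.025, 0.975)

    # Step 3: targeting (fluctuation) step: logistic regression of Y on the
    # "clever covariates", with the initial fit entering as an offset.
    H = np.column_stack([A / g, (1 - A) / (1 - g)])
    eps = sm.GLM(Y, H, family=sm.families.Binomial(),
                 offset=np.log(QA / (1 - QA))).fit().params

    # Step 4: update the counterfactual predictions and take the plug-in mean.
    Q1_star = expit(np.log(Q1 / (1 - Q1)) + eps[0] / g)
    Q0_star = expit(np.log(Q0 / (1 - Q0)) + eps[1] / (1 - g))
    return np.mean(Q1_star - Q0_star)
```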

Book Essays on Treatment Effect Estimation and Treatment Choice Learning

Download or read book Essays on Treatment Effect Estimation and Treatment Choice Learning written by Liqiang Shi and published by . This book was released on 2022 with total page 119 pages. Available in PDF, EPUB and Kindle. Book excerpt: This dissertation consists of three chapters that study treatment effect estimation and treatment choice learning under the potential outcome framework (Neyman, 1923; Rubin, 1974). The first two chapters study how to efficiently combine an experimental sample with an auxiliary observational sample when estimating treatment effects. In chapter 1, I derive a new semiparametric efficiency bound under the two-sample setup for estimating ATE and other functions of the average potential outcomes. The efficiency bound for estimating ATE with an experimental sample alone is derived in Hahn (1998) and has since become an important reference point for studies that aim at improving ATE estimation. This chapter answers how an auxiliary sample containing only observable characteristics (covariates, or features) can lower this efficiency bound. The newly obtained bound has an intuitive expression and shows that the (maximum possible) amount of variance reduction depends positively on two factors: 1) the size of the auxiliary sample, and 2) how well the covariates predict the individual treatment effect. The latter naturally motivates having high-dimensional covariates and the adoption of modern machine learning methods to avoid over-fitting. In chapter 2, under the same setup, I propose a two-stage machine learning (ML) imputation estimator that achieves the efficiency bound derived in chapter 1, so that no other regular estimator for ATE can have lower asymptotic variance in the same setting. This estimator involves two steps. In the first step, conditional average potential outcome functions are estimated nonparametrically via ML, which are then used to impute the unobserved potential outcomes for every unit in both samples. In the second step, the imputed potential outcomes are aggregated together in a robust way to produce the final estimate. Adopting the cross-fitting technique proposed in Chernozhukov et al. (2018), our two-step estimator can use a wide range of supervised ML tools in its first step, while maintaining valid inference to construct confidence intervals and perform hypothesis tests. In fact, any method that estimates the relevant conditional mean functions consistently in square norm, with no rate requirement, will lead to efficiency through the proposed two-step procedure. I also show that cross-fitting is not necessary when the first step is implemented via LASSO or post-LASSO. Furthermore, our estimator is robust in the sense that it remains consistent and root-n normal (though no longer efficient) even if the first-step estimators are inconsistent. Chapter 3 (coauthored with Kirill Ponomarev) studies model selection in treatment choice learning. When treatment effects are heterogeneous, a decision maker, given either experimental or quasi-experimental data, can attempt to find a policy function that maps observable characteristics to treatment choices, aiming at maximizing utilitarian welfare. When doing so, one often has to pick a constrained class of functions as candidates for the policy function. The choice of this function class poses a model selection problem. Following Mbakop and Tabord-Meehan (2021), we propose a policy learning algorithm that incorporates data-driven model selection. Our method also leverages doubly robust estimation (Athey and Wager, 2021) so that it retains the optimal root-n rate in expected regret in general setups, including quasi-experiments where propensity scores are unknown. We also refine some related results in the literature and derive a new finite-sample lower bound on expected regret to show that the root-n rate is indeed optimal.
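
A rough sketch of the two-step imputation idea follows, under the assumption of a completely randomized experiment with a known treatment share p. Gradient boosting is a stand-in for the ML first step, cross-fitting is omitted for brevity, and the aggregation shown (imputed effects averaged over both samples plus a doubly robust correction on the experimental sample) is one natural implementation, not necessarily the dissertation's estimator.

```python
# Two-sample ATE sketch: experimental sample (X_e, A, Y) plus an auxiliary
# covariate-only sample X_a, with known randomization probability p.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def two_sample_ate(X_e, A, Y, X_a, p=0.5):
    # First step: estimate conditional average potential outcome functions.
    mu1 = GradientBoostingRegressor().fit(X_e[A == 1], Y[A == 1])
    mu0 = GradientBoostingRegressor().fit(X_e[A == 0], Y[A == 0])

    # Impute individual treatment effects for every unit in both samples.
    X_all = np.vstack([X_e, X_a])
    tau_imputed = mu1.predict(X_all) - mu0.predict(X_all)

    # Doubly robust correction, available only on the experimental sample.
    correction = (A * (Y - mu1.predict(X_e)) / p
                  - (1 - A) * (Y - mu0.predict(X_e)) / (1 - p))
    return np.mean(tau_imputed) + np.mean(correction)
```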

Book Causality

    Book Details:
  • Author : Carlo Berzuini
  • Publisher : John Wiley & Sons
  • Release : 2012-06-04
  • ISBN : 1119941733
  • Pages : 387 pages

Download or read book Causality written by Carlo Berzuini and published by John Wiley & Sons. This book was released on 2012-06-04 with total page 387 pages. Available in PDF, EPUB and Kindle. Book excerpt: A state-of-the-art volume on statistical causality. Causality: Statistical Perspectives and Applications presents a wide-ranging collection of seminal contributions by renowned experts in the field, providing a thorough treatment of all aspects of statistical causality. It covers the various formalisms in current use, methods for applying them to specific problems, and the special requirements of a range of examples from medicine, biology and economics to political science. This book provides a clear account and comparison of formal languages, concepts and models for statistical causality; addresses examples from medicine, biology, economics and political science to aid the reader's understanding; is authored by leading experts in their field; and is written in an accessible style. Postgraduates, professional statisticians and researchers in academia and industry will benefit from this book.

Book Handbook of Field Experiments

Download or read book Handbook of Field Experiments written by Esther Duflo and published by Elsevier. This book was released on 2017-03-21 with total page 530 pages. Available in PDF, EPUB and Kindle. Book excerpt: Handbook of Field Experiments provides tactics on how to conduct experimental research, also presenting a comprehensive catalog of new results from research and areas that remain to be explored. This updated addition to the series includes entire chapters on field experiments, the politics and practice of social experiments, the methodology and practice of RCTs, and the econometrics of randomized experiments. These topics apply to a wide variety of fields, from politics and education to firm productivity, providing readers with a resource that sheds light on timely issues, such as robustness and external validity. Separating itself from circumscribed debates of specialists, this volume surpasses in usefulness the many journal articles and narrowly-defined books written by practitioners. - Balances methodological insights with analyses of principal findings and suggestions for further research - Appeals broadly to social scientists seeking to develop an expertise in field experiments - Strives to be analytically rigorous - Written in language that is accessible to graduate students and non-specialist economists

Book A First Course in Bayesian Statistical Methods

Download or read book A First Course in Bayesian Statistical Methods written by Peter D. Hoff and published by Springer Science & Business Media. This book was released on 2009-06-02 with total page 270 pages. Available in PDF, EPUB and Kindle. Book excerpt: A self-contained introduction to probability, exchangeability and Bayes’ rule provides a theoretical understanding of the applied material. Numerous examples with R-code that can be run "as-is" allow the reader to perform the data analyses themselves. The development of Monte Carlo and Markov chain Monte Carlo methods in the context of data analysis examples provides motivation for these computational methods.

Book Efficient Estimation of Average Treatment Effects Using the Estimated Propensity Score

Download or read book Efficient Estimation of Average Treatment Effects Using the Estimated Propensity Score written by Keisuke Hirano and published by . This book was released on 2000 with total page 68 pages. Available in PDF, EPUB and Kindle. Book excerpt: We are interested in estimating the average effect of a binary treatment on a scalar outcome. If assignment to the treatment is independent of the potential outcomes given pretreatment variables, biases associated with simple treatment-control average comparisons can be removed by adjusting for differences in the pre-treatment variables. Rosenbaum and Rubin (1983, 1984) show that adjusting solely for differences between treated and control units in a scalar function of the pre-treatment variables, the propensity score, also removes the entire bias associated with differences in pre-treatment variables. Thus it is possible to obtain unbiased estimates of the treatment effect without conditioning on a possibly high-dimensional vector of pre-treatment variables. Although adjusting for the propensity score removes all the bias, this can come at the expense of efficiency. We show that weighting with the inverse of a nonparametric estimate of the propensity score, rather than the true propensity score, leads to efficient estimates of the various average treatment effects. This result holds whether the pre-treatment variables have discrete or continuous distributions. We provide intuition for this result in a number of ways. First, we show that with discrete covariates, exact adjustment for the estimated propensity score is identical to adjustment for the pre-treatment variables. Second, we show that weighting by the inverse of the estimated propensity score can be interpreted as an empirical likelihood estimator that efficiently incorporates the information about the propensity score. Finally, we make a connection to other results on efficient estimation through weighting in the context of variable probability sampling.
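
A minimal sketch of the weighting estimator the excerpt discusses, with a plain logistic regression standing in for the nonparametric propensity-score estimator analyzed in the paper; the variable names, weight truncation, and Hajek-style normalization are illustrative choices.

```python
# Inverse-propensity-weighted ATE using an *estimated* propensity score.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, A, Y):
    e_hat = np.clip(
        LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1],
        0.01, 0.99)
    w1 = A / e_hat               # weights for treated units
    w0 = (1 - A) / (1 - e_hat)   # weights for control units
    # Normalized (Hajek) form, often preferred in practice for stability.
    return np.sum(w1 * Y) / np.sum(w1) - np.sum(w0 * Y) / np.sum(w0)
```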

Book Targeted Maximum Likelihood Estimation of Treatment Effects in Randomized Controlled Trials and Drug Safety Analysis

Download or read book Targeted Maximum Likelihood Estimation of Treatment Effects in Randomized Controlled Trials and Drug Safety Analysis written by Kelly Moore and published by . This book was released on 2009 with total page 238 pages. Available in PDF, EPUB and Kindle. Book excerpt: In most randomized controlled trials (RCTs), investigators typically rely on estimators of causal effects that do not exploit the information in the many baseline covariates that are routinely collected in addition to treatment and the outcome. Ignoring these covariates can lead to a significant loss in estimation efficiency and thus power. Statisticians have underscored the gain in efficiency that can be achieved from covariate adjustment in RCTs with a focus on problems involving linear models. Despite recent theoretical advances, there has been a reluctance to adjust for covariates for two primary reasons: 1) covariate-adjusted estimates based on non-linear regression models have been shown to be less precise than unadjusted methods, and 2) concern over the opportunity to manipulate the model selection process for covariate adjustment in order to obtain favorable results. This dissertation describes statistical approaches for covariate adjustment in RCTs using targeted maximum likelihood methodology for estimation of causal effects with binary and right-censored survival outcomes. Chapter 2 provides the targeted maximum likelihood approach to covariate adjustment in RCTs with binary outcomes, focusing on the estimation of the risk difference, relative risk and odds ratio. In such trials, investigators generally rely on the unadjusted estimate as the literature indicates that covariate-adjusted estimates based on logistic regression models are less efficient. The crucial step that has been missing when adjusting for covariates is that one must integrate/average the adjusted estimate over those covariates in order to obtain the population-level effect. Chapter 2 shows that covariate adjustment in RCTs using logistic regression models can be mapped, by averaging over the covariate(s), to a fully robust and efficient estimator of the marginal effect, which equals a targeted maximum likelihood estimator. Simulation studies are provided that demonstrate that this targeted maximum likelihood method increases efficiency and power over the unadjusted method, particularly for smaller sample sizes, even when the regression model is misspecified. Chapter 3 applies the methodology presented in Chapter 2 to a sampled RCT dataset with a binary outcome to further explore the origin of the gains in efficiency and provide a criterion for determining whether a gain in efficiency can be achieved with covariate adjustment over the unadjusted method. This chapter demonstrates through simulation studies and the data analysis that not only is the relation between $\hat{R}^2$ and efficiency gain important, but also the presence of empirical confounding. Based on the results of these studies, a complete strategy for analyzing this type of data is formalized that provides a robust method for covariate adjustment while protecting investigators from misuse of these methods for obtaining favorable inference. Chapters 4 and 5 focus on estimation of causal effects with right-censored survival outcomes. Time-to-event outcomes are naturally subject to right-censoring due to early patient withdrawals. In Chapter 4, the targeted maximum likelihood methodology is applied to the estimation of treatment-specific survival at a fixed end-point in time. In Chapter 5, the same methodology is applied to provide a competitor to the logrank test. The proposed covariate-adjusted estimators, under no or uninformative censoring, do not require any additional parametric modeling assumptions, and under informative censoring, are consistent under consistent estimation of the censoring mechanism or the conditional hazard for survival. These targeted maximum likelihood estimators have two important advantages over the Kaplan-Meier and logrank approaches: 1) they exploit covariates to improve efficiency, and 2) they are consistent in the presence of informative censoring. These properties are demonstrated through simulation studies. Chapter 6 concludes with a summary of the preceding chapters and a discussion of future research directions.
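
The averaging step the excerpt singles out (fit an adjusted logistic model, then average the predicted risks under each treatment arm over the sample) can be sketched as follows; the variable layout and the use of a plain logistic regression are illustrative assumptions, not the dissertation's exact procedure.

```python
# Covariate-adjusted marginal risk difference in an RCT with binary outcome Y:
# fit a logistic model in A and baseline covariates X, then average the
# predicted risks under A = 1 and A = 0 over the observed covariate values.
import numpy as np
from sklearn.linear_model import LogisticRegression

def marginal_risk_difference(X, A, Y):
    model = LogisticRegression(max_iter=1000).fit(np.column_stack([A, X]), Y)
    p1 = model.predict_proba(np.column_stack([np.ones_like(A), X]))[:, 1]
    p0 = model.predict_proba(np.column_stack([np.zeros_like(A), X]))[:, 1]
    return np.mean(p1) - np.mean(p0)  # population-level (marginal) effect
```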

Book Semiparametric Approaches for Average Causal Effect and Precision Medicine

Download or read book Semiparametric Approaches for Average Causal Effect and Precision Medicine written by Trinetri Ghosh and published by . This book was released on 2021 with total page pages. Available in PDF, EPUB and Kindle. Book excerpt: The average causal effect is often used to compare treatments or interventions in both randomized and observational studies. It has a wide variety of applications in the medical, natural, and social sciences, for example in psychology, political science, and economics. Due to the increased availability of high-dimensional pre-treatment information, dimension reduction is a major methodological issue when estimating the average causal effect of a non-randomized treatment in observational studies. Assumptions are often made to ensure model identifiability and to establish theoretical guarantees for nuisance conditional models, but these assumptions can be restrictive. In the first work (Chapter 2), to estimate the average causal effect in an observational study, we use a semiparametric locally efficient dimension-reduction approach to assess the treatment assignment mechanisms and average responses in both the treated and the non-treated groups. We then integrate our results using imputation, inverse probability weighting, and doubly robust augmentation estimators. Doubly robust estimators are locally efficient, and imputation estimators are super-efficient when the response models are correct. To take advantage of both procedures, we introduce a shrinkage estimator that combines the two. The proposed estimators retain the double robustness property while improving on the variance when the response model is correct. We demonstrate the performance of these estimators using simulated experiments and a real data set on the effect of maternal smoking on baby birth weight. In the second work (Chapter 3), we implement a semiparametrically efficient method in an emerging area, precision medicine, an approach to tailoring disease prevention and treatment that takes into account individual variability in genes, environment, and lifestyle. The goal of precision medicine is to deploy the appropriate, optimal treatment based on a patient's individual characteristics to maximize clinical benefit. In this work, we propose a new modeling and estimation approach to select the optimal treatment regime from two different options by constructing a robust estimating equation. The method is protected against misspecification of the propensity score function, the outcome regression model for the non-treated group, or the potentially non-monotonic treatment difference model. Nonparametric smoothing and dimension reduction are incorporated to estimate the treatment difference model. We then identify the optimal treatment by maximizing the value function and establish theoretical properties of the treatment assignment strategy. We illustrate the performance and effectiveness of our proposed estimators through extensive simulation studies and a real-world application to Huntington's disease patients. In the third work (Chapter 4), we aim to obtain optimal individualized treatment rules in a covariate-adjusted randomization clinical trial with many covariates. We model the treatment effect with an unspecified function of a single index of the covariates and leave the baseline response completely arbitrary. We devise a class of estimators to consistently estimate the treatment effect function and its associated index while bypassing the estimation of the baseline response, which is subject to the curse of dimensionality. We further develop inference tools to identify predictive covariates and isolate effective treatment regions. The usefulness of the methods is demonstrated in both simulations and a clinical data example.
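
For orientation, the three estimator families the excerpt names for the average causal effect (imputation, inverse probability weighting, and the doubly robust augmented combination) can be sketched as below, with plain regression fits standing in for the semiparametric dimension-reduction estimators developed in the work.

```python
# Three textbook ATE estimators for observational data with covariates X,
# binary treatment A, and outcome Y.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def ate_estimators(X, A, Y):
    # Outcome regressions fitted separately in each treatment group.
    mu1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)
    mu0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)
    # Estimated propensity score, truncated for stability.
    e = np.clip(
        LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1],
        0.01, 0.99)

    imputation = np.mean(mu1 - mu0)
    ipw = np.mean(A * Y / e - (1 - A) * Y / (1 - e))
    doubly_robust = np.mean(mu1 - mu0
                            + A * (Y - mu1) / e
                            - (1 - A) * (Y - mu0) / (1 - e))
    return {"imputation": imputation, "ipw": ipw, "doubly_robust": doubly_robust}
```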

Book Handbook of Quantile Regression

Download or read book Handbook of Quantile Regression written by Roger Koenker and published by CRC Press. This book was released on 2017-10-12 with total page 739 pages. Available in PDF, EPUB and Kindle. Book excerpt: Quantile regression constitutes an ensemble of statistical techniques intended to estimate and draw inferences about conditional quantile functions. Median regression, as introduced in the 18th century by Boscovich and Laplace, is a special case. In contrast to conventional mean regression that minimizes sums of squared residuals, median regression minimizes sums of absolute residuals; quantile regression simply replaces symmetric absolute loss by asymmetric linear loss. Since its introduction in the 1970s by Koenker and Bassett, quantile regression has been gradually extended to a wide variety of data-analytic settings, including time series, survival analysis, and longitudinal data. By focusing attention on local slices of the conditional distribution of response variables, it is capable of providing a more complete, more nuanced view of heterogeneous covariate effects. Applications of quantile regression can now be found throughout the sciences, including astrophysics, chemistry, ecology, economics, finance, genomics, medicine, and meteorology. Software for quantile regression is now widely available in all the major statistical computing environments. The objective of this volume is to provide a comprehensive review of recent developments of quantile regression methodology illustrating its applicability in a wide range of scientific settings. The intended audience of the volume is researchers and graduate students across a diverse set of disciplines.
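
A small illustration of the asymmetric linear ("check") loss described above and a quantile regression fit on simulated heteroskedastic data; the data-generating process and the use of statsmodels' QuantReg are illustrative choices, one of several available implementations.

```python
import numpy as np
import statsmodels.api as sm

def check_loss(u, tau):
    # rho_tau(u) = u * (tau - 1{u < 0}): asymmetric linear loss at quantile tau.
    return u * (tau - (u < 0))

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
y = 1.0 + 0.5 * x + rng.standard_normal(500) * (1 + 0.3 * x)  # heteroskedastic noise
X = sm.add_constant(x)

median_fit = sm.QuantReg(y, X).fit(q=0.5)   # median (0.5 quantile) regression
upper_fit = sm.QuantReg(y, X).fit(q=0.9)    # 0.9 conditional quantile
print(median_fit.params, upper_fit.params)
print(check_loss(y - median_fit.predict(X), 0.5).mean())  # average check loss
```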

Book Identification for Prediction and Decision

Download or read book Identification for Prediction and Decision written by Charles F. Manski and published by Harvard University Press. This book was released on 2009-06-30 with total page 370 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book is a full-scale exposition of Charles Manski's new methodology for analyzing empirical questions in the social sciences. He recommends that researchers first ask what can be learned from data alone, and then ask what can be learned when data are combined with credible weak assumptions. Inferences predicated on weak assumptions, he argues, can achieve wide consensus, while ones that require strong assumptions almost inevitably are subject to sharp disagreements. Building on the foundation laid in the author's Identification Problems in the Social Sciences (Harvard, 1995), the book's fifteen chapters are organized in three parts. Part I studies prediction with missing or otherwise incomplete data. Part II concerns the analysis of treatment response, which aims to predict outcomes when alternative treatment rules are applied to a population. Part III studies prediction of choice behavior. Each chapter juxtaposes developments of methodology with empirical or numerical illustrations. The book employs a simple notation and mathematical apparatus, using only basic elements of probability theory.

Book Methods in Comparative Effectiveness Research

Download or read book Methods in Comparative Effectiveness Research written by Constantine Gatsonis and published by CRC Press. This book was released on 2017-02-24 with total page 547 pages. Available in PDF, EPUB and Kindle. Book excerpt: Comparative effectiveness research (CER) is the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition or to improve the delivery of care (IOM 2009). CER is conducted to develop evidence that will aid patients, clinicians, purchasers, and health policy makers in making informed decisions at both the individual and population levels. CER encompasses a very broad range of types of studies—experimental, observational, prospective, retrospective, and research synthesis. This volume covers the main areas of quantitative methodology for the design and analysis of CER studies. The volume has four major sections—causal inference; clinical trials; research synthesis; and specialized topics. The audience includes CER methodologists, quantitative-trained researchers interested in CER, and graduate students in statistics, epidemiology, and health services and outcomes research. The book assumes a masters-level course in regression analysis and familiarity with clinical research.