EBookClubs

Read Books & Download eBooks Full Online

Book Estimating Causal Treatment Effects Via the Propensity Score and Estimating Survival Distributions in Clinical Trials That Follow Two Stage Randomization Designs

Download or read book Estimating Causal Treatment Effects Via the Propensity Score and Estimating Survival Distributions in Clinical Trials That Follow Two Stage Randomization Designs written by and published by . This book was released on 2001 with total page pages. Available in PDF, EPUB and Kindle. Book excerpt: Estimation of treatment effects with causal interpretation from observational data is complicated by the fact that exposure to treatment is confounded with subject characteristics. The propensity score, the probability of exposure to treatment conditional on covariates, is the basis for two competing classes of approaches for adjusting for confounding: methods based on stratification of observations by quantiles of estimated propensity scores, and methods based on weighting individual observations by weights depending on estimated propensity scores. We review these approaches and investigate their relative performance. Some clinical trials follow a design in which patients are randomized to a primary therapy upon entry, followed by another randomization to maintenance therapy contingent upon disease remission. Ideally, analysis would allow different treatment policies, i.e., combinations of primary and maintenance therapy if specified up front, to be compared. Standard practice is to conduct separate analyses for the primary and follow-up treatments, which does not address this issue directly. We propose consistent estimators of the survival distribution and mean survival time for each treatment policy in such two-stage studies and derive large sample properties. The methods are demonstrated on a leukemia clinical trial data set and through simulation.
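
The excerpt above contrasts two propensity score adjustments: stratification on quantiles of the estimated score versus inverse-probability weighting. A minimal sketch of both estimators on simulated data (the data-generating model, sample size, and quintile choice are illustrative, not taken from the dissertation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)                      # confounder
t = rng.binomial(1, 1 / (1 + np.exp(-x)))   # treatment probability depends on x
y = 2.0 * t + x + rng.normal(size=n)        # true treatment effect = 2

# estimate the propensity score with a small Newton-method logistic regression
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-(X @ beta)))
    beta += np.linalg.solve(X.T @ (X * (p * (1 - p))[:, None]), X.T @ (t - p))
ps = 1 / (1 + np.exp(-(X @ beta)))

# (a) stratification: average the treated-control difference within score quintiles
stratum = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))
diffs = [y[(stratum == s) & (t == 1)].mean() - y[(stratum == s) & (t == 0)].mean()
         for s in range(5)]
ate_strat = float(np.mean(diffs))

# (b) inverse probability weighting (Horvitz-Thompson form)
ate_ipw = float(np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps)))
print(ate_strat, ate_ipw)   # both should land near the true effect of 2
```

Stratification on five quintiles typically leaves a small residual bias, while weighting is consistent given a correct score model, which is the kind of trade-off the dissertation investigates.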

Book Estimating Causal Treatment Effects Via the Propensity Score and Estimating Survival Distributions in Clinical Trials that Follow Two stage Randomization Designs

Download or read book Estimating Causal Treatment Effects Via the Propensity Score and Estimating Survival Distributions in Clinical Trials that Follow Two stage Randomization Designs written by Jared Kenneth Lunceford and published by . This book was released on 2001 with total page 75 pages. Available in PDF, EPUB and Kindle. Book excerpt:

Book Efficient Estimation of The Survival Distribution and Related Quantities of Treatment Policies in Two Stage Randomization Designs in Clinical Trials

Download or read book Efficient Estimation of The Survival Distribution and Related Quantities of Treatment Policies in Two Stage Randomization Designs in Clinical Trials written by and published by . This book was released on 2003 with total page pages. Available in PDF, EPUB and Kindle. Book excerpt: Two-stage designs are common in therapeutic clinical trials, such as cancer or AIDS treatments. In a two-stage design, patients are initially treated with one induction (primary) therapy and then, depending upon their response and consent, are treated with a maintenance therapy, sometimes to intensify the effect of the first-stage therapy. The goal is to compare different combinations of primary and maintenance (intensification) therapies to find the combination that is most beneficial. To achieve this goal, patients are initially randomized to one of several induction therapies and then, if they are eligible for the second-stage randomization, are offered randomization to one of several maintenance therapies. In practice, the analysis is usually conducted in two separate stages, which does not directly address the major objective of finding the best combination. Recently, Lunceford et al. (2002, Biometrics, 58, 48-57) introduced ad hoc estimators for the survival distribution and mean restricted survival time under different treatment policies. These estimators are consistent but not efficient, and do not include information from auxiliary covariates. In this dissertation we derive estimators that are easy to compute and are more efficient than previous estimators. We also show how to improve efficiency further by taking into account additional information from auxiliary variables. Large sample properties of these estimators are derived and comparisons with other estimators are made using simulation. We apply our estimators to a leukemia clinical trial data set that motivated this study.
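
The weighting idea behind such treatment-policy estimators can be sketched on simulated data: patients whose observed path is consistent with the policy of interest are reweighted by the inverse of their maintenance randomization probability. This is a deliberately simplified illustration (no censoring, full consent, known randomization probability of 0.5), not the authors' exact estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50000
resp = rng.binomial(1, 0.6, n)        # response to the induction therapy
z = rng.binomial(1, 0.5, n)           # second-stage randomization: B1 (z=1) vs B2
time = np.where(resp == 0,
                rng.exponential(1.0, n),           # non-responders
                np.where(z == 1,
                         rng.exponential(2.0, n),  # responders maintained on B1
                         rng.exponential(1.5, n))) # responders maintained on B2

# policy "induction then B1": non-responders are consistent with every policy
# (weight 1); responders get weight 1{B1} / P(B1) = z / 0.5
w = np.where(resp == 0, 1.0, z / 0.5)
mean_b1 = float(np.sum(w * time) / np.sum(w))
print(mean_b1)   # true policy mean: 0.4 * 1.0 + 0.6 * 2.0 = 1.6
```

Responders assigned to the other maintenance arm receive weight zero, so the weighted sample mimics a trial in which everyone followed the policy of interest.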

Book Efficient Estimation of the Survival Distribution and Related Quantities of Treatment Policies in Two stage Randomization Designs in Clinical Trials

Download or read book Efficient Estimation of the Survival Distribution and Related Quantities of Treatment Policies in Two stage Randomization Designs in Clinical Trials written by Abdus Shakoor Fazlul Wahed and published by . This book was released on 2003 with total page 92 pages. Available in PDF, EPUB and Kindle. Book excerpt: Keywords: Missing data, Survival distributions, Semiparametric inference, Two-stage designs.

Book Statistical Analysis in Two Stage Randomization Designs in Clinical Trials

Download or read book Statistical Analysis in Two Stage Randomization Designs in Clinical Trials written by and published by . This book was released on 2004 with total page pages. Available in PDF, EPUB and Kindle. Book excerpt: Two-stage randomization designs are becoming more common in many clinical trials related to diseases such as cancer and HIV, where an induction therapy is given followed by a maintenance therapy depending on patients' response and consent. The main interest is to compare combinations of induction and maintenance therapies and to find the combination leading to the longest average survival time. However, in practice, the data analysis is typically conducted separately in two stages. In this thesis, we tackle the problem based on treatment policies. We use the concepts of counting processes and risk sets as described by Fleming and Harrington (1991) to find weighted estimating equations whose solution gives an estimator for the cumulative hazard function, which, in turn, is used to derive an estimator for the overall survival distribution under a treatment policy with right-censored data. We call this estimator the Weighted Risk Set Estimator (WRSE). We show that the WRSE is consistent and asymptotically normally distributed. In addition to survival distribution estimation, we also consider the hypothesis testing problem. Since the log-rank test is the common method for hypothesis testing in survival analysis, we propose a test statistic using an inverse-weighted version of the log-rank test. We use simulation studies to demonstrate the properties of our method and use data from a clinical trial, Protocol 88923, conducted by the Cancer and Leukemia Group B (CALGB), to illustrate how to implement the method.
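
A rough sketch of the counting-process idea: apply policy-consistency weights to both events and risk sets, accumulate a weighted Nelson-Aalen cumulative hazard, and exponentiate to get a survival estimate under right censoring. The setup below is illustrative and much simpler than the WRSE as actually defined (known randomization probability, simple time-constant weights):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40000
resp = rng.binomial(1, 0.5, n)            # response to induction
z = rng.binomial(1, 0.5, n)               # maintenance randomization (B1 vs B2)
t_true = np.where(resp == 1,
                  np.where(z == 1, rng.exponential(2.0, n), rng.exponential(1.0, n)),
                  rng.exponential(1.0, n))
cens = rng.exponential(5.0, n)            # independent right censoring
obs = np.minimum(t_true, cens)
event = (t_true <= cens).astype(float)

# policy-consistency weights for "induction then B1"
w = np.where(resp == 0, 1.0, z / 0.5)

# weighted Nelson-Aalen: events and risk sets both carry the weights
order = np.argsort(obs)
obs_s, ev_s, w_s = obs[order], event[order], w[order]
at_risk = np.cumsum(w_s[::-1])[::-1]      # weighted size of the risk set
H = np.cumsum(w_s * ev_s / at_risk)       # cumulative hazard increments

t0 = 1.0
S_hat = float(np.exp(-H[np.searchsorted(obs_s, t0, side="right") - 1]))
print(S_hat)   # true S(1) under the policy: 0.5*exp(-1) + 0.5*exp(-0.5) ≈ 0.487
```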

Book Small Clinical Trials

    Book Details:
  • Author : Institute of Medicine
  • Publisher : National Academies Press
  • Release : 2001-01-01
  • ISBN : 0309171148
  • Pages : 221 pages

Download or read book Small Clinical Trials written by Institute of Medicine and published by National Academies Press. This book was released on 2001-01-01 with total page 221 pages. Available in PDF, EPUB and Kindle. Book excerpt: Clinical trials are used to elucidate the most appropriate preventive, diagnostic, or treatment options for individuals with a given medical condition. Perhaps the most essential feature of a clinical trial is that it aims to use results based on a limited sample of research participants to see if the intervention is safe and effective or if it is comparable to a comparison treatment. Sample size is a crucial component of any clinical trial. A trial with a small number of research participants is more prone to variability and carries a considerable risk of failing to demonstrate the effectiveness of a given intervention when one really is present. This may occur in phase I (safety and pharmacologic profiles), II (pilot efficacy evaluation), and III (extensive assessment of safety and efficacy) trials. Although phase I and II studies may have smaller sample sizes, they usually have adequate statistical power, which is the committee's definition of a "large" trial. Sometimes a trial with eight participants may have adequate statistical power, statistical power being the probability of rejecting the null hypothesis when the null hypothesis is false. Small Clinical Trials assesses the current methodologies and the appropriate situations for the conduct of clinical trials with small sample sizes.
This report assesses the published literature on various strategies such as (1) meta-analysis to combine disparate information from several studies including Bayesian techniques as in the confidence profile method and (2) other alternatives such as assessing therapeutic results in a single treated population (e.g., astronauts) by sequentially measuring whether the intervention is falling above or below a preestablished probability outcome range and meeting predesigned specifications as opposed to incremental improvement.
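
Statistical power, as defined in the excerpt, is the probability of rejecting the null hypothesis when it is false, and it is easy to approximate by Monte Carlo simulation. A hedged sketch showing why an eight-participant trial can be adequately powered only for a very large effect (normal-approximation test; effect sizes are illustrative, not from the book):

```python
import numpy as np

rng = np.random.default_rng(3)

def power(n_per_arm, effect, sims=2000, z_crit=1.96):
    """Monte Carlo power of a two-sample test (normal approximation) for a mean shift."""
    rejections = 0
    for _ in range(sims):
        a = rng.normal(0.0, 1.0, n_per_arm)
        b = rng.normal(effect, 1.0, n_per_arm)
        se = np.sqrt(a.var(ddof=1) / n_per_arm + b.var(ddof=1) / n_per_arm)
        if abs(b.mean() - a.mean()) / se > z_crit:
            rejections += 1
    return rejections / sims

p_big = power(4, 3.0)    # 8 participants total, very large effect: high power
p_small = power(4, 0.5)  # same trial, modest effect: badly underpowered
print(p_big, p_small)
```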

Book The Prevention and Treatment of Missing Data in Clinical Trials

Download or read book The Prevention and Treatment of Missing Data in Clinical Trials written by National Research Council and published by National Academies Press. This book was released on 2011-01-21 with total page 162 pages. Available in PDF, EPUB and Kindle. Book excerpt: Randomized clinical trials are the primary tool for evaluating new medical interventions. Randomization provides for a fair comparison between treatment and control groups, balancing out, on average, distributions of known and unknown factors among the participants. Unfortunately, a substantial percentage of data is often missing in these studies. This missing data reduces the benefit provided by the randomization and introduces potential biases in the comparison of the treatment groups. Missing data can arise for a variety of reasons, including the inability or unwillingness of participants to meet appointments for evaluation. And in some studies, some or all of data collection ceases when participants discontinue study treatment. Existing guidelines for the design and conduct of clinical trials, and the analysis of the resulting data, provide only limited advice on how to handle missing data. Thus, approaches to the analysis of data with an appreciable amount of missing values tend to be ad hoc and variable. The Prevention and Treatment of Missing Data in Clinical Trials concludes that a more principled approach to design and analysis in the presence of missing data is both needed and possible. Such an approach needs to focus on two critical elements: (1) careful design and conduct to limit the amount and impact of missing data and (2) analysis that makes full use of information on all randomized participants and is based on careful attention to the assumptions about the nature of the missing data underlying estimates of treatment effects.
In addition to the highest priority recommendations, the book offers more detailed recommendations on the conduct of clinical trials and techniques for analysis of trial data.

Book Statistical Techniques for Estimating Causal Effects in Biomedical Research

Download or read book Statistical Techniques for Estimating Causal Effects in Biomedical Research written by Claudia Coscia Requena and published by . This book was released on 2022 with total page 0 pages. Available in PDF, EPUB and Kindle. Book excerpt: Causal inference methods are statistical techniques used to analyse the causal effect of a treatment/exposure on an outcome. Their use has increased over the last decade, especially in the framework of observational studies, where the lack of randomization of the treatment/exposure may lead to confounding bias. These methods present great advantages over classic regression models due to their capability of reducing and controlling for confounding bias. This thesis begins by applying established techniques in real clinical scenarios; it then notes a lack of developed statistical methods for estimating causal effects in complex epidemiological scenarios. These findings support the main objective of this thesis, which is the development of causal inference methods to better understand and diagnose clinical and epidemiological outcomes. A comparison between the propensity score and classic regression models was made using an Intensive Care Unit database, where it was shown that, in the presence of confounding bias, the propensity score performed better. Moreover, based on a systematic review and meta-analysis, causal estimates from propensity scores and randomized controlled trials were compared. It was observed that similar estimations were obtained with both approaches...

Book Developing a Protocol for Observational Comparative Effectiveness Research  A User s Guide

Download or read book Developing a Protocol for Observational Comparative Effectiveness Research A User s Guide written by Agency for Health Care Research and Quality (U.S.) and published by Government Printing Office. This book was released on 2013-02-21 with total page 236 pages. Available in PDF, EPUB and Kindle. Book excerpt: This User’s Guide is a resource for investigators and stakeholders who develop and review observational comparative effectiveness research protocols. It explains how to (1) identify key considerations and best practices for research design; (2) build a protocol based on these standards and best practices; and (3) judge the adequacy and completeness of a protocol. Eleven chapters cover all aspects of research design, including: developing study objectives, defining and refining study questions, addressing the heterogeneity of treatment effect, characterizing exposure, selecting a comparator, defining and measuring outcomes, and identifying optimal data sources. Checklists of guidance and key considerations for protocols are provided at the end of each chapter. The User’s Guide was created by researchers affiliated with AHRQ’s Effective Health Care Program, particularly those who participated in AHRQ’s DEcIDE (Developing Evidence to Inform Decisions About Effectiveness) program. Chapters were subject to multiple internal and external independent reviews. For more information, please consult the Agency website: www.effectivehealthcare.ahrq.gov

Book Secondary Analysis of Electronic Health Records

Download or read book Secondary Analysis of Electronic Health Records written by MIT Critical Data and published by Springer. This book was released on 2016-09-09 with total page 435 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book trains the next generation of scientists representing different disciplines to leverage the data generated during routine patient care. It formulates a more complete lexicon of evidence-based recommendations and supports shared, ethical decision making by doctors with their patients. Diagnostic and therapeutic technologies continue to evolve rapidly, and both individual practitioners and clinical teams face increasingly complex ethical decisions. Unfortunately, the current state of medical knowledge does not provide the guidance to make the majority of clinical decisions on the basis of evidence. The present research infrastructure is inefficient and frequently produces unreliable results that cannot be replicated. Even randomized controlled trials (RCTs), the traditional gold standard of the research reliability hierarchy, are not without limitations. They can be costly, labor intensive, and slow, and can return results that are seldom generalizable to every patient population. Furthermore, many pertinent but unresolved clinical and medical systems issues do not seem to have attracted the interest of the research enterprise, which has come to focus instead on cellular and molecular investigations and single-agent (e.g., a drug or device) effects. For clinicians, the end result is a bit of a “data desert” when it comes to making decisions. The new research infrastructure proposed in this book will help the medical profession to make ethically sound and well-informed decisions for their patients.

Book Using a Two stage Propensity Score Matching Strategy and Multilevel Modeling to Estimate Treatment Effects in a Multisite Observational Study

Download or read book Using a Two stage Propensity Score Matching Strategy and Multilevel Modeling to Estimate Treatment Effects in a Multisite Observational Study written by Jordan Harry Rickles and published by . This book was released on 2012 with total page 208 pages. Available in PDF, EPUB and Kindle. Book excerpt: In this study I present, demonstrate, and test a method that extends the Stuart and Rubin (2008) multiple control group matching strategy to a multisite setting. Three primary phases define the proposed method: (1) a design phase, in which one uses a two-stage matching strategy to construct treatment and control groups that are well balanced along both unit- and site-level key pretreatment covariates; (2) an adjustment phase, in which the observed outcomes for non-local control group matches are adjusted to account for differences in the local and non-local matched control units; and (3) an analysis phase, in which one estimates average causal effects for the treated units and investigates heterogeneity in causal effects through multilevel modeling. The main novelty of the proposed method occurs in the design phase, where propensity score matching is executed in two stages. In the first stage, treatment units are matched to control units within the same site. In the second stage, treatment units without an acceptable within-site match are matched to control units in another site (between-site match). The two-stage matching method provides researchers with an alternative to strict within-site matching or matching that ignores the nested data structure (pooled matching). I employ an empirical illustration and a set of simulation studies to test the utility and feasibility of the proposed two-stage matching method. The results document the two-stage matching method's conceptual appeal, but indicate that effect estimation under the two-stage matching method does not, in general, outperform more traditional matching-based or regression-based methods. 
Alternative specifications within the proposed method can improve performance of two-stage matching. In addition to extending the work of Stuart and Rubin, this study complements the small set of studies that have examined propensity score matching in multisite settings and provides guidance for researchers looking to estimate treatment effects from a multisite observational study. The dissertation concludes with directions for future research and considerations for researchers conducting multisite observational studies.
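
The two-stage matching idea the blurb describes, within-site matching first, then cross-site matching for treated units without an acceptable local match, can be sketched with greedy caliper matching. The score, caliper, and data-generating process below are all illustrative stand-ins, not those of the dissertation:

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_sites = 3000, 10
site = rng.integers(0, n_sites, n)
x = rng.normal(size=n) + 0.3 * site              # covariate distribution shifts by site
t = rng.binomial(1, 1 / (1 + np.exp(-(x - 1.5))))
y = 1.0 * t + x + rng.normal(size=n)             # true treatment effect = 1

score = x            # stand-in for an estimated propensity score (monotone in x)
caliper = 0.1
controls = np.flatnonzero(t == 0)
matches, used = [], set()
for i in np.flatnonzero(t == 1):
    # stage 1: best unused same-site control within the caliper;
    # stage 2: otherwise fall back to controls from any site
    for pool in (controls[site[controls] == site[i]], controls):
        cand = [j for j in pool if j not in used and abs(score[j] - score[i]) < caliper]
        if cand:
            j = min(cand, key=lambda c: abs(score[c] - score[i]))
            matches.append((i, j))
            used.add(j)
            break

att = float(np.mean([y[i] - y[j] for i, j in matches]))
print(len(matches), att)   # ATT estimate, expected near the true effect of 1
```

Treated units with no acceptable match in either stage are simply dropped, which mirrors the caliper-matched estimand rather than the full-sample ATT.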

Book Instrumental Variable and Propensity Score Methods for Bias Adjustment in Non linear Models

Download or read book Instrumental Variable and Propensity Score Methods for Bias Adjustment in Non linear Models written by Fei Wan and published by . This book was released on 2015 with total page 208 pages. Available in PDF, EPUB and Kindle. Book excerpt: Unmeasured confounding is a common concern when clinical and health services researchers attempt to estimate a treatment effect using observational data or randomized studies with imperfect compliance. To address this concern, instrumental variable (IV) methods, such as two-stage predictor substitution (2SPS) and two-stage residual inclusion (2SRI), have been widely adopted. In many clinical studies of binary and survival outcomes, 2SRI has been accepted as the method of choice over 2SPS, but a compelling theoretical rationale has not been postulated. First, we directly compare the bias in the causal hazard ratio estimated by these two IV methods. Under the potential outcome and principal stratification framework, we derive closed-form solutions for the asymptotic bias in estimating the causal hazard ratio among compliers for both the 2SPS and 2SRI methods, assuming survival time follows the Weibull distribution with random censoring. When there is no unmeasured confounding and no always-takers, our analytic results show that 2SRI is generally asymptotically unbiased, but 2SPS is not. However, when there is substantial unmeasured confounding, 2SPS performs better than 2SRI with respect to bias under certain scenarios. We use extensive simulation studies to confirm the analytic results from our closed-form solutions. We apply these two methods to prostate cancer treatment data from SEER-Medicare and compare the 2SRI and 2SPS estimates to results from two published randomized trials. Next, we propose a novel two-stage structural modeling framework for understanding the bias in estimating the conditional treatment effect with 2SPS and 2SRI when the outcome is binary, count, or time-to-event.
Under this framework, we demonstrate that the bias in the 2SPS and 2SRI estimators can be reframed to mirror the problem of omitted variables in non-linear models. We demonstrate that 2SRI estimates are generally unbiased for logit and Cox models only when the influence of the unmeasured covariates on the treatment is proportional to their effect on the outcome. We also propose a novel dissimilarity metric to quantify the difference in these effects and demonstrate that, with increasing dissimilarity, the bias of 2SRI increases in magnitude. We investigate these methods using simulation studies and data from an observational study of perinatal care for premature infants. Last, we extend Heller and Venkatraman's covariate-adjusted conditional log-rank test by using the propensity score method. We introduce the propensity score to balance the distribution of covariates among treatment groups and to reduce the dimensionality of covariates to fit the conditional log-rank test. We perform simulations to assess the performance of this new method and of the covariate-adjusted Cox model and score test.
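
The 2SPS and 2SRI constructions can be sketched in a linear setting, where the two coincide; the dissertation's point is precisely that they diverge in non-linear (logit, Cox) outcome models. A minimal illustration with a simulated instrument and an unmeasured confounder (all coefficients illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
z = rng.normal(size=n)                    # instrument
u = rng.normal(size=n)                    # unmeasured confounder
d = 0.5 * z + u + rng.normal(size=n)      # treatment, confounded by u
y = 1.0 * d + u + rng.normal(size=n)      # true effect of d on y is 1

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

one = np.ones(n)
b_naive = float(ols(np.column_stack([one, d]), y)[1])      # biased upward by u

# first stage: regress treatment on the instrument
d_hat = np.column_stack([one, z]) @ ols(np.column_stack([one, z]), d)
res = d - d_hat

b_2sps = float(ols(np.column_stack([one, d_hat]), y)[1])   # predictor substitution
b_2sri = float(ols(np.column_stack([one, d, res]), y)[1])  # residual inclusion
print(b_naive, b_2sps, b_2sri)   # the two IV fits agree here; the naive fit does not
```

In a linear model the two second stages yield the same treatment coefficient, so any disagreement between 2SPS and 2SRI in practice is a symptom of the non-linearity the dissertation analyzes.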

Book Causal Modelling of Survival Data with Informative Noncompliance

Download or read book Causal Modelling of Survival Data with Informative Noncompliance written by Lang'O Taabu Odondi and published by . This book was released on 2011 with total page 278 pages. Available in PDF, EPUB and Kindle. Book excerpt: Noncompliance with treatment allocation is likely to complicate estimation of causal effects in clinical trials. The ubiquitous nonrandom phenomenon of noncompliance renders per-protocol and as-treated analyses, or even simple regression adjustments for noncompliance, inadequate for causal inference. For survival data, several specialist methods have been developed for when noncompliance is related to risk. The Causal Accelerated Life Model (CALM) allows time-dependent departures from randomized treatment in either arm and relates each observed event time to a potential event time that would have been observed if the control treatment had been given throughout the trial. Alternatively, the structural Proportional Hazards (C-Prophet) model accounts for all-or-nothing noncompliance in the treatment arm only, while the CHARM estimator allows time-dependent departures from randomized treatment by considering the survival outcome as a sequence of binary outcomes to provide an 'approximate' overall hazard ratio estimate which is adjusted for compliance. The problem of efficacy estimation is compounded for trials with two active treatments (additional noncompliance), where the ITT estimate provides a biased estimator for the true hazard ratio even under the homogeneous treatment effects assumption. Using plausible arm-specific predictors of compliance, principal stratification methods can be applied to obtain principal effects for each stratum. The present work applies the above methods to data from the Esprit trial, which was conducted to ascertain whether or not unopposed oestrogen (hormone replacement therapy - HRT) reduced the risk of further cardiac events in postmenopausal women who survive a first myocardial infarction.
We use statistically designed simulation studies to evaluate the performance of these methods in terms of bias and 95% confidence interval coverage. We also apply a principal stratification method, originally developed for binary data, to adjust for noncompliance in both treatment arms in a survival analysis in terms of the causal risk ratio. In a Bayesian framework, we apply the method to the Esprit data to account for noncompliance in both treatment arms and estimate principal effects. We apply statistically designed simulation studies to evaluate the performance of the method in terms of bias in the causal effect estimates for each stratum. ITT analysis of the Esprit data showed the effects of taking HRT tablets were not statistically significantly different from placebo for both the all-cause mortality and myocardial reinfarction outcomes. The average compliance rate for HRT treatment was 43%, and the compliance rate decreased as the study progressed. The CHARM and C-Prophet methods produced similar results, but CALM performed best for Esprit, suggesting HRT would reduce the risk of death by 50%. Simulation studies comparing the methods suggested that while both the C-Prophet and CHARM methods performed equally well in terms of bias, the CALM method performed best in terms of both bias and 95% confidence interval coverage, albeit with the largest RMSE. The principal stratification method failed for the Esprit study, possibly due to the strong distributional assumption implicit in the method and the lack of adequate compliance information in the data, which produced large 95% credible intervals for the principal effect estimates. For a moderate value of the sensitivity parameter, principal stratification results suggested compliance with HRT tablets relative to placebo would reduce the risk of mortality by 43% among the most compliant.
Simulation studies on the performance of this method showed narrower mean 95% credible intervals for the causal risk ratio estimates in this subgroup compared to other strata. However, the results were sensitive to the unknown sensitivity parameter.

Book The Role of Propensity Score in Estimating Dose response Functions

Download or read book The Role of Propensity Score in Estimating Dose response Functions written by Guido Imbens and published by . This book was released on 1999 with total page 36 pages. Available in PDF, EPUB and Kindle. Book excerpt: Estimation of average treatment effects in observational, or non-experimental, studies requires adjustment for differences in pre-treatment variables. If the number of pre-treatment variables is large, and their distribution varies substantially with treatment status, standard adjustment methods such as covariance adjustment are often inadequate. Rosenbaum and Rubin (1983) propose an alternative method for adjusting for pre-treatment variables based on the propensity score, the conditional probability of receiving the treatment given pre-treatment variables. They demonstrate that adjusting solely for the propensity score removes all the bias associated with differences in pre-treatment variables between treatment and control groups. The Rosenbaum-Rubin proposals deal exclusively with the case where treatment takes on only two values. In this paper an extension of this methodology is proposed that allows for estimation of average causal effects with multi-valued treatments while maintaining the advantages of the propensity score approach.
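
The multi-valued extension replaces the binary propensity score with a generalized score e_k(x) = P(T = k | X = x) for each treatment level k. A small sketch with a discrete covariate, so the scores can be estimated by simple cell frequencies (the three-level design and all probabilities below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20000
x = rng.binomial(1, 0.5, n)                          # binary pre-treatment covariate
# three treatment levels with assignment probabilities depending on x
probs = np.where(x[:, None] == 1, [0.6, 0.3, 0.1], [0.2, 0.3, 0.5])
u = rng.random(n)
t = (u[:, None] > np.cumsum(probs, axis=1)[:, :2]).sum(axis=1)
y = 1.0 * t + 2.0 * x + rng.normal(size=n)           # E[Y(k)] = k + 1 since E[x] = 0.5

# generalized propensity score e_k(x) = P(T = k | X = x), by cell frequencies
e = np.zeros(n)
for xv in (0, 1):
    m = x == xv
    for k in range(3):
        e[m & (t == k)] = np.mean(t[m] == k)

# inverse-probability estimate of each potential-outcome mean E[Y(k)]
mus = [float(np.mean((t == k) * y / e)) for k in range(3)]
print(mus)   # expected to be close to [1, 2, 3]
```

Contrasts of the weighted means (e.g. mus[2] - mus[0]) then estimate average causal effects between any two treatment levels.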

Book Targeted Maximum Likelihood Estimation of Treatment Effects in Randomized Controlled Trials and Drug Safety Analysis

Download or read book Targeted Maximum Likelihood Estimation of Treatment Effects in Randomized Controlled Trials and Drug Safety Analysis written by Kelly Moore and published by . This book was released on 2009 with total page 238 pages. Available in PDF, EPUB and Kindle. Book excerpt: In most randomized controlled trials (RCTs), investigators typically rely on estimators of causal effects that do not exploit the information in the many baseline covariates that are routinely collected in addition to treatment and the outcome. Ignoring these covariates can lead to a significant loss in estimation efficiency and thus power. Statisticians have underscored the gain in efficiency that can be achieved from covariate adjustment in RCTs, with a focus on problems involving linear models. Despite recent theoretical advances, there has been a reluctance to adjust for covariates, based on two primary reasons: (1) covariate-adjusted estimates based on non-linear regression models have been shown to be less precise than unadjusted methods, and (2) concern over the opportunity to manipulate the model selection process for covariate adjustment in order to obtain favorable results. This dissertation describes statistical approaches for covariate adjustment in RCTs using targeted maximum likelihood methodology for estimation of causal effects with binary and right-censored survival outcomes. Chapter 2 provides the targeted maximum likelihood approach to covariate adjustment in RCTs with binary outcomes, focusing on the estimation of the risk difference, relative risk and odds ratio. In such trials, investigators generally rely on the unadjusted estimate, as the literature indicates that covariate-adjusted estimates based on logistic regression models are less efficient. The crucial step that has been missing when adjusting for covariates is that one must integrate/average the adjusted estimate over those covariates in order to obtain the population-level effect.
Chapter 2 shows that covariate adjustment in RCTs using logistic regression models can be mapped, by averaging over the covariate(s), to obtain a fully robust and efficient estimator of the marginal effect, which equals a targeted maximum likelihood estimator. Simulation studies are provided that demonstrate that this targeted maximum likelihood method increases efficiency and power over the unadjusted method, particularly for smaller sample sizes, even when the regression model is misspecified. Chapter 3 applies the methodology presented in Chapter 2 to a sampled RCT dataset with a binary outcome to further explore the origin of the gains in efficiency and provide a criterion for determining whether a gain in efficiency can be achieved with covariate adjustment over the unadjusted method. This chapter demonstrates through simulation studies and the data analysis that not only is the relation between R^2 and efficiency gain important, but also the presence of empirical confounding. Based on the results of these studies, a complete strategy for analyzing these types of data is formalized that provides a robust method for covariate adjustment while protecting investigators from misuse of these methods for obtaining favorable inference. Chapters 4 and 5 focus on estimation of causal effects with right-censored survival outcomes. Time-to-event outcomes are naturally subject to right-censoring due to early patient withdrawals. In Chapter 4, the targeted maximum likelihood methodology is applied to the estimation of treatment-specific survival at a fixed end point in time. In Chapter 5, the same methodology is applied to provide a competitor to the logrank test. The proposed covariate-adjusted estimators, under no or uninformative censoring, do not require any additional parametric modeling assumptions, and under informative censoring, are consistent under consistent estimation of the censoring mechanism or the conditional hazard for survival.
These targeted maximum likelihood estimators have two important advantages over the Kaplan-Meier and logrank approaches: (1) they exploit covariates to improve efficiency, and (2) they are consistent in the presence of informative censoring. These properties are demonstrated through simulation studies. Chapter 6 concludes with a summary of the preceding chapters and a discussion of future research directions.
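
The core of targeted maximum likelihood estimation for a binary outcome is a one-dimensional fluctuation of an initial outcome regression along a "clever covariate" built from the treatment mechanism. A condensed sketch for the ATE in a randomized trial, assuming a correctly specified working model and known g = P(A=1) = 0.5 (all of which are simplifying assumptions for illustration, not the dissertation's general setup):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000
w = rng.normal(size=n)                       # baseline covariate
a = rng.binomial(1, 0.5, n)                  # randomized treatment, g = P(A=1) = 0.5
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + a + w))))

def fit_logistic(X, y, offset=0.0, iters=30):
    """Logistic regression by Newton's method, optionally with a fixed offset."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(X @ b + offset)))
        H = X.T @ (X * (p * (1 - p))[:, None]) + 1e-9 * np.eye(X.shape[1])
        b += np.linalg.solve(H, X.T @ (y - p))
    return b

# step 1: initial outcome regression Q(A, W)
X = np.column_stack([np.ones(n), a, w])
b = fit_logistic(X, y)

# step 2: fluctuate along the clever covariate H(A) = A/g - (1-A)/(1-g)
g = 0.5
Hc = a / g - (1 - a) / (1 - g)
eps = fit_logistic(Hc[:, None], y, offset=X @ b)[0]

# step 3: plug in the targeted fit at A=1 and A=0, then average over the sample
def q_star(a_val):
    Xa = np.column_stack([np.ones(n), np.full(n, float(a_val)), w])
    Ha = a_val / g - (1 - a_val) / (1 - g)
    return 1 / (1 + np.exp(-(Xa @ b + eps * Ha)))

ate_tmle = float(np.mean(q_star(1) - q_star(0)))
true_ate = float(np.mean(1 / (1 + np.exp(-(0.5 + w))) - 1 / (1 + np.exp(-(-0.5 + w)))))
print(ate_tmle, true_ate)
```

The averaging in step 3 is exactly the "integrate/average the adjusted estimate over the covariates" step that the excerpt identifies as the missing ingredient in conventional covariate adjustment.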

Book Performance of the Propensity Score Methods Using Random Forest and Logistic Regression Approaches on the Treatment Effect Estimation in Observational Study

Download or read book Performance of the Propensity Score Methods Using Random Forest and Logistic Regression Approaches on the Treatment Effect Estimation in Observational Study written by and published by . This book was released on 2017 with total page 35 pages. Available in PDF, EPUB and Kindle. Book excerpt: The propensity score (PS) is the probability of a subject receiving the treatment given the baseline covariates. People with the same propensity score tend to have the same distribution of covariates. Thus, propensity score methods can be used to eliminate systematic differences between the treatment and control groups, thereby improving causal inference in observational studies. In this project, a series of simulation studies is conducted to evaluate two widely used propensity score methods, matching and inverse probability of treatment weighting (IPTW), on their relative ability to estimate the treatment effect from non-randomized trials. One observes that random forest based propensity score weighting can yield more promising treatment effect estimates than other PS methods. Beyond that, simulated samples are also used to compare the performance of several matching methods in balancing the covariates. It turns out that logistic regression based propensity score matching can reduce most of the systematic differences between the treatment and control groups, although it is not the top performer in causal effect estimation. Finally, we illustrate the application of the propensity score methods discussed in the paper with an empirical example.
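
A standard way to judge the balancing performance the excerpt describes is the standardized mean difference (SMD) of each covariate before and after adjustment. A sketch using IPTW with the true propensity score for clarity; in practice the score would be estimated, e.g. by logistic regression or random forest as in the project:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 20000
x = rng.normal(size=n)
ps_true = 1 / (1 + np.exp(-1.2 * x))        # true propensity score (illustrative)
t = rng.binomial(1, ps_true)

def smd(x, t, w=None):
    """Standardized mean difference of x between treatment groups (optionally weighted)."""
    w = np.ones_like(x) if w is None else w
    m1 = np.average(x[t == 1], weights=w[t == 1])
    m0 = np.average(x[t == 0], weights=w[t == 0])
    v1 = np.average((x[t == 1] - m1) ** 2, weights=w[t == 1])
    v0 = np.average((x[t == 0] - m0) ** 2, weights=w[t == 0])
    return (m1 - m0) / np.sqrt((v1 + v0) / 2)

# IPTW weights from the propensity score
w_iptw = t / ps_true + (1 - t) / (1 - ps_true)

smd_before = float(smd(x, t))
smd_after = float(smd(x, t, w_iptw))
print(smd_before, smd_after)   # weighting should shrink the imbalance toward 0
```

An SMD below roughly 0.1 is a common rule of thumb for acceptable balance, which is the yardstick such simulation comparisons typically report.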