EBookClubs

Read Books & Download eBooks Full Online

Book Bayesian Variable Selection and Estimation

Download or read book Bayesian Variable Selection and Estimation written by Xiaofan Xu and published by . This book was released on 2014 with total page 76 pages. Available in PDF, EPUB and Kindle. Book excerpt: The paper considers the classical Bayesian variable selection problem and an important subproblem in which grouping information of predictors is available. We propose the Half Thresholding (HT) estimator for simultaneous variable selection and estimation with shrinkage priors. Under an orthogonal design matrix, the variable selection consistency and asymptotic distribution of the HT estimator are investigated, and the oracle property is established with Three Parameter Beta Mixture of Normals (TPBN) priors. We then revisit the Bayesian group lasso and use spike and slab priors for variable selection at the group level. In the process, the connection of our model with penalized regression is demonstrated, and the role of the posterior median for thresholding is pointed out. We show that the posterior median estimator has the oracle property for group variable selection and estimation under an orthogonal design, while the group lasso has a suboptimal asymptotic estimation rate when variable selection consistency is achieved. Next we consider the Bayesian sparse group lasso, again with spike and slab priors, to select variables both at the group level and within groups, and develop the necessary algorithm for its implementation. We demonstrate via simulation that the posterior median estimator of our spike and slab models has excellent performance for both variable selection and estimation.
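For orientation (not taken from the thesis itself), a generic point-mass spike and slab prior at the group level, of the kind referenced in this excerpt, can be written as follows; the hyperparameters pi and tau and the group sizes m_g are illustrative placeholders rather than the author's exact specification:

    \boldsymbol{\beta}_g \mid \gamma_g \sim (1-\gamma_g)\,\delta_{\mathbf{0}} + \gamma_g\, N(\mathbf{0},\ \tau^2 I_{m_g}), \qquad \gamma_g \sim \mathrm{Bernoulli}(\pi), \qquad g = 1, \dots, G,

where beta_g is the coefficient block for group g; the point mass delta_0 removes entire groups, and the posterior median of beta_g then acts as a thresholding rule.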

Book Handbook of Bayesian Variable Selection

Download or read book Handbook of Bayesian Variable Selection written by Mahlet G. Tadesse and published by CRC Press. This book was released on 2021-12-24 with total page 762 pages. Available in PDF, EPUB and Kindle. Book excerpt: Bayesian variable selection has experienced substantial developments over the past 30 years with the proliferation of large data sets. Identifying relevant variables to include in a model allows simpler interpretation, avoids overfitting and multicollinearity, and can provide insights into the mechanisms underlying an observed phenomenon. Variable selection is especially important when the number of potential predictors is substantially larger than the sample size and sparsity can reasonably be assumed. The Handbook of Bayesian Variable Selection provides a comprehensive review of theoretical, methodological and computational aspects of Bayesian methods for variable selection. The topics covered include spike-and-slab priors, continuous shrinkage priors, Bayes factors, Bayesian model averaging, partitioning methods, as well as variable selection in decision trees and edge selection in graphical models. The handbook targets graduate students and established researchers who seek to understand the latest developments in the field. It also provides a valuable reference for all those interested in applying existing methods and/or pursuing methodological extensions. Features: provides a comprehensive review of methods and applications of Bayesian variable selection; is divided into four parts (Spike-and-Slab Priors; Continuous Shrinkage Priors; Extensions to Various Modeling; Other Approaches to Bayesian Variable Selection); covers theoretical and methodological aspects, as well as worked-out examples with R code provided in the online supplement; includes contributions by experts in the field; and is supported by a website with code, data, and other supplementary material.

Book Monte Carlo Methods in Bayesian Computation

Download or read book Monte Carlo Methods in Bayesian Computation written by Ming-Hui Chen and published by Springer Science & Business Media. This book was released on 2012-12-06 with total page 399 pages. Available in PDF, EPUB and Kindle. Book excerpt: Dealing with methods for sampling from posterior distributions and how to compute posterior quantities of interest using Markov chain Monte Carlo (MCMC) samples, this book addresses such topics as improving simulation accuracy, marginal posterior density estimation, estimation of normalizing constants, constrained parameter problems, highest posterior density interval calculations, computation of posterior modes, and posterior computations for proportional hazards models and Dirichlet process models. The authors also discuss model comparisons, including both nested and non-nested models, marginal likelihood methods, ratios of normalizing constants, Bayes factors, the Savage-Dickey density ratio, Stochastic Search Variable Selection, Bayesian Model Averaging, the reversible jump algorithm, and model adequacy using predictive and latent residual approaches. The book presents an equal mixture of theory and applications involving real data, and is intended as a graduate textbook or a reference book for a one-semester course at the advanced master's or Ph.D. level. It will also serve as a useful reference for applied or theoretical researchers as well as practitioners.
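As a point of reference (not code from the book), a minimal random-walk Metropolis-Hastings sampler for a generic log-posterior might look like the sketch below; the function name log_post, the step size, and the Gaussian example target are hypothetical placeholders:

    import numpy as np

    def metropolis_hastings(log_post, x0, n_iter=5000, step=0.5, seed=0):
        """Random-walk Metropolis-Hastings targeting exp(log_post)."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        samples = np.empty((n_iter, x.size))
        lp = log_post(x)
        for i in range(n_iter):
            prop = x + step * rng.standard_normal(x.size)  # symmetric proposal
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:       # accept/reject step
                x, lp = prop, lp_prop
            samples[i] = x
        return samples

    # Example: draws from a standard normal "posterior"
    draws = metropolis_hastings(lambda x: -0.5 * np.sum(x**2), x0=[0.0])

Posterior quantities of interest (means, highest posterior density intervals, and so on) are then computed from the stored draws.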

Book Nonparametric Regression Using Bayesian Variable Selection

Download or read book Nonparametric Regression Using Bayesian Variable Selection written by Michael Smith and published by . This book was released on 1994 with total page 29 pages. Available in PDF, EPUB and Kindle. Book excerpt:

Book Economic Analysis of the Digital Economy

Download or read book Economic Analysis of the Digital Economy written by Avi Goldfarb and published by University of Chicago Press. This book was released on 2015-05-08 with total page 510 pages. Available in PDF, EPUB and Kindle. Book excerpt: There is a small and growing literature that explores the impact of digitization in a variety of contexts, but its economic consequences, surprisingly, remain poorly understood. This volume aims to set the agenda for research in the economics of digitization, with each chapter identifying a promising area of research. "Economics of Digitization" identifies urgent topics with research already underway that warrant further exploration from economists. In addition to the growing importance of digitization itself, digital technologies have some features that suggest that many well-studied economic models may not apply; indeed, many aspects of the digital economy throw normal economics for a loop. "Economics of Digitization" will be one of the first to focus on the economic implications of digitization and to bring together leading scholars in the economics of digitization to explore emerging research.

Book Variable Selection in High Dimensional Complex Data and Bayesian Estimation of Reduction Subspace

Download or read book Variable Selection in High Dimensional Complex Data and Bayesian Estimation of Reduction Subspace written by Moumita Karmakar and published by . This book was released on 2015 with total page 200 pages. Available in PDF, EPUB and Kindle. Book excerpt: Nowadays researchers are collecting large amounts of data for which the number of predictors p is often too large to allow a thorough graphical visualization of the data for regression modeling. Commonly, regression data are collected jointly on (Y, X), where X = (X1, ⋯, Xp)^T is a random p-dimensional predictor and Y is a univariate response. In a high-dimensional setup, frequently encountered problems for variable selection or estimation in regression analyses are (i) a nonlinear relationship between predictors and response, (ii) a number of predictors much larger than the sample size, and (iii) the presence of sparsity.

Book Bayesian Variable Selection and Functional Data Analysis

Download or read book Bayesian Variable Selection and Functional Data Analysis written by Asish Kumar Banik and published by . This book was released on 2019 with total page 157 pages. Available in PDF, EPUB and Kindle. Book excerpt: High-dimensional statistics is one of the most studied topics in the field of statistics. The most interesting problem to arise in the last 15 years is variable selection or subset selection. Variable selection is a strong statistical tool that can be explored in functional data analysis. In the first part of this thesis, we implement a Bayesian variable selection method for automatic knot selection. We propose a spike-and-slab prior on knots and formulate a conjugate stochastic search variable selection for significant knots. The computation is substantially faster than existing knot selection methods, as we use Metropolis-Hastings algorithms and a Gibbs sampler for estimation. This work focuses on a single nonlinear covariate, modeled as regression splines. In the next stage, we study Bayesian variable selection in additive models with high-dimensional predictors. The selection of nonlinear functions in models is highly important in recent research, and the Bayesian method of selection has more advantages than contemporary frequentist methods. Chapter 2 examines Bayesian sparse group lasso theory based on spike-and-slab priors to determine its applicability for variable selection and function estimation in nonparametric additive models. The primary objective of Chapter 3 is to build a classification method using longitudinal volumetric magnetic resonance imaging (MRI) data from five regions of interest (ROIs). A functional data analysis method is used to handle the longitudinal measurement of ROIs, and the functional coefficients are later used in the classification models. We propose a Pólya-gamma augmentation method to classify normal controls and diseased patients based on functional MRI measurements. We obtain fast posterior sampling by avoiding the slow and complicated Metropolis-Hastings algorithm. Our main motivation is to determine the important ROIs that have the highest separating power to classify our dichotomous response. We compare the sensitivity, specificity, and accuracy of the classification based on single ROIs and with various combinations of them. We obtain a sensitivity of over 85% and a specificity of around 90% for most of the combinations. Next, we work with Bayesian classification and selection methodology. The main goal of Chapter 4 is to employ longitudinal trajectories in a significant number of sub-regional brain volumetric MRI data as statistical predictors for Alzheimer's disease (AD) classification. We use logistic regression in a Bayesian framework that includes many functional predictors. The direct sampling of regression coefficients from the Bayesian logistic model is difficult due to its complicated likelihood function. In high-dimensional scenarios, the selection of predictors is paramount with the introduction of either spike-and-slab priors, non-local priors, or Horseshoe priors. We seek to avoid the complicated Metropolis-Hastings approach and to develop an easily implementable Gibbs sampler. In addition, the Bayesian estimation provides proper estimates of the model parameters, which are also useful for building inference. Another advantage of working with logistic regression is that it calculates the log of odds of relative risk for AD compared to normal controls based on the selected longitudinal predictors, rather than simply classifying patients based on cross-sectional estimates. Ultimately, however, we combine approaches and use a probability threshold to classify individual patients. We employ 49 functional predictors consisting of volumetric estimates of brain sub-regions, chosen for their established clinical significance. Moreover, the use of spike-and-slab priors ensures that many redundant predictors are dropped from the model. Finally, we present a new approach to Bayesian model-based clustering for spatiotemporal data in Chapter 5. A simple linear mixed model (LME) derived from a functional model is used to model spatiotemporal cerebral white matter data extracted from healthy aging individuals. The LME provides us with prior information for the spatial covariance structure and brain segmentation based on white matter intensity. This motivates us to build stochastic model-based clustering to group voxels considering their longitudinal and location information. The cluster-specific random effect induces correlation among repeated measures. The problem of finding partitions is dealt with by imposing prior structure on cluster partitions in order to derive a stochastic objective function.
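For context, and as a standard identity rather than a derivation from the thesis, Pólya-gamma augmentation rewrites the logistic likelihood as a Gaussian scale mixture, which is what allows a Gibbs sampler to replace Metropolis-Hastings; here psi denotes the linear predictor and PG the Pólya-gamma distribution:

    \frac{(e^{\psi})^{a}}{(1+e^{\psi})^{b}} = 2^{-b}\, e^{\kappa\psi} \int_{0}^{\infty} e^{-\omega\psi^{2}/2}\, p(\omega)\, d\omega, \qquad \kappa = a - b/2, \quad \omega \sim \mathrm{PG}(b, 0).

Conditional on the latent omega, the likelihood is Gaussian in the regression coefficients, so a normal (or spike-and-slab) prior yields closed-form full conditionals.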

Book Bayesian Approaches to Parameter Estimation and Variable Selection for Misclassified Binary Data

Download or read book Bayesian Approaches to Parameter Estimation and Variable Selection for Misclassified Binary Data written by Daniel Beavers and published by . This book was released on 2009 with total page 109 pages. Available in PDF, EPUB and Kindle. Book excerpt: Binary misclassification is a common occurrence in statistical studies that, when ignored, induces bias in parameter estimates. The development of statistical methods to adjust for misclassification is necessary to allow for consistent estimation of parameters. In this work, we develop a Bayesian framework for adjusting statistical models when fallible data collection methods produce misclassification of binary observations. In Chapter 2, we develop an approach for Bayesian variable selection for logistic regression models in which there exists a misclassified binary covariate. In this case, we require a subsample of gold standard validation data to estimate the sensitivity and specificity of the fallible classifier. In Chapter 3, we propose a Bayesian approach for the estimation of the population prevalence of a biomarker in repeated diagnostic testing studies. In such situations, it is necessary to account for interindividual variability, which we achieve through both the inclusion of random effects within logistic regression models and Bayesian hierarchical modeling. Our examples focus on applications for both reliability studies and biostatistical studies. Finally, in Chapter 4, we develop an approach to attempt to detect conditional dependence parameters between two fallible diagnostic tests for a binary logistic regression covariate in the absence of a gold standard test. We compare the performance of the proposed procedure to previously published means of assessing model fit.
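As background, and as a standard relationship rather than the author's exact model, when a binary outcome with true prevalence pi is observed through a fallible classifier with sensitivity Se and specificity Sp, the apparent prevalence is

    \pi^{*} = \mathrm{Se}\,\pi + (1 - \mathrm{Sp})(1 - \pi),

which is why validation data or informative priors on Se and Sp are needed to recover pi from the observed data.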

Book Jointness in Bayesian Variable Selection with Applications to Growth Regression

Download or read book Jointness in Bayesian Variable Selection with Applications to Growth Regression written by and published by World Bank Publications. This book was released on with total page 17 pages. Available in PDF, EPUB and Kindle. Book excerpt:

Book A Two-stage Bayesian Variable Selection Method with the Extension of Lasso for Geo-referenced Count Data

Download or read book A Two-stage Bayesian Variable Selection Method with the Extension of Lasso for Geo-referenced Count Data written by Yuqian Shen and published by . This book was released on 2019 with total page 59 pages. Available in PDF, EPUB and Kindle. Book excerpt: Due to the complex nature of geo-referenced data, multicollinearity of the risk factors in public health spatial studies is a commonly encountered issue, which leads to low parameter estimation accuracy because it inflates the variance in the regression analysis. To address this issue, we proposed a two-stage variable selection method by extending the least absolute shrinkage and selection operator (Lasso) to the Bayesian spatial setting, investigating the impact of risk factors on health outcomes. Specifically, in stage I, we performed the variable selection using the Bayesian Lasso and several other variable selection approaches. Then, in stage II, we performed the model selection with only the selected variables from stage I and again compared the methods. To evaluate the performance of the two-stage variable selection methods, we conducted a simulation study with different distributions for the risk factors, using geo-referenced count data as the outcome and Michigan as the research region. We considered the cases when all candidate risk factors are independently normally distributed or follow a multivariate normal distribution with different correlation levels. Two other Bayesian variable selection methods, the binary indicator and the combination of binary indicator and Lasso, are considered and compared as alternative methods. The simulation results indicate that the proposed two-stage Bayesian Lasso variable selection method has the best performance for both the independent and dependent cases considered. When compared with the one-stage approach and the other two alternative methods, the two-stage Bayesian Lasso approach provides the highest estimation accuracy in all scenarios considered.
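For reference, the standard Bayesian Lasso hierarchy of Park and Casella (rather than the specific spatial extension studied here) obtains the Laplace penalty of the Lasso as a scale mixture of normals:

    \beta_j \mid \sigma^2, \tau_j^2 \sim N(0, \sigma^2\tau_j^2), \qquad \tau_j^2 \sim \mathrm{Exp}(\lambda^2/2), \qquad j = 1, \dots, p,

so that, marginally, each beta_j has a Laplace (double-exponential) prior and the posterior mode coincides with the Lasso estimate for a given lambda.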

Book Bayesian Variable Selection with Applications to Neuroimaging Data

Download or read book Bayesian Variable Selection with Applications to Neuroimaging Data written by Shariq Mohammed and published by . This book was released on 2018 with total page 135 pages. Available in PDF, EPUB and Kindle. Book excerpt: In this dissertation, we discuss Bayesian modeling approaches for identifying brain regions that respond to a certain stimulus and use them to classify subjects. We specifically deal with multi-subject electroencephalography (EEG) data where the responses are binary and the covariates are matrices, with measurements taken for each subject at different locations across multiple time points. EEG data has a complex structure with both spatial and temporal attributes to it. We use a divide and conquer strategy to build multiple local models, that is, one model at each time point separately, both to avoid the curse of dimensionality and to achieve computational feasibility. Within each local model, we use Bayesian variable selection approaches to identify the locations which respond to a stimulus. We use a continuous spike and slab prior, which has inherent variable selection properties. We initially demonstrate the local Bayesian modeling approach, which is computationally inexpensive, as the estimation for each local model can be conducted in parallel. We use MCMC sampling procedures for parameter estimation. We also discuss a two-stage variable selection approach based on thresholding using the complexity parameter built within the model. A prediction strategy is built utilizing the temporal structure between local models. The spatial correlation is incorporated within the local Bayesian modeling to improve the inference. The temporal characteristic of the data is incorporated through the prior structure by learning from the local models estimated at previous time points. Variable selection is done via clustering of the locations based on their activation time. We then use a weighted prediction strategy to pool information from the local spatial models to make a final prediction. Since the EEG data has both spatial and temporal correlations acting simultaneously, we enrich our local Bayesian modeling by incorporating both correlations through a Kronecker product of the spatial and temporal correlation structures. We develop a highly scalable estimation approach to deal with the very large number of parameters in the model. We demonstrate the efficiency of estimation using the scalable algorithm by performing simulation studies. We also study the performance of these models through a case study on multi-subject EEG data.
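A minimal sketch (not code from the dissertation) of how a separable spatio-temporal correlation matrix is assembled as a Kronecker product; the exponential correlation function, the one-dimensional electrode positions, and the length-scales are assumptions for illustration only:

    import numpy as np

    def exp_corr(points, length_scale):
        """Exponential correlation matrix from pairwise distances."""
        d = np.abs(points[:, None] - points[None, :])
        return np.exp(-d / length_scale)

    space = np.linspace(0.0, 1.0, 8)    # hypothetical electrode positions
    time = np.linspace(0.0, 1.0, 10)    # hypothetical time grid

    K_space = exp_corr(space, length_scale=0.3)   # 8 x 8 spatial correlation
    K_time = exp_corr(time, length_scale=0.2)     # 10 x 10 temporal correlation

    # separable spatio-temporal correlation: an 80 x 80 Kronecker product
    K = np.kron(K_space, K_time)

Working with the two small factors (and their inverses) rather than the full Kronecker product is what keeps such models scalable.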

Book Bayesian Variable Selection for Non-Gaussian Data Using Global-Local Shrinkage Priors and the Multivariate Logit-Beta Distribution

Download or read book Bayesian Variable Selection for Non-Gaussian Data Using Global-Local Shrinkage Priors and the Multivariate Logit-Beta Distribution written by Hongyu Wu and published by . This book was released on 2022 with total page 0 pages. Available in PDF, EPUB and Kindle. Book excerpt: Variable selection has become an important and growing problem in Bayesian analysis. The literature on Bayesian variable selection methods tends to be applied to a single response-type, and more typically a continuous response-type, where it is assumed that the data are Gaussian/symmetric. In this dissertation, we develop a novel global-local shrinkage prior for non-symmetric settings and multiple response-type settings by combining the perspectives of global-local shrinkage and the conjugate multivariate distribution. In Chapter 2, we focus on the problem of variable selection when the data are possibly non-symmetric and continuous-valued. We propose modeling continuous-valued data and the coefficient vector with the multivariate logit-beta (MLB) distribution. To perform variable selection in a Bayesian context we make use of global-local shrinkage priors to enforce sparsity. Specifically, they can be defined as a Gaussian scale mixture of a global shrinkage parameter and a local shrinkage parameter for a regression coefficient. We provide a technical discussion that illustrates that our use of the multivariate logit-beta distribution under a Pólya-Gamma augmentation scheme has an explicit connection to a well-known global-local shrinkage method (id est, the horseshoe prior) and extends it to possibly non-symmetric data. Moreover, our method can be implemented using an efficient block Gibbs sampler. Evidence of improvements in terms of mean squared error and variable selection as compared to the standard implementation of the horseshoe prior for skewed data settings is provided in simulated and real data examples. In Chapter 3, we direct our attention to the canonical variable selection problem in multiple response-type settings, where the observed dataset consists of multiple response-types (e.g., continuous, count-valued, Bernoulli trials, et cetera). We propose the same global-local shrinkage prior as in Chapter 2, but for multiple response-type datasets. The implementation of our Bayesian variable selection method for such data types is straightforward given the fact that the multivariate logit-beta prior is the conjugate prior for several members of the natural exponential family of distributions, which leads to the binomial/beta and negative binomial/beta hierarchical models. Our proposed model not only allows the estimation and selection of independent regression coefficients, but also of shared regression coefficients across response-types, which can be used to explicitly model dependence in spatial and time-series settings. An efficient block Gibbs sampler is developed, which is found to be effective in obtaining accurate estimates and variable selection results in simulation studies and an analysis of public health and financial costs from natural disasters in the U.S.
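For context, the standard horseshoe prior (the well-known global-local shrinkage method the excerpt refers to, not the multivariate logit-beta extension itself) expresses each coefficient as a Gaussian scale mixture with local scales lambda_j and a global scale tau:

    \beta_j \mid \lambda_j, \tau \sim N(0, \lambda_j^2\tau^2), \qquad \lambda_j \sim C^{+}(0, 1), \qquad \tau \sim C^{+}(0, 1),

where C^{+}(0,1) is the standard half-Cauchy distribution; the heavy-tailed local scales leave large signals nearly unshrunk while the global scale pulls noise coefficients toward zero.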

Book A Bayesian Variable Selection Method with Applications to Spatial Data

Download or read book A Bayesian Variable Selection Method with Applications to Spatial Data written by Xiahan Tang and published by . This book was released on 2017 with total page 94 pages. Available in PDF, EPUB and Kindle. Book excerpt: This thesis first describes the general idea behind Bayesian inference, various sampling methods based on Bayes' theorem, and many examples. Then a Bayesian approach to model selection, called Stochastic Search Variable Selection (SSVS), is discussed. It was originally proposed by George and McCulloch (1993). In a normal regression model where the number of covariates is large, only a small subset tends to be significant most of the time. This Bayesian procedure specifies a mixture prior for each of the unknown regression coefficients; the mixture prior was originally proposed by Geweke (1996). This mixture prior is updated as data become available to generate a posterior distribution that assigns higher posterior probabilities to coefficients that are significant in explaining the response. A spatial modeling method is also described in this thesis. Prior distributions for all unknown parameters and latent variables are specified. Simulation studies under different models have been implemented to test the efficiency of SSVS. A real dataset, taken by choosing a small region from the Cape Floristic Region in South Africa, is used to analyze the plant distribution in that region. The original multi-category response is transformed into a presence/absence (binary) response for simpler analysis. First, SSVS is used on this dataset to select the subset of significant covariates. Then a spatial model is fitted using the chosen covariates and, post-estimation, predictive maps of the posterior probabilities of presence and absence are obtained for the study region. Posterior estimates of the true regression coefficients are also provided, along with a map of the spatial random effects.
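For orientation, the SSVS prior of George and McCulloch (a standard formulation, not necessarily the thesis's exact specification) replaces the point-mass spike with a narrow normal, so each coefficient follows a two-component normal mixture indexed by an inclusion indicator:

    \beta_j \mid \gamma_j \sim (1-\gamma_j)\, N(0, \tau_j^2) + \gamma_j\, N(0, c_j^2\tau_j^2), \qquad \gamma_j \sim \mathrm{Bernoulli}(p_j),

with tau_j small (coefficients effectively negligible) and c_j large, so that the posterior over the indicators gamma concentrates on subsets of important predictors, which the stochastic search explores by Gibbs sampling.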

Book Model Selection

    Book Details:
  • Author : Parthasarathi Lahiri
  • Publisher : IMS
  • Release : 2001
  • ISBN : 9780940600522
  • Pages : 262 pages

Download or read book Model Selection written by Parthasarathi Lahiri and published by IMS. This book was released on 2001 with total page 262 pages. Available in PDF, EPUB and Kindle. Book excerpt:

Book Bayesian Variable Selection Based on Test Statistics

Download or read book Bayesian Variable Selection Based on Test Statistics written by Andrea Malaguerra and published by . This book was released on 2012 with total page 61 pages. Available in PDF, EPUB and Kindle. Book excerpt:

Book Bayesian Variable Selection for High Dimensional Data Analysis

Download or read book Bayesian Variable Selection for High Dimensional Data Analysis written by Yang Aijun and published by LAP Lambert Academic Publishing. This book was released on 2011-09 with total page 92 pages. Available in PDF, EPUB and Kindle. Book excerpt: In the practice of statistical modeling, it is often desirable to have an accurate predictive model. Modern data sets usually have a large number of predictors. Hence, parsimony is an especially important issue. Best-subset selection is a conventional method of variable selection. Due to the large number of variables, the relatively small sample size, and severe collinearity among the variables, standard statistical methods for selecting relevant variables often face difficulties. Bayesian stochastic search variable selection has gained much empirical success in a variety of applications. This book, therefore, proposes a modified Bayesian stochastic search variable selection approach for variable selection and two-/multi-class classification based on a (multinomial) probit regression model. We demonstrate the performance of the approach on many real data sets. The results show that our approach selects a smaller number of relevant variables and obtains competitive classification accuracy.
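For context, the standard data-augmentation device of Albert and Chib, commonly combined with stochastic search for probit models (offered here as background rather than this book's exact algorithm), introduces latent Gaussian variables z_i so the probit likelihood becomes conditionally linear:

    z_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + \varepsilon_i, \quad \varepsilon_i \sim N(0, 1), \qquad y_i = \mathbb{1}(z_i > 0),

so that, given the z_i, the regression coefficients have a conjugate normal full conditional and the search over inclusion indicators proceeds by Gibbs sampling.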