EBookClubs

Read Books & Download eBooks Full Online

Book Bayesian Parameter Estimation and Variable Selection for Quantile Regression

Download or read book Bayesian Parameter Estimation and Variable Selection for Quantile Regression written by Craig Reed. This book was released in 2011. Available in PDF, EPUB and Kindle. Book excerpt:

Book Prior Elicitation and Variable Selection for Bayesian Quantile Regression

Download or read book Prior Elicitation and Variable Selection for Bayesian Quantile Regression written by Rahim Jabbar Thaher Al-Hamzawi. This book was released in 2013. Available in PDF, EPUB and Kindle. Book excerpt: Bayesian subset selection suffers from three important difficulties: assigning priors over model space, assigning priors to all components of the regression coefficient vector given a specific model, and Bayesian computational efficiency (Chen et al., 1999). These difficulties become more challenging in the Bayesian quantile regression framework when one is interested in assigning priors that depend on different quantile levels. The objective of Bayesian quantile regression (BQR), a newly proposed tool, is to deal with unknown parameters and model uncertainty in quantile regression (QR). However, Bayesian subset selection in quantile regression models is usually difficult due to the computational challenges and the non-availability of conjugate prior distributions that depend on the quantile level. These challenges are rarely addressed via either penalised likelihood functions or stochastic search variable selection (SSVS). These methods typically use symmetric prior distributions for regression coefficients, such as the Gaussian and Laplace, which may be suitable for median regression. However, an extreme quantile regression should have different regression coefficients from the median regression, and thus the priors for quantile regression coefficients should depend on the quantile. This thesis focuses on three challenges: assigning standard quantile-dependent prior distributions for the regression coefficients, assigning suitable quantile-dependent priors over model space, and achieving computational efficiency. The first of these challenges is studied in Chapter 2, in which a quantile-dependent prior elicitation scheme is developed.
In particular, an extension of Zellner's prior which allows for a conditionally conjugate, quantile-dependent prior for Bayesian quantile regression is proposed. The prior is generalised in Chapter 3 by introducing a ridge parameter to address important challenges that may arise in some applications, such as multicollinearity and overfitting. The proposed prior is also used in Chapter 4 for subset selection of the fixed and random coefficients in a linear mixed-effects QR model. In Chapter 5 we specify normal-exponential prior distributions for the regression coefficients, which can provide adaptive shrinkage and represent an alternative to the Bayesian Lasso quantile regression model. For the second challenge, we assign a quantile-dependent prior over model space in Chapter 2. The prior is based on the percentage bend correlation, which depends on the quantile level. This prior is novel and is used in Bayesian regression for the first time. For the third challenge, that of computational efficiency, Gibbs samplers are derived and set up to facilitate the computation of the proposed methods. In addition to the three major challenges above, this thesis also addresses other important issues, such as regularisation in quantile regression and selecting both random and fixed effects in mixed quantile regression models.
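
The thesis builds on Zellner's g-prior, whose basic shrinkage effect is easy to state: with a zero prior mean, the posterior mean of the regression coefficients is the OLS estimate scaled by g/(1+g). A minimal sketch with hypothetical names (not the author's code):

```python
def g_prior_shrinkage(beta_ols, g):
    # Posterior mean under Zellner's g-prior with zero prior mean:
    # each OLS coefficient is shrunk by the factor g / (1 + g).
    return [g / (1.0 + g) * b for b in beta_ols]

print(g_prior_shrinkage([2.0, -4.0], 3.0))  # -> [1.5, -3.0]
```

Small g pulls the estimates strongly toward zero; large g leaves them close to OLS, which is why g acts as a tuning knob for prior informativeness.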

Book Bayesian Approaches to Parameter Estimation and Variable Selection for Misclassified Binary Data

Download or read book Bayesian Approaches to Parameter Estimation and Variable Selection for Misclassified Binary Data written by Daniel Beavers. This book was released in 2009 with a total of 109 pages. Available in PDF, EPUB and Kindle. Book excerpt: Binary misclassification is a common occurrence in statistical studies that, when ignored, induces bias in parameter estimates. The development of statistical methods to adjust for misclassification is necessary to allow for consistent estimation of parameters. In this work we develop a Bayesian framework for adjusting statistical models when fallible data collection methods produce misclassification of binary observations. In Chapter 2, we develop an approach to Bayesian variable selection for logistic regression models in which there exists a misclassified binary covariate. In this case, we require a subsample of gold-standard validation data to estimate the sensitivity and specificity of the fallible classifier. In Chapter 3, we propose a Bayesian approach for the estimation of the population prevalence of a biomarker in repeated diagnostic testing studies. In such situations, it is necessary to account for interindividual variability, which we achieve through both the inclusion of random effects within logistic regression models and Bayesian hierarchical modeling. Our examples focus on applications to both reliability studies and biostatistical studies. Finally, in Chapter 4 we develop an approach to detect conditional dependence between two fallible diagnostic tests for a binary logistic regression covariate in the absence of a gold-standard test. We compare the performance of the proposed procedure to previously published means of assessing model fit.
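
The misclassification mechanism underlying these models can be illustrated with the classical (non-Bayesian) prevalence correction: a fallible classifier with sensitivity Se and specificity Sp yields apparent prevalence p = pi*Se + (1-pi)*(1-Sp), which can be inverted for the true prevalence pi. A minimal sketch with hypothetical names (the Bayesian approaches above place priors on Se, Sp and pi rather than plugging in point estimates):

```python
def rogan_gladen(p_obs, sensitivity, specificity):
    # Invert p_obs = pi*Se + (1 - pi)*(1 - Sp) for the true prevalence pi.
    return (p_obs + specificity - 1.0) / (sensitivity + specificity - 1.0)

# With Se = 0.9, Sp = 0.95 and true prevalence 0.25, the apparent
# prevalence is 0.25*0.9 + 0.75*0.05 = 0.2625; the correction recovers 0.25.
print(rogan_gladen(0.2625, 0.9, 0.95))
```

With a perfect test (Se = Sp = 1) the correction returns the observed prevalence unchanged.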

Book On Bayesian Regression Regularization Methods

Download or read book On Bayesian Regression Regularization Methods written by Qing Li. This book was released in 2010 with a total of 88 pages. Available in PDF, EPUB and Kindle. Book excerpt: Regression regularization methods are drawing increasing attention from statisticians as high-dimensional problems appear more frequently. Regression regularization achieves simultaneous parameter estimation and variable selection by penalizing the model parameters. In the first part of this thesis, we focus on the elastic net, a flexible regularization and variable selection method that uses a mixture of L1 and L2 penalties. It is particularly useful when there are many more predictors than observations. We propose a Bayesian method to solve the elastic net model using a Gibbs sampler. While the marginal posterior mode of the regression coefficients is equivalent to the estimates given by the non-Bayesian elastic net, the Bayesian elastic net has two major advantages. Firstly, as a Bayesian method, the distributional results on the estimates are straightforward, making statistical inference easier. Secondly, it chooses the two penalty parameters simultaneously, avoiding the "double shrinkage problem" of the elastic net method. Real data examples and simulation studies show that the Bayesian elastic net behaves comparably in prediction accuracy but performs better in variable selection. The second part of this thesis investigates Bayesian regularization in quantile regression. Quantile regression is a method that models the relationship between the response variable and covariates through the population quantiles of the response variable. By proposing a hierarchical model framework, we give a generic treatment to a set of regularization approaches, including the lasso, elastic net and group lasso. Gibbs samplers are derived for all cases.
This is the first work to discuss regularized quantile regression with the elastic net penalty and the group lasso penalty. Both simulated and real data examples show that Bayesian regularized quantile regression methods often outperform quantile regression without regularization and their non-Bayesian counterparts with regularization.
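
The elastic net objective discussed above, squared-error loss plus a mixture of L1 and L2 penalties, can be written down directly. A minimal sketch with hypothetical names (the Bayesian treatment in the thesis samples from a posterior rather than minimizing this function):

```python
def elastic_net_objective(beta, X, y, lam1, lam2):
    # Residual sum of squares plus a mixture of L1 and L2 penalties.
    rss = sum((yi - sum(b * xij for b, xij in zip(beta, row))) ** 2
              for row, yi in zip(X, y))
    l1 = sum(abs(b) for b in beta)
    l2 = sum(b * b for b in beta)
    return rss + lam1 * l1 + lam2 * l2

X, y = [[1.0], [2.0]], [1.0, 2.0]
print(elastic_net_objective([1.0], X, y, 2.0, 3.0))  # perfect fit: 0 + 2*1 + 3*1 = 5.0
```

Setting lam2 = 0 recovers the lasso objective and lam1 = 0 recovers ridge, which is why the elastic net is described as a mixture of the two.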

Book Handbook of Quantile Regression

Download or read book Handbook of Quantile Regression written by Roger Koenker and published by CRC Press. This book was released on 2017-10-12 with a total of 739 pages. Available in PDF, EPUB and Kindle. Book excerpt: Quantile regression constitutes an ensemble of statistical techniques intended to estimate and draw inferences about conditional quantile functions. Median regression, as introduced in the 18th century by Boscovich and Laplace, is a special case. In contrast to conventional mean regression, which minimizes sums of squared residuals, median regression minimizes sums of absolute residuals; quantile regression simply replaces the symmetric absolute loss by an asymmetric linear loss. Since its introduction in the 1970s by Koenker and Bassett, quantile regression has been gradually extended to a wide variety of data-analytic settings including time series, survival analysis, and longitudinal data. By focusing attention on local slices of the conditional distribution of response variables, it is capable of providing a more complete, more nuanced view of heterogeneous covariate effects. Applications of quantile regression can now be found throughout the sciences, including astrophysics, chemistry, ecology, economics, finance, genomics, medicine, and meteorology. Software for quantile regression is now widely available in all the major statistical computing environments. The objective of this volume is to provide a comprehensive review of recent developments in quantile regression methodology, illustrating its applicability in a wide range of scientific settings. The intended audience of the volume is researchers and graduate students across a diverse set of disciplines.
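
The asymmetric linear ("check") loss described above can be made concrete in a few lines. A minimal sketch with hypothetical names, exploiting the fact that the summed loss is piecewise linear in q, so a minimizer always lies at one of the data points:

```python
def check_loss(u, tau):
    # Asymmetric linear ("pinball") loss: tau*u for u >= 0, (tau-1)*u otherwise.
    return tau * u if u >= 0 else (tau - 1) * u

def sample_quantile(data, tau):
    # The tau-th sample quantile minimizes the summed check loss;
    # searching over the data points is enough for a piecewise-linear objective.
    return min(data, key=lambda q: sum(check_loss(y - q, tau) for y in data))

data = [1, 2, 3, 4, 100]
print(sample_quantile(data, 0.5))  # median -> 3 (robust to the outlier)
print(sample_quantile(data, 0.9))  # upper quantile -> 100
```

At tau = 0.5 the loss is symmetric and the minimizer is the median; tilting tau toward 1 penalizes underestimates more heavily and pushes the fit toward the upper tail, which is the whole idea behind quantile regression.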

Book Handbook of Bayesian Variable Selection

Download or read book Handbook of Bayesian Variable Selection written by Mahlet G. Tadesse and published by CRC Press. This book was released on 2021-12-24 with a total of 762 pages. Available in PDF, EPUB and Kindle. Book excerpt: Bayesian variable selection has experienced substantial developments over the past 30 years with the proliferation of large data sets. Identifying relevant variables to include in a model allows simpler interpretation, avoids overfitting and multicollinearity, and can provide insights into the mechanisms underlying an observed phenomenon. Variable selection is especially important when the number of potential predictors is substantially larger than the sample size and sparsity can reasonably be assumed. The Handbook of Bayesian Variable Selection provides a comprehensive review of theoretical, methodological and computational aspects of Bayesian methods for variable selection. The topics covered include spike-and-slab priors, continuous shrinkage priors, Bayes factors, Bayesian model averaging, partitioning methods, as well as variable selection in decision trees and edge selection in graphical models. The handbook targets graduate students and established researchers who seek to understand the latest developments in the field. It also provides a valuable reference for all interested in applying existing methods and/or pursuing methodological extensions. Features:
  • Provides a comprehensive review of methods and applications of Bayesian variable selection.
  • Divided into four parts: Spike-and-Slab Priors; Continuous Shrinkage Priors; Extensions to Various Modeling; Other Approaches to Bayesian Variable Selection.
  • Covers theoretical and methodological aspects, as well as worked-out examples with R code provided in the online supplement.
  • Includes contributions by experts in the field.
  • Supported by a website with code, data, and other supplementary material.

Book Essays on Bayesian Time Series and Variable Selection

Download or read book Essays on Bayesian Time Series and Variable Selection written by Debkumar De. This book was released in 2015. Available in PDF, EPUB and Kindle. Book excerpt: Estimating model parameters in dynamic models continues to be a challenge. In this dissertation, we introduce a stochastic-approximation-based parameter estimation approach under an Ensemble Kalman Filter set-up. Asymptotic properties of the resulting estimates are discussed. We compare our proposed method to current methods via simulation studies, and we demonstrate its predictive performance on a large spatio-temporal data set. In the second topic, we present a method for simultaneous estimation of regression parameters and the covariance matrix, developed for a nonparametric Seemingly Unrelated Regression problem. This is a very flexible modeling technique that essentially performs sparse high-dimensional regression with multiple predictors (p) and multiple responses (q), where the responses may be correlated. Such data appear abundantly in genomics, finance and econometrics. We illustrate and compare the performance of our proposed techniques with previous analyses using both simulated and real multivariate data arising in econometrics and government. The electronic version of this dissertation is accessible from http://hdl.handle.net/1969.1/152793

Book Bayesian Approaches to Shrinkage and Sparse Estimation

Download or read book Bayesian Approaches to Shrinkage and Sparse Estimation written by Dimitris Korobilis. This book was released on 2022-06-29 with a total of 136 pages. Available in PDF, EPUB and Kindle. Book excerpt: Bayesian Approaches to Shrinkage and Sparse Estimation introduces the reader to the world of Bayesian model determination by surveying modern shrinkage and variable selection algorithms and methodologies. Bayesian inference is a natural probabilistic framework for quantifying uncertainty and learning about model parameters, and this feature is particularly important for inference in modern models of high dimension and increased complexity. The authors begin with a linear regression setting in order to introduce various classes of priors that lead to shrinkage/sparse estimators of comparable value to popular penalized likelihood estimators (e.g. ridge, LASSO). They examine various methods of exact and approximate inference, and discuss their pros and cons. Finally, they explore how priors developed for the simple regression setting can be extended in a straightforward way to various classes of interesting econometric models. In particular, the following case studies demonstrate the application of Bayesian shrinkage and variable selection strategies to popular econometric contexts: i) vector autoregressive models; ii) factor models; iii) time-varying parameter regressions; iv) confounder selection in treatment effects models; and v) quantile regression models. A MATLAB package and an accompanying technical manual allow the reader to replicate many of the algorithms described in this review.
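
The ridge estimator mentioned above is the standard example of a penalized estimator with a Bayesian interpretation (it is the posterior mean under a Gaussian prior on the coefficients). In the single-predictor, no-intercept case it has a one-line closed form; a minimal sketch with hypothetical names:

```python
def ridge_1d(x, y, lam):
    # Closed-form ridge estimate for one predictor and no intercept:
    # beta = sum(x_i*y_i) / (sum(x_i^2) + lambda); lambda = 0 gives OLS.
    return sum(xi * yi for xi, yi in zip(x, y)) / (sum(xi * xi for xi in x) + lam)

x, y = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
print(ridge_1d(x, y, 0.0))   # OLS slope -> 2.0
print(ridge_1d(x, y, 14.0))  # shrunk toward zero -> 1.0
```

Increasing lambda only enlarges the denominator, so the estimate is always pulled toward zero relative to OLS: the shrinkage behavior the surveyed priors are designed to reproduce.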

Book Quantile Regression

    Book Details:
  • Author : Cristina Davino
  • Publisher : John Wiley & Sons
  • Release : 2013-12-31
  • ISBN : 111997528X
  • Pages : 288 pages

Download or read book Quantile Regression written by Cristina Davino and published by John Wiley & Sons. This book was released on 2013-12-31 with a total of 288 pages. Available in PDF, EPUB and Kindle. Book excerpt: A guide to the implementation and interpretation of quantile regression models. This book explores the theory and numerous applications of quantile regression, offering empirical data analysis as well as the software tools to implement the methods. Its main focus is to provide the reader with a comprehensive description of the main issues concerning quantile regression, including basic modeling, geometrical interpretation, estimation and inference, as well as issues of model validity and diagnostic tools. Each methodological aspect is explored and followed by applications using real data. Quantile Regression: Presents a complete treatment of quantile regression methods, including estimation, inference issues and applications of the methods. Delivers a balance between methodology and application. Offers an overview of recent developments in the quantile regression framework and of why to use quantile regression in a variety of areas such as economics, finance and computing. Features a supporting website (www.wiley.com/go/quantile_regression) hosting datasets along with R, Stata and SAS software code. Researchers and PhD students in the fields of statistics, economics, econometrics, social and environmental science and chemistry will benefit from this book.

Book Adjustment Uncertainty and Variable Selection in a Bayesian Context

Download or read book Adjustment Uncertainty and Variable Selection in a Bayesian Context written by Andrew James Dennis Henrey. This book was released in 2012 with a total of 72 pages. Available in PDF, EPUB and Kindle. Book excerpt: Bayesian Model Averaging (BMA) has previously been proposed as a solution to the variable selection problem when there is uncertainty about the true model in regression. Some recent research discusses the drawbacks; specifically, BMA can (and does) give biased parameter estimates in the presence of confounding. This is because BMA is optimized for prediction rather than parameter estimation. Though some newer research attempts to fix the issue of bias under confounding, none of the current algorithms handle either large data sets or survival outcomes. The Approximate Two-phase Bayesian Adjustment for Confounding (ATBAC) algorithm proposed in this thesis does both, and we use it on a large medical cohort study called THIN (The Health Improvement Network) to estimate the effect of statins on risk of stroke. We use simulation and some analytical techniques to discuss two main topics. Firstly, we demonstrate the ability of ATBAC to perform unbiased parameter estimation on survival data while accounting for model uncertainty. Secondly, we discuss when it is, and isn't, helpful to use variable selection techniques in the first place, and find that in some large data sets variable selection for parameter estimation is unnecessary.

Book Consistent Bayesian Learning for Neural Network Models

Download or read book Consistent Bayesian Learning for Neural Network Models written by Sanket Rajendra Jantre. This book was released in 2022. Available in PDF, EPUB and Kindle. Book excerpt: Bayesian neural networks, the Bayesian framework adapted to neural network learning, have received widespread attention and have been successfully applied to various applications. Bayesian inference for neural networks promises improved predictions with reliable uncertainty estimates, robustness, principled model comparison, and decision-making under uncertainty. In this dissertation, we propose novel, theoretically consistent Bayesian neural network models and provide computationally efficient posterior inference algorithms for them. In Chapter 2, we introduce a Bayesian quantile regression neural network assuming an asymmetric Laplace distribution for the response variable. The normal-exponential mixture representation of the asymmetric Laplace density is utilized to derive a Gibbs sampler coupled with a Metropolis-Hastings algorithm for posterior inference. We establish posterior consistency under a misspecified asymmetric Laplace density model. We illustrate the proposed method with simulation studies and real data examples. Traditional Bayesian learning methods are limited in their scalability to large data and feature spaces due to expensive inference; however, recent developments in variational inference techniques and sparse learning have brought renewed interest to this area. Sparse deep neural networks have proven to be efficient for predictive model building in large-scale studies. Although several works have studied theoretical and numerical properties of sparse neural architectures, they have primarily focused on edge selection. In Chapter 3, we propose a sparse Bayesian technique using a spike-and-slab Gaussian prior to allow for automatic node selection.
The spike-and-slab prior alleviates the need for an ad-hoc thresholding rule for pruning. In addition, we adopt a variational Bayes approach to circumvent the computational challenges of a traditional Markov chain Monte Carlo implementation. In the context of node selection, we establish variational posterior consistency together with a layer-wise characterization of the prior inclusion probabilities. We empirically demonstrate that our proposed approach outperforms the edge selection method in computational complexity with similar or better predictive performance. Structured sparsity (e.g. node sparsity) in deep neural networks provides low-latency inference, higher data throughput, and reduced energy consumption. Alternatively, there is a growing literature demonstrating the shrinkage efficiency and theoretical optimality, in linear models, of two sparse parameter estimation techniques: the lasso and the horseshoe. In Chapter 4, we propose structurally sparse Bayesian neural networks which systematically prune excessive nodes with (i) Spike-and-Slab Group Lasso and (ii) Spike-and-Slab Group Horseshoe priors, and we develop computationally tractable variational inference. We demonstrate the competitive performance of our proposed models compared to Bayesian baseline models in prediction accuracy, model compression, and inference latency. Deep neural network ensembles that appeal to model diversity have been used successfully to improve predictive performance and model robustness in several applications. However, most ensembling techniques require multiple parallel and costly evaluations and have been proposed primarily for deterministic models. In Chapter 5, we propose sequential ensembling of dynamic Bayesian neural subnetworks to generate a diverse ensemble in a single forward pass.
The ensembling strategy consists of an exploration phase that finds high-performing regions of the parameter space, followed by multiple exploitation phases that exploit the compactness of the sparse model to quickly converge to different minima in the energy landscape corresponding to high-performing subnetworks, yielding diverse ensembles. We empirically demonstrate that our proposed approach surpasses dense frequentist and Bayesian ensemble baselines in prediction accuracy, uncertainty estimation, and out-of-distribution robustness. Furthermore, our approach produces the most diverse ensembles compared to approaches with a single forward pass, and in some cases even compared to approaches with multiple forward passes.
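
The Gibbs sampler in Chapter 2 rests on the normal-exponential mixture representation of the asymmetric Laplace density mentioned above: a draw can be generated as a normal whose mean and variance both depend on an exponential mixing variable. A minimal sampling sketch with hypothetical names (location mu, scale sigma, quantile level tau):

```python
import math
import random

def sample_ald(mu, sigma, tau, n, rng):
    # Normal-exponential mixture representation of the asymmetric Laplace
    # distribution: y = mu + sigma*(theta*z + psi*sqrt(z)*u),
    # with z ~ Exp(1) and u ~ N(0, 1).
    theta = (1 - 2 * tau) / (tau * (1 - tau))
    psi = math.sqrt(2 / (tau * (1 - tau)))
    draws = []
    for _ in range(n):
        z = rng.expovariate(1.0)
        u = rng.gauss(0.0, 1.0)
        draws.append(mu + sigma * (theta * z + psi * math.sqrt(z) * u))
    return draws

rng = random.Random(0)
tau = 0.25
ys = sample_ald(0.0, 1.0, tau, 20000, rng)
frac_below = sum(y <= 0.0 for y in ys) / len(ys)  # should be close to tau
```

Because mu is the tau-th quantile of this distribution, the empirical fraction of draws at or below mu should sit near tau; this is what makes the asymmetric Laplace a working likelihood for quantile regression.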

Book Nonparametric Regression Using Bayesian Variable Selection

Download or read book Nonparametric Regression Using Bayesian Variable Selection written by Michael Smith. This book was released in 1994 with a total of 29 pages. Available in PDF, EPUB and Kindle. Book excerpt:

Book Generalized Linear Models for Bounded and Limited Quantitative Variables

Download or read book Generalized Linear Models for Bounded and Limited Quantitative Variables written by Michael Smithson and published by SAGE Publications. This book was released on 2019-09-09 with a total of 136 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book introduces researchers and students to the concepts and generalized linear models for analyzing quantitative random variables that have one or more bounds. Examples of bounded variables include the percentage of a population eligible to vote (bounded from 0 to 100), or reaction time in milliseconds (bounded below by 0). The human sciences deal in many variables that are bounded. Ignoring bounds can result in misestimation and improper statistical inference. Michael Smithson and Yiyun Shou's book brings together material on the analysis of limited and bounded variables that is scattered across the literature in several disciplines, and presents it in a style that is both more accessible and up-to-date. The authors provide worked examples in each chapter using real datasets from a variety of disciplines. The software used for the examples includes R, SAS, and Stata. The data, software code, and detailed explanations of the example models are available on an accompanying website.

Book Jointness in Bayesian Variable Selection with Applications to Growth Regression

Download or read book Jointness in Bayesian Variable Selection with Applications to Growth Regression, published by World Bank Publications, with a total of 17 pages. Available in PDF, EPUB and Kindle. Book excerpt:

Book Bayesian Inference

    Book Details:
  • Author : Hanns L. Harney
  • Publisher : Springer Science & Business Media
  • Release : 2013-03-14
  • ISBN : 366206006X
  • Pages : 275 pages

Download or read book Bayesian Inference written by Hanns L. Harney and published by Springer Science & Business Media. This book was released on 2013-03-14 with a total of 275 pages. Available in PDF, EPUB and Kindle. Book excerpt: Solving a longstanding problem in the physical sciences, this text and reference generalizes Gaussian error intervals to situations in which the data follow distributions other than Gaussian. The text is written at introductory level, with many examples and exercises.

Book Bayesian Hierarchical Models

Download or read book Bayesian Hierarchical Models written by Peter D. Congdon and published by CRC Press. This book was released on 2019-09-16 with a total of 580 pages. Available in PDF, EPUB and Kindle. Book excerpt: An intermediate-level treatment of Bayesian hierarchical models and their applications, this book demonstrates the advantages of a Bayesian approach to data sets involving inferences for collections of related units or variables, and in methods where parameters can be treated as random collections. Through illustrative data analysis and attention to statistical computing, this book facilitates practical implementation of Bayesian hierarchical methods. The new edition is a revision of the book Applied Bayesian Hierarchical Methods. It maintains a focus on applied modelling and data analysis, but now uses entirely R-based Bayesian computing options. It has been updated with a new chapter on regression for causal effects, and one on computing options and strategies. This latter chapter is particularly important, due to recent advances in Bayesian computing and estimation, including the development of rjags and rstan. It also features updates throughout with new examples. The examples exploit and illustrate the broader advantages of the R computing environment, while allowing readers to explore alternative likelihood assumptions, regression structures, and assumptions on prior densities. Features:
  • Provides a comprehensive and accessible overview of applied Bayesian hierarchical modelling.
  • Includes many real data examples to illustrate different modelling topics.
  • Integrates R code (based on rjags, jagsUI, R2OpenBUGS, and rstan) into the book, emphasizing implementation.
  • Introduces software options and coding principles in a new chapter on computing.
  • Makes programs and data sets available on the book’s website.
