EBookClubs

Read Books & Download eBooks Full Online

Book Large Covariance and Autocovariance Matrices

Download or read book Large Covariance and Autocovariance Matrices written by Arup Bose and published by CRC Press. This book was released on 2018-07-03 with a total of 359 pages. Available in PDF, EPUB and Kindle. Book excerpt: Large Covariance and Autocovariance Matrices brings together a collection of recent results on sample covariance and autocovariance matrices in high-dimensional models and novel ideas on how to use them for statistical inference in one or more high-dimensional time series models. The prerequisites include knowledge of elementary multivariate analysis, basic time series analysis and basic results in stochastic convergence. Part I is on different methods of estimation of large covariance matrices and auto-covariance matrices and properties of these estimators. Part II covers the relevant material on random matrix theory and non-commutative probability. Part III provides results on limit spectra and asymptotic normality of traces of symmetric matrix polynomial functions of sample auto-covariance matrices in high-dimensional linear time series models. These are used to develop graphical and significance tests for different hypotheses involving one or more independent high-dimensional linear time series. The book should be of interest to people in econometrics and statistics (large covariance matrices and high-dimensional time series), mathematics (random matrices and free probability) and computer science (wireless communication). Parts of it can be used in post-graduate courses on high-dimensional statistical inference, high-dimensional random matrices and high-dimensional time series models. It should be particularly attractive to researchers developing statistical methods in high-dimensional time series models. Arup Bose is a professor at the Indian Statistical Institute, Kolkata, India. He is a distinguished researcher in mathematical statistics and has been working in high-dimensional random matrices for the last fifteen years.
He has been editor of Sankhyā for several years and has been on the editorial board of several other journals. He is a Fellow of the Institute of Mathematical Statistics, USA and all three national science academies of India, as well as the recipient of the S.S. Bhatnagar Award and the C.R. Rao Award. His first book Patterned Random Matrices was also published by Chapman & Hall. He has a forthcoming graduate text U-statistics, M-estimates and Resampling (with Snigdhansu Chatterjee) to be published by Hindustan Book Agency. Monika Bhattacharjee is a post-doctoral fellow at the Informatics Institute, University of Florida. After graduating from St. Xavier's College, Kolkata, she obtained her master’s in 2012 and PhD in 2016 from the Indian Statistical Institute. Her thesis on high-dimensional covariance and auto-covariance matrices, written under the supervision of Dr. Bose, has received high acclaim.

Book High Dimensional Covariance Matrix Estimation

Download or read book High Dimensional Covariance Matrix Estimation written by Aygul Zagidullina and published by Springer Nature. This book was released on 2021-10-29 with a total of 123 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book presents covariance matrix estimation and related aspects of random matrix theory. It focuses on the sample covariance matrix estimator and provides a holistic description of its properties under two asymptotic regimes: the traditional one, and the high-dimensional regime that better fits the big data context. It draws attention to the deficiencies of standard statistical tools when used in the high-dimensional setting, and introduces the basic concepts and major results related to spectral statistics and random matrix theory under high-dimensional asymptotics in an understandable and reader-friendly way. The aim of this book is to inspire applied statisticians, econometricians, and machine learning practitioners who analyze high-dimensional data to apply the recent developments in their work.

Book High Dimensional Covariance Estimation

Download or read book High Dimensional Covariance Estimation written by Mohsen Pourahmadi and published by John Wiley & Sons. This book was released on 2013-06-24 with a total of 204 pages. Available in PDF, EPUB and Kindle. Book excerpt: Methods for estimating sparse and large covariance matrices. Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and machine learning. Recently, the classical sample covariance methodologies have been modified and improved upon to meet the needs of statisticians and researchers dealing with large correlated datasets. High-Dimensional Covariance Estimation focuses on the methodologies based on shrinkage, thresholding, and penalized likelihood with applications to Gaussian graphical models, prediction, and mean-variance portfolio management. The book relies heavily on regression-based ideas and interpretations to connect and unify many existing methods and algorithms for the task. High-Dimensional Covariance Estimation features chapters on: Data, Sparsity, and Regularization; Regularizing the Eigenstructure; Banding, Tapering, and Thresholding Covariance Matrices; Sparse Gaussian Graphical Models; and Multivariate Regression. The book is an ideal resource for researchers in statistics, mathematics, business and economics, computer sciences, and engineering, as well as a useful text or supplement for graduate-level courses in multivariate analysis, covariance estimation, statistical learning, and high-dimensional data analysis.
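
The thresholding approach this book surveys can be sketched in a few lines of NumPy. This is an illustrative sketch, not the book's code; the rate-based threshold lam below is a common textbook choice and is an assumption here:

```python
import numpy as np

# Hard-thresholding a sample covariance matrix: keep entries whose
# magnitude exceeds a threshold, zero out the rest.  The threshold
# lam = 2*sqrt(log(p)/n) is a standard theoretical rate, chosen here
# purely for illustration.

rng = np.random.default_rng(0)
n, p = 50, 100                       # fewer samples than dimensions
X = rng.standard_normal((n, p))      # true covariance is the identity

S = (X.T @ X) / n                    # sample covariance (mean known to be 0)
lam = 2 * np.sqrt(np.log(p) / n)

S_thr = np.where(np.abs(S) >= lam, S, 0.0)
np.fill_diagonal(S_thr, np.diag(S))  # leave the diagonal untouched

print(S_thr.shape)                             # (100, 100)
print((S_thr != 0).mean() < (S != 0).mean())   # True: thresholding induces sparsity
```

Note that, as the next entry below points out, the thresholded estimate is not guaranteed to be positive definite in finite samples; that gap motivates the decomposition-based methods described elsewhere on this page.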

Book A Novel Two Stage Adaptive Method for Estimating Large Covariance and Precision Matrices

Download or read book A Novel Two Stage Adaptive Method for Estimating Large Covariance and Precision Matrices written by Rajanikanth Rajendran and published by . This book was released on 2019 with a total of 73 pages. Available in PDF, EPUB and Kindle. Book excerpt: Estimating large covariance and precision (inverse covariance) matrices has become increasingly important in high dimensional statistics because of its wide applications. The estimation problem is challenging not only theoretically, due to the constraint of positive definiteness, but also computationally, because of the curse of dimensionality. Many types of estimators have been proposed, such as thresholding under the sparsity assumption of the target matrix, and banding and tapering the sample covariance matrix. However, these estimators are not always guaranteed to be positive definite, especially for finite samples, and the sparsity assumption is rather restrictive. We propose a novel two-stage adaptive method based on the Cholesky decomposition of a general covariance matrix. By banding the precision matrix in the first stage and adapting the estimates to the second stage estimation, we develop a computationally efficient and statistically accurate method for estimating high dimensional precision matrices. We demonstrate the finite-sample performance of the proposed method by simulations from autoregressive, moving average, and long-range dependent processes. We illustrate its wide applicability by analyzing financial data such as the S&P 500 index and IBM stock returns, and electric power consumption of individual households. The theoretical properties of the proposed method are also investigated within a large class of covariance matrices.
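
One ingredient of such Cholesky-based approaches can be sketched as follows: a banded modified-Cholesky estimate of the precision matrix, where each variable is regressed on its k immediate predecessors only. The bandwidth k and the AR(1)-style data below are illustrative assumptions, not the thesis's actual two-stage procedure:

```python
import numpy as np

# Banded modified-Cholesky precision estimate: Omega = T' D^{-1} T with a
# banded unit lower-triangular T.  The construction is positive definite
# by design, which is the advantage over thresholding/banding S directly.

rng = np.random.default_rng(1)
n, p, k = 200, 30, 2

# AR(1)-type data, for which a banded precision matrix is roughly correct
X = np.zeros((n, p))
X[:, 0] = rng.standard_normal(n)
for j in range(1, p):
    X[:, j] = 0.5 * X[:, j - 1] + rng.standard_normal(n)

T = np.eye(p)                 # unit lower-triangular Cholesky factor
d = np.empty(p)               # innovation variances
d[0] = X[:, 0].var()
for j in range(1, p):
    lo = max(0, j - k)
    Z = X[:, lo:j]                                   # k nearest predecessors
    phi, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
    T[j, lo:j] = -phi
    d[j] = (X[:, j] - Z @ phi).var()

Omega = T.T @ (np.diag(1.0 / d) @ T)   # banded precision matrix estimate

# Positive definite by construction:
print(np.all(np.linalg.eigvalsh(Omega) > 0))   # True
```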

Book Estimation of Autocovariance Matrices for Infinite Dimensional Vector Linear Process

Download or read book Estimation of Autocovariance Matrices for Infinite Dimensional Vector Linear Process written by Monika Bhattacharjee and published by . This book was released in 2014. Available in PDF, EPUB and Kindle. Book excerpt: Consider an infinite dimensional vector linear process. Under suitable assumptions on the parameter space, we provide consistent estimators of the autocovariance matrices. In particular, under causality, this includes the infinite-dimensional vector autoregressive (IVAR) process. In that case, we obtain consistent estimators for the parameter matrices. An explicit expression for the estimators is obtained for IVAR(1), under a fairly realistic parameter space. We also show that under some mild restrictions, the consistent estimator of the marginal large dimensional variance-covariance matrix has the same convergence rate as that in the case of i.i.d. samples.
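
The basic object under study here, the lag-h sample autocovariance matrix, is easy to sketch for a finite-dimensional VAR(1). The simulation below is an illustrative assumption (a simple diagonal coefficient matrix), not the IVAR setting of the thesis:

```python
import numpy as np

# Sample autocovariance Gamma_hat(h) = (1/n) * sum_t X_{t+h} X_t^T of a
# p-dimensional centered time series, and its use to recover the VAR(1)
# coefficient matrix via Gamma(1) = A Gamma(0).

rng = np.random.default_rng(2)
n, p = 500, 10
A = 0.4 * np.eye(p)                 # a simple VAR(1) coefficient matrix

X = np.zeros((n, p))
for t in range(1, n):
    X[t] = X[t - 1] @ A.T + rng.standard_normal(p)

def sample_autocov(X, h):
    """Lag-h sample autocovariance matrix (process assumed centered)."""
    n = X.shape[0]
    return X[h:].T @ X[:n - h] / n

G0 = sample_autocov(X, 0)           # lag 0: the usual sample covariance
G1 = sample_autocov(X, 1)

A_hat = G1 @ np.linalg.inv(G0)      # moment estimator of A
print(np.abs(A_hat - A).max() < 0.2)   # True at this sample size
```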

Book Patterned Random Matrices

Download or read book Patterned Random Matrices written by Arup Bose and published by CRC Press. This book was released on 2018-05-23 with a total of 269 pages. Available in PDF, EPUB and Kindle. Book excerpt: Large dimensional random matrices (LDRM) with specific patterns arise in econometrics, computer science, mathematics, physics, and statistics. This book provides an easy initiation to LDRM. Through a unified approach, we investigate the existence and properties of the limiting spectral distribution (LSD) of different patterned random matrices as the dimension grows. The main ingredients are the method of moments and normal approximation with rudimentary combinatorics for support. Some elementary results from matrix theory are also used. By stretching the moment arguments, we also have a brush with the intriguing but difficult concepts of joint convergence of sequences of random matrices and its ramifications. This book covers the Wigner matrix, the sample covariance matrix, the Toeplitz matrix, the Hankel matrix, the sample autocovariance matrix and the k-Circulant matrices. Quick and simple proofs of their LSDs are provided and it is shown how the semi-circle law and the Marchenko-Pastur law arise as the LSDs of the first two matrices. Extending the basic approach, we also establish interesting limits for some triangular matrices, band matrices, balanced matrices, and the sample autocovariance matrix. We also study the joint convergence of several patterned matrices, and show that independent Wigner matrices converge jointly and are asymptotically free of other patterned matrices. Arup Bose is a Professor at the Indian Statistical Institute, Kolkata, India. He is a distinguished researcher in Mathematical Statistics and has been working in high-dimensional random matrices for the last fifteen years. He has been the Editor of Sankhyā for several years and has been on the editorial board of several other journals.
He is a Fellow of the Institute of Mathematical Statistics, USA and all three national science academies of India, as well as the recipient of the S.S. Bhatnagar Award and the C.R. Rao Award. His forthcoming books are the monograph, Large Covariance and Autocovariance Matrices (with Monika Bhattacharjee), to be published by Chapman & Hall/CRC Press, and a graduate text, U-statistics, M-estimates and Resampling (with Snigdhansu Chatterjee), to be published by Hindustan Book Agency.
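
The Marchenko-Pastur law mentioned above is easy to see numerically; the dimensions below are arbitrary illustrative choices:

```python
import numpy as np

# For i.i.d. data with identity covariance and p/n -> y, the eigenvalues of
# the sample covariance matrix fill the Marchenko-Pastur support
# [(1 - sqrt(y))^2, (1 + sqrt(y))^2].

rng = np.random.default_rng(3)
n, p = 2000, 500                    # aspect ratio y = p/n = 0.25
X = rng.standard_normal((n, p))
S = X.T @ X / n

eigs = np.linalg.eigvalsh(S)
y = p / n
lo, hi = (1 - np.sqrt(y)) ** 2, (1 + np.sqrt(y)) ** 2   # 0.25 and 2.25

# Almost all eigenvalues land inside the support (edge fluctuations shrink
# like n^{-2/3}):
inside = np.mean((eigs > lo - 0.1) & (eigs < hi + 0.1))
print(inside > 0.99)   # True
```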

Book Probability and Stochastic Processes

Download or read book Probability and Stochastic Processes written by Siva Athreya and published by Springer Nature. This book has a total of 207 pages. Available in PDF, EPUB and Kindle.

Book Shrinkage Estimation of Large Covariance Matrices

Download or read book Shrinkage Estimation of Large Covariance Matrices written by Olivier Ledoit and published by . This book was released in 2019. Available in PDF, EPUB and Kindle. Book excerpt: Under rotation-equivariant decision theory, sample covariance matrix eigenvalues can be optimally shrunk by recombining sample eigenvectors with a (potentially nonlinear) function of the unobservable population covariance matrix. The optimal shape of this function reflects the loss/risk that is to be minimized. We introduce a broad family of covariance matrix estimators that can handle all regular functional transformations of the population covariance matrix under large-dimensional asymptotics. We solve the problem of optimal covariance matrix estimation under a variety of loss functions motivated by statistical precedent, probability theory, and differential geometry. The key statistical ingredient of our nonlinear shrinkage methodology is a new estimator of the angle between sample and population eigenvectors, without making strong assumptions on the population eigenvalues. We also compare our methodology to two simpler ones from the literature, linear shrinkage and shrinkage based on the spiked covariance model, via both Monte Carlo simulations and an empirical application.
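
The simplest of the estimators compared above, linear shrinkage, can be sketched directly. The shrinkage intensity rho is fixed by hand here as an illustrative assumption; Ledoit-Wolf-style methods estimate it from the data:

```python
import numpy as np

# Linear shrinkage: pull the sample covariance towards a scaled identity,
# Sigma_hat = (1 - rho) * S + rho * mu * I, with mu = tr(S)/p.

rng = np.random.default_rng(4)
n, p = 60, 100
X = rng.standard_normal((n, p))     # true covariance: identity
S = X.T @ X / n                     # singular, since p > n

rho = 0.5                           # hypothetical fixed shrinkage intensity
mu = np.trace(S) / p                # scale of the identity target
Sigma_hat = (1 - rho) * S + rho * mu * np.eye(p)

# Unlike S, the shrunk estimate is positive definite even when p > n:
print(np.linalg.eigvalsh(S).min() <= 1e-8)         # True: S is singular
print(np.linalg.eigvalsh(Sigma_hat).min() > 0)     # True
```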

Book Quadratic Shrinkage for Large Covariance Matrices

Download or read book Quadratic Shrinkage for Large Covariance Matrices written by Olivier Ledoit and published by . This book was released in 2019. Available in PDF, EPUB and Kindle. Book excerpt: This paper constructs a new estimator for large covariance matrices by drawing a bridge between the classic Stein (1975) estimator in finite samples and recent progress under large-dimensional asymptotics. Our formula is quadratic: it has two shrinkage targets weighted by quadratic functions of the concentration ratio (matrix dimension divided by sample size, a standard measure of the curse of dimensionality). The first target dominates mid-level concentrations and the second one higher levels. This extra degree of freedom enables us to outperform linear shrinkage when optimal shrinkage is not linear (which is the general case). Both of our targets are based on what we term the "Stein shrinker", a local attraction operator that pulls sample covariance matrix eigenvalues towards their nearest neighbors, but whose force diminishes with distance, like gravitation. We prove that no cubic or higher-order nonlinearities beat quadratic with respect to Frobenius loss under large-dimensional asymptotics. Non-normality and the case where the matrix dimension exceeds the sample size are accommodated. Monte Carlo simulations confirm state-of-the-art performance in terms of accuracy, speed, and scalability.

Book Random Matrices and Non Commutative Probability

Download or read book Random Matrices and Non Commutative Probability written by Arup Bose and published by CRC Press. This book was released on 2021-10-26 with a total of 287 pages. Available in PDF, EPUB and Kindle. Book excerpt: This is an introductory book on Non-Commutative Probability or Free Probability and Large Dimensional Random Matrices. Basic concepts of free probability are introduced by analogy with classical probability in a lucid and quick manner. It then develops the results on the convergence of large dimensional random matrices, with a special focus on the interesting connections to free probability. The book assumes almost no prerequisites for the most part. However, familiarity with the basic convergence concepts in probability and a bit of mathematical maturity will be helpful. Combinatorial properties of non-crossing partitions, including the Möbius function, play a central role in introducing free probability. Free independence is defined via free cumulants in analogy with the way classical independence can be defined via classical cumulants. Free cumulants are introduced through the Möbius function. Free product probability spaces are constructed using free cumulants. Marginal and joint tracial convergence of large dimensional random matrices such as the Wigner, elliptic, sample covariance, cross-covariance, Toeplitz, Circulant and Hankel are discussed. Convergence of the empirical spectral distribution is discussed for symmetric matrices. Asymptotic freeness results for random matrices, including some recent ones, are discussed in detail. These clarify the structure of the limits for joint convergence of random matrices. Asymptotic freeness of independent sample covariance matrices is also demonstrated via embedding into Wigner matrices. Exercises, at advanced undergraduate and graduate level, are provided in each chapter.

Book Application of Large Random Matrices to Multivariate Time Series Analysis

Download or read book Application of Large Random Matrices to Multivariate Time Series Analysis written by Daria Tieplova and published by . This book was released in 2020. Available in PDF, EPUB and Kindle. Book excerpt: A number of recent works proposed to use large random matrix theory in the context of high-dimensional statistical signal processing, traditionally modeled by a double asymptotic regime in which the dimension of the time series and the sample size both grow towards infinity. These contributions essentially addressed detection or estimation schemes depending on functionals of the sample covariance matrix of the observation. However, fundamental high-dimensional time series problems depend on matrices that are more complicated than the sample covariance matrix. The purpose of the present PhD is to study the behaviour of the singular values of two kinds of structured large random matrices, and to use the corresponding results to address an important statistical problem. More specifically, the observation (y_n), n ∈ Z, is supposed to be a noisy version of an M-dimensional time series (u_n), n ∈ Z, with rational spectrum that has some particular low rank structure, the additive noise (v_n), n ∈ Z, being an independent identically distributed sequence of complex Gaussian vectors with unknown covariance matrix. An important statistical problem is the estimation of the minimal dimension P of the state space representations of u from N samples y_1, ..., y_N. If L is any integer larger than P, the traditional approaches are based on the observation that P coincides with the rank of the autocovariance matrix R^L_{f|p} between the ML-dimensional random vectors (y_{n+L}^T, ..., y_{n+2L-1}^T)^T and (y_n^T, ..., y_{n+L-1}^T)^T, as well as with the number of non-zero singular values of the normalized matrix C^L = (R^L)^{-1/2} R^L_{f|p} (R^L)^{-1/2}, where R^L represents the covariance matrix of the above ML-dimensional vectors.
In the low-dimensional regime where N → +∞ while M and L are fixed, the matrices R^L_{f|p} and C^L can be consistently estimated by their empirical counterparts hat{R}^L_{f|p} and hat{C}^L, and P can be evaluated from the largest singular values of hat{R}^L_{f|p} and hat{C}^L. If however M and N → +∞ in such a way that ML/N converges towards 0 < c* ...
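
The classical low-dimensional version of this problem can be sketched numerically: estimate the order P of a hidden state from the singular values of an empirical autocovariance matrix between future and past blocks. The scalar-state model, the block length L and the threshold rule below are hypothetical illustrative choices, not those of the thesis:

```python
import numpy as np

# A single AR(1) state (P = 1) observed through M noisy channels; the
# ML x ML future/past cross-covariance then has rank 1 in population, and
# counting its dominant singular values recovers P.

rng = np.random.default_rng(5)
N, M, L = 20000, 4, 3

state = np.zeros(N)
for t in range(1, N):
    state[t] = 0.8 * state[t - 1] + rng.standard_normal()
y = state[:, None] + 0.1 * rng.standard_normal((N, M))  # all channels load on one state

def block(y, start, L):
    """Stack L consecutive observations per row: rows are ML-dimensional."""
    n_rows = len(y) - 2 * L + 1
    return np.hstack([y[start + l : start + l + n_rows] for l in range(L)])

past = block(y, 0, L)
future = block(y, L, L)
R_fp = future.T @ past / past.shape[0]   # empirical ML x ML autocovariance

sv = np.linalg.svd(R_fp, compute_uv=False)
P_hat = int(np.sum(sv > 0.2 * sv[0]))    # count dominant singular values
print(P_hat)                             # recovers the single hidden state
```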

Book Large Dimensional Covariance Matrix Estimation with Decomposition based Regularization

Download or read book Large Dimensional Covariance Matrix Estimation with Decomposition based Regularization written by and published by . This book was released on 2014 with a total of 129 pages. Available in PDF, EPUB and Kindle. Book excerpt: Estimation of population covariance matrices from samples of multivariate data is of great importance. When the dimension of a covariance matrix is large but the sample size is limited, it is well known that the sample covariance matrix is unsatisfactory. However, the improvement of covariance matrix estimation is not straightforward, mainly because of the constraint of positive definiteness. This thesis work considers decomposition-based methods to circumvent this primary difficulty. Two ways of covariance matrix estimation with regularization on factor matrices from decompositions are included. One approach relies on the modified Cholesky decomposition from Pourahmadi, and the other technique, matrix exponential or matrix logarithm, is closely related to the spectral decomposition. We explore covariance matrix estimation by imposing L1 regularization on the entries of Cholesky factor matrices, and find the estimates from this approach are not sensitive to the order of variables. A given order of variables is a prerequisite in the application of the modified Cholesky decomposition, while in practice, information on the order of variables is often unknown. We take advantage of this property to remove the requirement of order information, and propose an order-invariant covariance matrix estimate by refining estimates corresponding to different orders of variables. The refinement not only guarantees the positive definiteness of the estimated covariance matrix, but also is applicable in general situations without the order of variables being pre-specified. The refined estimate can be approximated by combining only a moderate number of representative estimates.
Numerical simulations are conducted to evaluate the performance of the proposed method in comparison with several other estimates. By applying the matrix exponential technique, the problem of estimating positive definite covariance matrices is transformed into a problem of estimating symmetric matrices. There are close connections between covariance matrices and their logarithm matrices, and thus pursuing a matrix logarithm with certain properties helps restore the original covariance matrix. The covariance matrix estimate from applying L1 regularization to the entries of the matrix logarithm is compared to some other estimates in simulation studies and real data analysis.
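
The matrix-logarithm idea described above can be sketched with SciPy: regularize in log-space, where symmetry is the only constraint, then map back with the matrix exponential, which returns a positive definite matrix automatically. Soft-thresholding with a fixed lam stands in here for the thesis's L1-penalized fit; both lam and the data are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import logm, expm

rng = np.random.default_rng(6)
n, p = 100, 20
X = rng.standard_normal((n, p))
S = X.T @ X / n                     # sample covariance (positive definite, n > p)

L = logm(S).real                    # matrix logarithm of the sample covariance
lam = 0.05                          # hypothetical penalty level
L_reg = np.sign(L) * np.maximum(np.abs(L) - lam, 0.0)   # soft-threshold entries
L_reg = (L_reg + L_reg.T) / 2       # keep the log-matrix exactly symmetric

Sigma_hat = expm(L_reg)             # back-transform: positive definite by construction

print(np.allclose(Sigma_hat, Sigma_hat.T))         # True
print(np.linalg.eigvalsh(Sigma_hat).min() > 0)     # True
```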