EBookClubs

Read Books & Download eBooks Full Online

Book Nonconvex Optimization for Low rank Matrix Related Problems

Download or read book Nonconvex Optimization for Low rank Matrix Related Problems written by Zhenzhen Li and published by . This book was released on 2020 with total page 170 pages. Available in PDF, EPUB and Kindle. Book excerpt:

Book Optimization on Low Rank Nonconvex Structures

Download or read book Optimization on Low Rank Nonconvex Structures written by Hiroshi Konno and published by Springer Science & Business Media. This book was released on 2013-12-01 with total page 462 pages. Available in PDF, EPUB and Kindle. Book excerpt: Global optimization is one of the fastest developing fields in mathematical optimization. In fact, an increasing number of remarkably efficient deterministic algorithms have been proposed in the last ten years for solving several classes of large scale specially structured problems encountered in such areas as chemical engineering, financial engineering, location and network optimization, production and inventory control, engineering design, computational geometry, and multi-objective and multi-level optimization. These new developments motivated the authors to write a new book devoted to global optimization problems with special structures. Most of these problems, though highly nonconvex, can be characterized by the property that they reduce to convex minimization problems when some of the variables are fixed. A number of recently developed algorithms have been proved surprisingly efficient for handling typical classes of problems exhibiting such structures, namely low rank nonconvex structures. Audience: The book will serve as a fundamental reference book for all those who are interested in mathematical optimization.

Book Optimality Guarantees for Non convex Low Rank Matrix Recovery Problems

Download or read book Optimality Guarantees for Non convex Low Rank Matrix Recovery Problems written by Christopher Dale White and published by . This book was released on 2015 with total page 196 pages. Available in PDF, EPUB and Kindle. Book excerpt: Low rank matrices lie at the heart of many techniques in scientific computing and machine learning. In this thesis, we examine various scenarios in which we seek to recover an underlying low rank matrix from compressed or noisy measurements. Specifically, we consider the recovery of a rank-r positive semidefinite matrix XX^T ∈ ℝ^{n×n} from m scalar measurements of the form [mathematic equation] via minimization of the natural ℓ2 loss function [mathematic equation]; we also analyze the quadratic nonnegative matrix factorization (QNMF) approach to clustering, where the matrix to be factorized is the transition matrix for a reversible Markov chain. In all of these instances, the optimization problem we wish to solve has many local optima and is highly non-convex. Instead of analyzing convex relaxations, which tend to be complicated and computationally expensive, we operate directly on the natural non-convex problems and prove both local and global optimality guarantees for a family of algorithms.
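
The recovery procedure described above is typically carried out by gradient descent directly on an n × r factor U so that UU^T approximates XX^T. The sketch below assumes the scalar measurements take the rank-one form b_i = a_i^T XX^T a_i with known vectors a_i (the excerpt elides the exact measurement model) and uses a simple spectral initialization; it is an illustrative sketch, not the algorithm analyzed in the thesis.

```python
import numpy as np

def factored_gradient_descent(A, b, r, steps=500, lr=1e-3):
    """Minimize the natural l2 loss f(U) = (1/4m) * sum_i (a_i^T U U^T a_i - b_i)^2
    over n x r factors U, so that U U^T approximates the rank-r PSD matrix XX^T.
    A: (m, n) matrix whose rows are the measurement vectors a_i.
    b: (m,) vector of scalar measurements.  Step size lr may need tuning."""
    m, n = A.shape
    # Spectral initialization: top-r eigenpairs of (1/m) * sum_i b_i a_i a_i^T.
    Y = (A.T * b) @ A / m
    vals, vecs = np.linalg.eigh(Y)                    # ascending eigenvalues
    U = vecs[:, -r:] * np.sqrt(np.maximum(vals[-r:], 0.0))
    for _ in range(steps):
        resid = np.einsum('ij,jk,ik->i', A, U @ U.T, A) - b   # a_i^T UU^T a_i - b_i
        grad = (A.T * resid) @ A @ U / m                       # (1/m) sum_i resid_i a_i a_i^T U
        U -= lr * grad
    return U @ U.T
```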

Book Non convex Optimization for Machine Learning

Download or read book Non convex Optimization for Machine Learning written by Prateek Jain and published by Foundations and Trends in Machine Learning. This book was released on 2017-12-04 with total page 218 pages. Available in PDF, EPUB and Kindle. Book excerpt: Non-convex Optimization for Machine Learning takes an in-depth look at the basics of non-convex optimization with applications to machine learning. It introduces the rich literature in this area and equips the reader with the tools and techniques needed to apply and analyze simple but powerful procedures for non-convex problems. Non-convex Optimization for Machine Learning is as self-contained as possible while not losing focus on the main topic of non-convex optimization techniques. The monograph initiates the discussion with entire chapters devoted to a tutorial-like treatment of basic concepts in convex analysis and optimization, as well as their non-convex counterparts. The monograph concludes with a look at four interesting applications in the areas of machine learning and signal processing, exploring how the non-convex optimization techniques introduced earlier can be used to solve these problems. The monograph also contains, for each of the topics discussed, exercises and figures designed to engage the reader, as well as extensive bibliographic notes pointing towards classical works and recent advances. Non-convex Optimization for Machine Learning can be used for a semester-length course on the basics of non-convex optimization with applications to machine learning. On the other hand, it is also possible to cherry-pick individual portions, such as the chapter on sparse recovery or the EM algorithm, for inclusion in a broader course. Several courses, such as those in machine learning, optimization, and signal processing, may benefit from the inclusion of such topics.
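
As a flavor of the "simple but powerful procedures" treated in the monograph's sparse recovery chapter, here is a minimal sketch of iterative hard thresholding, i.e., projected gradient descent onto the set of s-sparse vectors; the function name and step-size choice are my own, not code from the book.

```python
import numpy as np

def iterative_hard_thresholding(A, b, s, steps=200, eta=None):
    """Recover an s-sparse x from b = A x via projected gradient descent:
    x <- H_s(x + eta * A^T (b - A x)), where H_s keeps the s largest entries."""
    m, n = A.shape
    if eta is None:
        eta = 1.0 / np.linalg.norm(A, 2) ** 2     # step size from the spectral norm of A
    x = np.zeros(n)
    for _ in range(steps):
        x = x + eta * A.T @ (b - A @ x)           # gradient step on 0.5*||Ax - b||^2
        keep = np.argsort(np.abs(x))[-s:]         # indices of the s largest magnitudes
        mask = np.zeros(n, dtype=bool)
        mask[keep] = True
        x[~mask] = 0.0                            # hard-threshold everything else
    return x
```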

Book Non convex Optimization Methods for Sparse and Low rank Reconstruction

Download or read book Non convex Optimization Methods for Sparse and Low rank Reconstruction written by Penghang Yin and published by . This book was released on 2016 with total page 93 pages. Available in PDF, EPUB and Kindle. Book excerpt: An algorithmic framework, based on the difference of convex functions algorithm, is proposed for minimizing the difference of the ℓ1 and ℓ2 norms (ℓ1-2 minimization) as well as a wide class of concave sparse metrics for compressed sensing problems. The resulting algorithm iterates over a sequence of ℓ1 minimization problems. An exact sparse recovery theory is established to show that the proposed framework always improves on basis pursuit (ℓ1 minimization) and inherits robustness from it. Numerical examples on success rates of sparse solution recovery further illustrate that, unlike most existing non-convex compressed sensing solvers in the literature, our method always outperforms basis pursuit, no matter how ill-conditioned the measurement matrix is. As the counterpart of ℓ1-2 minimization for low-rank matrix recovery, we present a phase retrieval method via minimization of the difference of the trace and Frobenius norms, which we call PhaseLiftOff. The associated least squares minimization with this penalty as regularization is equivalent to the original rank-one least squares problem under a mild condition on the measurement noise. Numerical results show that PhaseLiftOff outperforms the convex PhaseLift and its non-convex variant (log-determinant regularization), and successfully recovers signals near the theoretical lower limit on the number of measurements in the noiseless setting.
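
The algorithmic framework described above iterates ℓ1 subproblems in which the concave −ℓ2 term is linearized at the current iterate. A minimal sketch of that difference-of-convex iteration for the equality-constrained problem min ‖x‖₁ − ‖x‖₂ s.t. Ax = b is given below; the use of cvxpy for the ℓ1 subproblem is an assumption for illustration, not the author's solver.

```python
import numpy as np
import cvxpy as cp

def l1_minus_l2_dca(A, b, iters=10):
    """Minimize ||x||_1 - ||x||_2 subject to A x = b by the difference-of-convex
    functions algorithm: at iterate x_k, linearize -||x||_2 by -<x, x_k/||x_k||_2>
    and solve the resulting convex l1 subproblem."""
    m, n = A.shape
    x_k = np.zeros(n)                  # with x_k = 0 the first subproblem is basis pursuit
    for _ in range(iters):
        nrm = np.linalg.norm(x_k)
        q = x_k / nrm if nrm > 0 else np.zeros(n)
        x = cp.Variable(n)
        prob = cp.Problem(cp.Minimize(cp.norm1(x) - q @ x), [A @ x == b])
        prob.solve()
        x_k = x.value
    return x_k
```

Because the first subproblem (with a zero linearization point) is exactly basis pursuit, each later iterate can only decrease the ℓ1-2 objective, matching the "always improves on basis pursuit" claim in the excerpt.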

Book Handbook of Robust Low Rank and Sparse Matrix Decomposition

Download or read book Handbook of Robust Low Rank and Sparse Matrix Decomposition written by Thierry Bouwmans and published by CRC Press. This book was released on 2016-05-27 with total page 553 pages. Available in PDF, EPUB and Kindle. Book excerpt: Handbook of Robust Low-Rank and Sparse Matrix Decomposition: Applications in Image and Video Processing shows you how robust subspace learning and tracking by decomposition into low-rank and sparse matrices provide a suitable framework for computer vision applications. Incorporating both existing and new ideas, the book conveniently gives you one-stop access to a number of different decompositions, algorithms, implementations, and benchmarking techniques. Divided into five parts, the book begins with an overall introduction to robust principal component analysis (PCA) via decomposition into low-rank and sparse matrices. The second part addresses robust matrix factorization/completion problems while the third part focuses on robust online subspace estimation, learning, and tracking. Covering applications in image and video processing, the fourth part discusses image analysis, image denoising, motion saliency detection, video coding, key frame extraction, and hyperspectral video processing. The final part presents resources and applications in background/foreground separation for video surveillance. With contributions from leading teams around the world, this handbook provides a complete overview of the concepts, theories, algorithms, and applications related to robust low-rank and sparse matrix decompositions. It is designed for researchers, developers, and graduate students in computer vision, image and video processing, real-time architecture, machine learning, and data mining.
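
The low-rank plus sparse decomposition at the core of the handbook is commonly computed by principal component pursuit, minimizing ‖L‖_* + λ‖S‖₁ subject to L + S = M. A minimal ADMM-style sketch (singular value thresholding for L, entrywise soft thresholding for S) is shown below; the parameter defaults follow common practice and are not taken from the book.

```python
import numpy as np

def robust_pca(M, lam=None, mu=None, iters=200):
    """Decompose M into low-rank L plus sparse S via an ADMM-style scheme for
    principal component pursuit: min ||L||_* + lam * ||S||_1  s.t.  L + S = M."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.sum(np.abs(M))
    soft = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                                  # scaled dual variable
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt      # singular value thresholding
        S = soft(M - L + Y / mu, lam / mu)                # entrywise soft thresholding
        Y = Y + mu * (M - L - S)                          # dual update
    return L, S
```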

Book Optimization Algorithms on Matrix Manifolds

Download or read book Optimization Algorithms on Matrix Manifolds written by P.-A. Absil and published by Princeton University Press. This book was released on 2009-04-11 with total page 240 pages. Available in PDF, EPUB and Kindle. Book excerpt: Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction--illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary to algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra. Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis. It can serve as a graduate-level textbook and will be of interest to applied mathematicians, engineers, and computer scientists.
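
To illustrate how a classical method such as steepest descent is generalized to a manifold, the sketch below runs Riemannian gradient ascent on the unit sphere to compute a leading eigenvector, one of the simplest of the eigenspace problems the book uses as examples. It is a hedged illustration (tangent-space projection followed by a normalization retraction), not code reproduced from the book.

```python
import numpy as np

def leading_eigvec_riemannian(A, steps=500, eta=0.1):
    """Maximize f(x) = x^T A x over the unit sphere by Riemannian gradient ascent:
    project the Euclidean gradient onto the tangent space at x, take a step, then
    retract back to the sphere by renormalizing.  Assumes A is symmetric and
    reasonably scaled; the step size eta may need tuning."""
    n = A.shape[0]
    x = np.random.default_rng(0).standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(steps):
        egrad = 2.0 * A @ x                      # Euclidean gradient of x^T A x
        rgrad = egrad - (x @ egrad) * x          # projection onto the tangent space at x
        x = x + eta * rgrad
        x /= np.linalg.norm(x)                   # retraction to the sphere
    return x
```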

Book Handbook of Variational Methods for Nonlinear Geometric Data

Download or read book Handbook of Variational Methods for Nonlinear Geometric Data written by Philipp Grohs and published by Springer Nature. This book was released on 2020-04-03 with total page 701 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book covers different, current research directions in the context of variational methods for non-linear geometric data. Each chapter is authored by leading experts in the respective discipline and provides an introduction, an overview and a description of the current state of the art. Non-linear geometric data arises in various applications in science and engineering. Examples of nonlinear data spaces are diverse and include, for instance, nonlinear spaces of matrices, spaces of curves, shapes as well as manifolds of probability measures. Applications can be found in biology, medicine, product engineering, geography and computer vision, for instance. Variational methods, on the other hand, have evolved into being amongst the most powerful tools for applied mathematics. They involve techniques from various branches of mathematics such as statistics, modeling, optimization, numerical mathematics and analysis. The vast majority of research on variational methods, however, is focused on data in linear spaces. Variational methods for non-linear data are currently an emerging research topic. As a result, and since such methods involve various branches of mathematics, there is a plethora of different, recent approaches dealing with different aspects of variational methods for nonlinear geometric data. Research results are rather scattered and appear in journals of different mathematical communities. The main purpose of the book is to account for that by providing, for the first time, a comprehensive collection of different research directions and existing approaches in this context. It is organized in such a way that leading researchers from the different fields provide an introductory overview of recent research directions in their respective discipline. As such, the book is a unique reference work both for newcomers to the field of variational methods for non-linear geometric data and for established experts who aim to exploit new research directions or collaborations. Chapter 9 of this book is available open access under a CC BY 4.0 license at link.springer.com.

Book Statistical Inference and Optimization for Low rank Matrix and Tensor Learning

Download or read book Statistical Inference and Optimization for Low rank Matrix and Tensor Learning written by Yuetian Luo (Ph.D.) and published by . This book was released on 2022 with total page 0 pages. Available in PDF, EPUB and Kindle. Book excerpt: High dimensional statistical problems with matrix or tensor type data are ubiquitous in modern data analysis. In many applications, the dimension of the matrix or tensor is high and much bigger than the sample size, so structural assumptions are often imposed to ensure the problem is well-posed. One of the most popular structures in matrix and tensor data analysis is low-rankness. In this thesis, we make contributions to statistical inference and optimization in low-rank matrix and tensor data analysis from the following three aspects. First, first-order algorithms have been the workhorse in modern data analysis, including matrix and tensor problems, for their simplicity and efficiency; second-order algorithms, by contrast, suffer from high computational costs and instability. The first part of the thesis explores the following question: can we develop provably efficient second-order algorithms for high-dimensional matrix and tensor problems with low-rank structures? We provide a positive answer to this question, where the key idea is to exploit the smooth Riemannian structure of the sets of low-rank matrices and tensors and its connection to second-order Riemannian optimization methods. In particular, we demonstrate that for a large class of tensor-on-tensor regression problems, the Riemannian Gauss-Newton algorithm is computationally fast and achieves provable second-order convergence. We also discuss the case where the intrinsic rank of the parameter matrix/tensor is unknown and a natural rank overspecification is implemented. In the second part of the thesis, we explore an interesting question: is there any connection between different non-convex optimization approaches for solving general low-rank matrix optimization problems? We find that, from a geometric point of view, the common non-convex factorization formulation is closely connected to the Riemannian formulation, and there exists an equivalence between them. Moreover, we discover that two notable Riemannian formulations, i.e., formulations under the Riemannian embedded and quotient geometries, are also closely related from a geometric point of view. The final part of the thesis studies an intriguing phenomenon in high dimensional statistical problems, statistical and computational trade-offs, which refers to the commonly appearing gaps between the different signal-to-noise ratio thresholds that make a problem information-theoretically solvable versus polynomial-time solvable. Here we focus on the statistical-computational trade-offs induced by tensor structures. We provide rigorous evidence for the computational barriers in two important classes of problems: tensor clustering and tensor regression. We establish these computational limits via average-case reductions and arguments based on the restricted class of low-degree polynomials.

Book Provable Non convex Optimization for Learning Parametric Models

Download or read book Provable Non convex Optimization for Learning Parametric Models written by Kai Zhong (Ph. D.) and published by . This book was released on 2018 with total page 866 pages. Available in PDF, EPUB and Kindle. Book excerpt: Non-convex optimization plays an important role in recent advances of machine learning. A large number of machine learning tasks are performed by solving a non-convex optimization problem, which is generally NP-hard. Heuristics, such as stochastic gradient descent, are employed to solve non-convex problems and work decently well in practice despite the lack of general theoretical guarantees. In this thesis, we study a series of non-convex optimization strategies and prove that they lead to the global optimal solution for several machine learning problems, including mixed linear regression, one-hidden-layer (convolutional) neural networks, non-linear inductive matrix completion, and low-rank matrix sensing. At a high level, we show that the non-convex objectives formulated in the above problems have a large basin of attraction around the global optima when the data has benign statistical properties. Therefore, local search heuristics, such as gradient descent or alternating minimization, are guaranteed to converge to the global optima if initialized properly. Furthermore, we show that spectral methods can efficiently initialize the parameters such that they fall into the basin of attraction. Experiments on synthetic datasets and real applications are carried out to justify our theoretical analyses and illustrate the superiority of our proposed methods.
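
As one concrete instance of the "properly initialized local search" recipe described above, here is a minimal sketch of alternating minimization for two-component mixed linear regression: assign each sample to the better-fitting regressor, then refit both by least squares. The random initialization stands in for the spectral initialization the thesis analyzes, so this is illustrative only.

```python
import numpy as np

def mixed_linear_regression(X, y, iters=50, seed=0):
    """Alternating minimization for two-component mixed linear regression:
    assign each sample to the regressor with the smaller residual, then refit
    both regressors by least squares on their assigned samples."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w1 = rng.standard_normal(d)          # stand-in for a spectral initialization
    w2 = rng.standard_normal(d)
    for _ in range(iters):
        r1 = (y - X @ w1) ** 2
        r2 = (y - X @ w2) ** 2
        assign = r1 <= r2                # True -> sample attributed to regressor 1
        if assign.any():
            w1, *_ = np.linalg.lstsq(X[assign], y[assign], rcond=None)
        if (~assign).any():
            w2, *_ = np.linalg.lstsq(X[~assign], y[~assign], rcond=None)
    return w1, w2
```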

Book Nonconvex Optimization and Model Representation with Applications in Control Theory and Machine Learning

Download or read book Nonconvex Optimization and Model Representation with Applications in Control Theory and Machine Learning written by Yue Sun and published by . This book was released on 2022 with total page 0 pages. Available in PDF, EPUB and Kindle. Book excerpt: In control and machine learning, the primary goal is to learn the models that make predictions or decisions and act in the world. This thesis covers two important aspects of control theory and machine learning: model structures that allow low training and generalization error with few samples (i.e., low sample complexity), and convergence guarantees for first-order optimization algorithms applied to nonconvex optimization. If the model and the training algorithm exploit knowledge of the structure of the data (such as sparsity or low-rankness), the model can be learned with low sample complexity. We present two results: the Hankel nuclear norm regularization method for learning a low-order system, and the overparameterized representation for linear meta-learning. We study dynamical system identification in the first result. We assume the true system order is low. A low system order means that the state can be represented by a low-dimensional vector, and the system corresponds to a low-rank Hankel matrix. Low-rankness is known to be encouraged by nuclear norm regularized estimators in matrix completion theory. We apply a nuclear norm regularized estimator to the Hankel matrix and show that it requires fewer samples than the ordinary least squares estimator. We study linear meta-learning in the second part. The meta-learning algorithm contains two steps: learning a large model in the representation learning stage, and fine-tuning the model in the few-shot learning stage. The few-shot dataset contains few samples, and to avoid overfitting, we need a fine-tuning algorithm that uses the information from representation learning. We generalize the subspace-based model in prior art to a Gaussian model and describe the overparameterized meta-learning procedure. We show that feature-task alignment reduces the sample complexity in representation learning, and that the optimal task representation is overparameterized. First-order optimization methods, such as gradient-based methods, are widely used in machine learning thanks to their simplicity of implementation and fast convergence. However, the objective function in machine learning can be nonconvex, and first-order methods only carry the theoretical guarantee of converging to a stationary point, rather than a local or global minimum. We carry out a more refined analysis of these convergence guarantees and present two results: the convergence of a perturbed gradient descent approach to a local minimum on a Riemannian manifold, and a unified global convergence result for policy gradient descent on linear system control problems. We study how Riemannian gradient descent converges to an approximate local minimum in the first part. While it is well known that perturbed gradient descent escapes saddle points in Euclidean space, less is known about the concrete convergence rate when Riemannian gradient descent is applied on a manifold. In the first result, we show that perturbed Riemannian gradient descent converges to an approximate local minimum and reveal the relation between the convergence rate and the manifold curvature. We study policy gradient descent applied to control in the second part.
Many control problems are revisited in the context of the recent boom in reinforcement learning (RL); however, there is a gap between the RL and control methodologies: policy gradient methods in RL apply first-order updates to a nonconvex landscape, and it is hard to show that they converge to the global minimum, whereas control theory has devised reparameterizations that make the problem convex and are proven to find the globally optimal controller in polynomial time. Aiming to interpret the success of the nonconvex method, in the second result we connect nonconvex policy gradient descent, applied to a collection of control problems, with their convex parameterizations, and propose a unified proof of the global convergence of policy gradient descent.
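
A minimal sketch of the Hankel nuclear norm regularized estimator described in the first part of this excerpt is given below: the impulse response h is fit to input-output data by least squares plus a nuclear norm penalty on its Hankel matrix, which encourages a low system order. The horizon T, the Hankel shape, and the use of cvxpy are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import toeplitz

def hankel_regularized_impulse_response(u, y, T, lam=1.0):
    """Estimate the first T impulse-response coefficients h of an LTI system
    (assumed initially at rest) from input u and output y by solving
        minimize ||y - U h||_2^2 + lam * ||Hankel(h)||_*
    where a low system order corresponds to a low-rank Hankel matrix of h."""
    N = len(y)
    # Lower-triangular Toeplitz convolution matrix: (U @ h)[t] = sum_k u[t-k] h[k].
    U = toeplitz(u[:N], np.r_[u[0], np.zeros(T - 1)])
    h = cp.Variable(T)
    rows = T // 2                                   # Hankel matrix of size rows x cols
    cols = T - rows + 1
    H = cp.vstack([h[i:i + cols] for i in range(rows)])
    prob = cp.Problem(cp.Minimize(cp.sum_squares(y - U @ h)
                                  + lam * cp.norm(H, "nuc")))
    prob.solve()
    return h.value
```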

Book Nonconvex Matrix Completion

Download or read book Nonconvex Matrix Completion written by Ji Chen and published by . This book was released on 2020 with total page pages. Available in PDF, EPUB and Kindle. Book excerpt: Techniques of matrix completion aim to impute a large portion of missing entries in a data matrix through a small portion of observed ones, with broad machine learning applications including collaborative filtering, system identification, global positioning, etc. This dissertation analyzes the nonconvex matrix completion problem from geometric and algorithmic perspectives. The first part of the dissertation, i.e., Chapters 2 and 3, focuses on analyzing the nonconvex matrix completion problem from the geometric perspective. Geometric analysis has been conducted on various low-rank recovery problems, including phase retrieval, matrix factorization and matrix completion, in the last few years. Taking matrix completion as an example, under assumptions on the underlying matrix and the sampling rate, all the local minima of the nonconvex objective function were shown to be global minima, i.e., nonconvex optimization can recover the underlying matrix exactly. In Chapter 2, we propose a model-free framework for nonconvex matrix completion: we characterize how well local-minimum based low-rank factorization approximates the underlying matrix without any assumption on it. As an implication, a corollary of our main theorem improves the state-of-the-art sampling rate required for nonconvex matrix completion to rule out spurious local minima. In practice, additional structures are usually employed in order to improve the accuracy of matrix completion. Examples include subspace constraints formed by side information in collaborative filtering, and skew symmetry in pairwise ranking. Chapter 3 performs a unified geometric analysis of nonconvex matrix completion with linearly parameterized factorization, which covers the aforementioned examples as special cases. Uniform upper bounds for estimation errors are established for all local minima, provided assumptions on the sampling rate and the underlying matrix are satisfied. The second part of the dissertation (Chapter 4) focuses on algorithmic analysis of nonconvex matrix completion. Row-wise projection/regularization has become a widely adopted assumption due to its convenience for analysis, though it has been observed to be unnecessary in numerical simulations. Recently the gap between theory and practice has been overcome for positive semidefinite matrix completion via the so-called leave-one-out analysis. In Chapter 4, we extend the leave-one-out analysis to the rectangular case, and, more significantly, improve the sampling rate required for convergence guarantees.
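
The "local-minimum based low-rank factorization" studied in the dissertation refers to first-order methods run on the factorized objective over observed entries. A minimal sketch of plain gradient descent on f(L, R) = 0.5‖P_Ω(LR^T − M)‖_F² follows; the uniform observation mask and the absence of row-wise regularization are assumptions made for illustration.

```python
import numpy as np

def matrix_completion_gd(M_obs, mask, r, steps=1000, lr=1e-2, seed=0):
    """Gradient descent on the factorized matrix completion objective
        f(L, R) = 0.5 * || P_Omega(L R^T - M) ||_F^2,
    where mask is the boolean matrix of observed entries (the set Omega) and
    M_obs holds the observed values (unobserved entries are ignored)."""
    rng = np.random.default_rng(seed)
    m, n = M_obs.shape
    L = 0.1 * rng.standard_normal((m, r))
    R = 0.1 * rng.standard_normal((n, r))
    for _ in range(steps):
        resid = mask * (L @ R.T - M_obs)     # residual on observed entries only
        gL, gR = resid @ R, resid.T @ L      # gradients w.r.t. L and R
        L, R = L - lr * gL, R - lr * gR
    return L @ R.T
```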

Book High Dimensional Optimization and Probability

Download or read book High Dimensional Optimization and Probability written by Ashkan Nikeghbali and published by Springer Nature. This book was released on 2022-08-04 with total page 417 pages. Available in PDF, EPUB and Kindle. Book excerpt: This volume presents extensive research devoted to a broad spectrum of mathematics with emphasis on interdisciplinary aspects of Optimization and Probability. Chapters also emphasize applications to Data Science, a timely field with a high impact in our modern society. The discussion presents modern, state-of-the-art, research results and advances in areas including non-convex optimization, decentralized distributed convex optimization, topics on surrogate-based reduced dimension global optimization in process systems engineering, the projection of a point onto a convex set, optimal sampling for learning sparse approximations in high dimensions, the split feasibility problem, higher order embeddings, codifferentials and quasidifferentials of the expectation of nonsmooth random integrands, adjoint circuit chains associated with a random walk, analysis of the trade-off between sample size and precision in truncated ordinary least squares, spatial deep learning, efficient location-based tracking for IoT devices using compressive sensing and machine learning techniques, and nonsmooth mathematical programs with vanishing constraints in Banach spaces. The book is a valuable source for graduate students as well as researchers working on Optimization, Probability and their various interconnections with a variety of other areas. Chapter 12 is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.

Book Low Rank Semidefinite Programming

Download or read book Low Rank Semidefinite Programming written by Alex Lemon and published by Now Publishers. This book was released on 2016-05-04 with total page 180 pages. Available in PDF, EPUB and Kindle. Book excerpt: Finding low-rank solutions of semidefinite programs is important in many applications. For example, semidefinite programs that arise as relaxations of polynomial optimization problems are exact relaxations when the semidefinite program has a rank-1 solution. Unfortunately, computing a minimum-rank solution of a semidefinite program is an NP-hard problem. This monograph reviews the theory of low-rank semidefinite programming, presenting theorems that guarantee the existence of a low-rank solution, heuristics for computing low-rank solutions, and algorithms for finding low-rank approximate solutions. It then presents applications of the theory to trust-region problems and signal processing.
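
One widely used heuristic for computing low-rank solutions of semidefinite programs, in the spirit of those the monograph reviews, is the Burer-Monteiro factorization X = VV^T with V restricted to n × r. The sketch below applies it to an SDP with a unit diagonal constraint (the max-cut relaxation shape) via projected gradient ascent on the rows of V; this is my own illustration, not an algorithm taken from the monograph.

```python
import numpy as np

def burer_monteiro_unit_diag(C, r, steps=500, eta=0.1, seed=0):
    """Burer-Monteiro heuristic for the SDP
        maximize <C, X>  s.t.  diag(X) = 1,  X PSD,
    restricted to rank-r solutions X = V V^T, i.e., V has unit-norm rows.
    Assumes C is symmetric; uses projected gradient ascent on the rows of V."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    V = rng.standard_normal((n, r))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    for _ in range(steps):
        G = 2.0 * C @ V                                  # gradient of <C, V V^T> w.r.t. V
        V = V + eta * G
        V /= np.linalg.norm(V, axis=1, keepdims=True)    # project rows back to unit norm
    return V @ V.T
```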

Book Efficient Non convex Algorithms for Large scale Learning Problems

Download or read book Efficient Non convex Algorithms for Large scale Learning Problems written by Dohyung Park and published by . This book was released on 2016 with total page 370 pages. Available in PDF, EPUB and Kindle. Book excerpt: The emergence of modern large-scale datasets has led to a huge interest in the problem of learning hidden complex structures. Not only can models based on such structures fit the datasets, they also have good generalization performance in the regime where the number of samples is limited compared to the dimensionality. However, one of the main issues is finding computationally efficient algorithms to learn the models. While convex relaxation provides polynomial-time algorithms with strong theoretical guarantees, there is demand for even faster algorithms with competitive performance, due to the large volume of practical datasets. In this dissertation, we consider three types of algorithms, greedy methods, alternating minimization, and non-convex gradient descent, that have been key non-convex approaches to tackling large-scale learning problems. For each theme, we focus on a specific problem and design an algorithm based on these ideas. We begin with the problem of subspace clustering, where one needs to learn underlying unions of subspaces from a set of data points around the subspaces. We develop two greedy algorithms that can perfectly cluster the points and recover the subspaces. The next problem of interest is collaborative ranking, where underlying low-rank preference matrices are to be learned from pairwise comparisons of the entries. We present an alternating minimization based algorithm. Finally, we develop a non-convex gradient descent algorithm for general low-rank matrix optimization problems. All of these algorithms exhibit low computational complexity as well as competitive statistical performance, which makes them scalable and suitable for a variety of practical applications. Analysis of the algorithms provides theoretical guarantees of their performance.
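
Of the three algorithmic themes mentioned, alternating minimization is the simplest to illustrate. Below is a generic alternating least squares sketch for low-rank matrix completion, fixing one factor and solving a small ridge regression for each row of the other; it is not the dissertation's collaborative ranking algorithm, just a representative of the approach.

```python
import numpy as np

def als_matrix_completion(M_obs, mask, r, iters=20, ridge=1e-3, seed=0):
    """Alternating least squares for low-rank matrix completion: with R fixed,
    each row of L solves a small ridge regression over its observed columns,
    and symmetrically for R with L fixed."""
    rng = np.random.default_rng(seed)
    m, n = M_obs.shape
    L = rng.standard_normal((m, r))
    R = rng.standard_normal((n, r))
    I = ridge * np.eye(r)
    for _ in range(iters):
        for i in range(m):
            cols = mask[i]                    # observed columns in row i
            if cols.any():
                Rc = R[cols]
                L[i] = np.linalg.solve(Rc.T @ Rc + I, Rc.T @ M_obs[i, cols])
        for j in range(n):
            rows = mask[:, j]                 # observed rows in column j
            if rows.any():
                Lr = L[rows]
                R[j] = np.linalg.solve(Lr.T @ Lr + I, Lr.T @ M_obs[rows, j])
    return L @ R.T
```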

Book Generalized Low Rank Models

Download or read book Generalized Low Rank Models written by Madeleine Udell and published by . This book was released on 2015 with total page pages. Available in PDF, EPUB and Kindle. Book excerpt: Principal components analysis (PCA) is a well-known technique for approximating a tabular data set by a low rank matrix. This dissertation extends the idea of PCA to handle arbitrary data sets consisting of numerical, Boolean, categorical, ordinal, and other data types. This framework encompasses many well known techniques in data analysis, such as nonnegative matrix factorization, matrix completion, sparse and robust PCA, k-means, k-SVD, and maximum margin matrix factorization. The method handles heterogeneous data sets, and leads to coherent schemes for compressing, denoising, and imputing missing entries across all data types simultaneously. It also admits a number of interesting interpretations of the low rank factors, which allow clustering of examples or of features. We propose several parallel algorithms for fitting generalized low rank models, and describe implementations and numerical results.
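
A generalized low rank model pairs each column with a loss suited to its data type and fits two factors by first-order updates. The sketch below handles numeric columns with quadratic loss and Boolean (0/1) columns with logistic loss via a joint gradient step on both factors; the column partition, step size, and absence of regularizers are illustrative assumptions, and the dissertation's algorithms cover many more losses, regularizers, and parallel fitting schemes.

```python
import numpy as np

def glrm_numeric_boolean(A, bool_cols, r, steps=500, lr=0.01, seed=0):
    """Fit a generalized low rank model A ~ X Y^T with quadratic loss on numeric
    columns and logistic loss on Boolean (0/1) columns, listed in bool_cols."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    X = 0.1 * rng.standard_normal((m, r))    # one row of X per example
    Y = 0.1 * rng.standard_normal((n, r))    # one row of Y per feature
    is_bool = np.zeros(n, dtype=bool)
    is_bool[list(bool_cols)] = True
    for _ in range(steps):
        Z = X @ Y.T
        # dloss/dZ per column type: sigmoid(Z) - A for logistic, Z - A for quadratic.
        G = np.where(is_bool, 1.0 / (1.0 + np.exp(-Z)) - A, Z - A)
        gX, gY = G @ Y, G.T @ X              # joint gradient step on both factors
        X -= lr * gX
        Y -= lr * gY
    return X, Y
```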

Book Evaluation Complexity of Algorithms for Nonconvex Optimization

Download or read book Evaluation Complexity of Algorithms for Nonconvex Optimization written by Coralia Cartis and published by SIAM. This book was released on 2022-07-06 with total page 549 pages. Available in PDF, EPUB and Kindle. Book excerpt: A popular way to assess the “effort” needed to solve a problem is to count how many evaluations of the problem functions (and their derivatives) are required. In many cases, this is often the dominating computational cost. Given an optimization problem satisfying reasonable assumptions—and given access to problem-function values and derivatives of various degrees—how many evaluations might be required to approximately solve the problem? Evaluation Complexity of Algorithms for Nonconvex Optimization: Theory, Computation, and Perspectives addresses this question for nonconvex optimization problems, those that may have local minimizers and appear most often in practice. This is the first book on complexity to cover topics such as composite and constrained optimization, derivative-free optimization, subproblem solution, and optimal (lower and sharpness) bounds for nonconvex problems. It is also the first to address the disadvantages of traditional optimality measures and propose useful surrogates leading to algorithms that compute approximate high-order critical points, and to compare traditional and new methods, highlighting the advantages of the latter from a complexity point of view. This is the go-to book for those interested in solving nonconvex optimization problems. It is suitable for advanced undergraduate and graduate students in courses on advanced numerical analysis, data science, numerical optimization, and approximation theory.