EBookClubs

Read Books & Download eBooks Full Online

Book Acceleration and New Analysis of Convex Optimization Algorithms

Download or read book Acceleration and New Analysis of Convex Optimization Algorithms written by Lewis Liu and published by . This book was released on 2021 with total page 0 pages. Available in PDF, EPUB and Kindle. Book excerpt: Recent years have witnessed a resurgence of the Frank-Wolfe (FW) algorithm, also known as the conditional gradient method, in sparse optimization and large-scale machine learning problems with smooth convex objectives. Compared to projected or proximal gradient methods, this projection-free method saves the computational cost of orthogonal projections onto the constraint set. FW also produces solutions with sparse structure. Despite these promising properties, FW does not enjoy the optimal convergence rates achieved by projection-based accelerated methods. On the other hand, the FW algorithm is affine-covariant and enjoys accelerated convergence rates when the constraint set is strongly convex. However, these results rely on norm-dependent assumptions, usually incurring non-affine-invariant bounds, in contradiction with FW's affine-covariant property. In this work, we introduce new structural assumptions on the problem (such as directional smoothness) and derive an affine-invariant, norm-independent analysis of Frank-Wolfe. Based on our analysis, we propose an affine-invariant backtracking line-search. Interestingly, we show that typical backtracking line-search techniques using smoothness of the objective function converge to an affine-invariant stepsize, despite using affine-dependent norms in the computation of stepsizes. This indicates that we do not necessarily need to know the structure of the sets in advance to enjoy the affine-invariant accelerated rate. Additionally, we provide a promising direction for accelerating FW over strongly convex sets using duality-gap techniques and a new version of smoothness. In another line of research, we study algorithms beyond first-order methods. Quasi-Newton techniques approximate the Newton step by estimating the Hessian using the so-called secant equations. Some of these methods compute the Hessian using several secant equations but produce non-symmetric updates. Other quasi-Newton schemes, such as BFGS, enforce symmetry but cannot satisfy more than one secant equation. We propose a new type of symmetric quasi-Newton update that uses several secant equations in a least-squares sense. Our approach generalizes and unifies the design of quasi-Newton updates and satisfies provable robustness guarantees.
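The projection-free mechanics described above are easy to make concrete. Below is a minimal sketch of the standard Frank-Wolfe iteration over an ℓ1 ball; the quadratic objective, radius, and data are illustrative assumptions, not taken from the thesis. Each iteration calls a linear minimization oracle instead of a projection, and for the ℓ1 ball that oracle returns a signed coordinate vertex, which is where the sparse structure of the iterates comes from.

    import numpy as np

    def frank_wolfe_l1(grad, x0, radius=1.0, n_iters=200):
        """Frank-Wolfe (conditional gradient) over the l1 ball:
        each step solves a linear minimization oracle rather than
        computing an orthogonal projection onto the constraint set."""
        x = x0.copy()
        for t in range(n_iters):
            g = grad(x)
            # LMO: the minimizer of <g, s> over ||s||_1 <= radius is a
            # signed coordinate vertex, so each step touches one coordinate.
            i = np.argmax(np.abs(g))
            s = np.zeros_like(x)
            s[i] = -radius * np.sign(g[i])
            gamma = 2.0 / (t + 2)  # classical stepsize giving the O(1/t) rate
            x = (1 - gamma) * x + gamma * s
        return x

    # Illustrative smooth objective: f(x) = 0.5 * ||A x - b||^2
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((20, 50)), rng.standard_normal(20)
    x = frank_wolfe_l1(lambda x: A.T @ (A @ x - b), np.zeros(50), radius=2.0)
    print("nonzero coordinates:", np.count_nonzero(x))

After t iterations the iterate has at most t nonzero coordinates, which is the sparsity property the excerpt highlights, and no projection is ever computed.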

Book Lectures on Convex Optimization

Download or read book Lectures on Convex Optimization written by Yurii Nesterov and published by Springer. This book was released on 2018-11-19 with total page 603 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book provides a comprehensive, modern introduction to convex optimization, a field that is becoming increasingly important in applied mathematics, economics and finance, engineering, and computer science, notably in data science and machine learning. Written by a leading expert in the field, this book includes recent advances in the algorithmic theory of convex optimization, naturally complementing the existing literature. It contains a unified and rigorous presentation of the acceleration techniques for minimization schemes of first- and second-order. It provides readers with a full treatment of the smoothing technique, which has tremendously extended the abilities of gradient-type methods. Several powerful approaches in structural optimization, including optimization in relative scale and polynomial-time interior-point methods, are also discussed in detail. Researchers in theoretical optimization as well as professionals working on optimization problems will find this book very useful. It presents many successful examples of how to develop very fast specialized minimization algorithms. Based on the author’s lectures, it can naturally serve as the basis for introductory and advanced courses in convex optimization for students in engineering, economics, computer science and mathematics.

Book Convex Optimization Algorithms

Download or read book Convex Optimization Algorithms written by Dimitri Bertsekas and published by Athena Scientific. This book was released on 2015-02-01 with total page 576 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book provides a comprehensive and accessible presentation of algorithms for solving convex optimization problems. It relies on rigorous mathematical analysis, but also aims at an intuitive exposition that makes use of visualization where possible. This is facilitated by the extensive use of analytical and algorithmic concepts of duality, which by nature lend themselves to geometrical interpretation. The book places particular emphasis on modern developments, and their widespread applications in fields such as large-scale resource allocation problems, signal processing, and machine learning. The book is aimed at students, researchers, and practitioners, roughly at the first-year graduate level. It is similar in style to the author's 2009 "Convex Optimization Theory" book, but can be read independently. The latter book focuses on convexity theory and optimization duality, while the present book focuses on algorithmic issues. The two books share notation, and together cover the entire finite-dimensional convex optimization methodology. To facilitate readability, the statements of definitions and results of the "theory book" are reproduced without proofs in Appendix B.

Book Algorithms for Convex Optimization

Download or read book Algorithms for Convex Optimization written by Nisheeth K. Vishnoi and published by Cambridge University Press. This book was released on 2021-10-07 with total page 314 pages. Available in PDF, EPUB and Kindle. Book excerpt: In the last few years, algorithms for convex optimization have revolutionized algorithm design, both for discrete and continuous optimization problems. For problems like maximum flow, maximum matching, and submodular function minimization, the fastest algorithms involve essential methods such as gradient descent, mirror descent, interior point methods, and ellipsoid methods. The goal of this self-contained book is to enable researchers and professionals in computer science, data science, and machine learning to gain an in-depth understanding of these algorithms. The text emphasizes how to derive key algorithms for convex optimization from first principles and how to establish precise running time bounds. This modern text explains the success of these algorithms in problems of discrete optimization, as well as how these methods have significantly pushed the state of the art of convex optimization itself.
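As one concrete instance of deriving a key algorithm from first principles, here is a minimal sketch of mirror descent with the entropy mirror map over the probability simplex, where the update becomes multiplicative (exponentiated gradient). The linear objective and stepsize are illustrative assumptions, not an example taken from the book.

    import numpy as np

    def mirror_descent_simplex(grad, x0, eta=0.1, n_iters=200):
        """Mirror descent with the negative-entropy mirror map on the
        probability simplex: the mirror step is a multiplicative update
        followed by renormalization (the Bregman projection)."""
        x = x0.copy()
        for _ in range(n_iters):
            x = x * np.exp(-eta * grad(x))
            x /= x.sum()
        return x

    # Illustrative objective: f(x) = <c, x> over the simplex, whose
    # minimizer is the vertex at the smallest entry of c.
    rng = np.random.default_rng(1)
    c = rng.standard_normal(10)
    x = mirror_descent_simplex(lambda x: c, np.ones(10) / 10)
    print("mass on the best coordinate:", x[np.argmin(c)])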

Book Convex Analysis and Optimization

Download or read book Convex Analysis and Optimization written by Dimitri Bertsekas and published by Athena Scientific. This book was released on 2003-03-01 with total page 560 pages. Available in PDF, EPUB and Kindle. Book excerpt: A uniquely pedagogical, insightful, and rigorous treatment of the analytical/geometrical foundations of optimization. The book provides a comprehensive development of convexity theory, and its rich applications in optimization, including duality, minimax/saddle point theory, Lagrange multipliers, and Lagrangian relaxation/nondifferentiable optimization. It is an excellent supplement to several of our books: Convex Optimization Theory (Athena Scientific, 2009), Convex Optimization Algorithms (Athena Scientific, 2015), Nonlinear Programming (Athena Scientific, 2016), Network Optimization (Athena Scientific, 1998), and Introduction to Linear Optimization (Athena Scientific, 1997). Aside from a thorough account of convex analysis and optimization, the book aims to restructure the theory of the subject, by introducing several novel unifying lines of analysis, including:
1) A unified development of minimax theory and constrained optimization duality as special cases of duality between two simple geometrical problems.
2) A unified development of conditions for existence of solutions of convex optimization problems, conditions for the minimax equality to hold, and conditions for the absence of a duality gap in constrained optimization.
3) A unification of the major constraint qualifications allowing the use of Lagrange multipliers for nonconvex constrained optimization, using the notion of constraint pseudonormality and an enhanced form of the Fritz John necessary optimality conditions.
Among its features, the book:
a) Develops rigorously and comprehensively the theory of convex sets and functions, in the classical tradition of Fenchel and Rockafellar
b) Provides a geometric, highly visual treatment of convex and nonconvex optimization problems, including existence of solutions, optimality conditions, Lagrange multipliers, and duality
c) Includes an insightful and comprehensive presentation of minimax theory and zero sum games, and its connection with duality
d) Describes dual optimization, the associated computational methods, including the novel incremental subgradient methods, and applications in linear, quadratic, and integer programming
e) Contains many examples, illustrations, and exercises with complete solutions (about 200 pages) posted at the publisher's web site http://www.athenasc.com/convexity.html

Book Lectures on Modern Convex Optimization

Download or read book Lectures on Modern Convex Optimization written by Aharon Ben-Tal and published by SIAM. This book was released on 2001-01-01 with total page 504 pages. Available in PDF, EPUB and Kindle. Book excerpt: Here is a book devoted to well-structured and thus efficiently solvable convex optimization problems, with emphasis on conic quadratic and semidefinite programming. The authors present the basic theory underlying these problems as well as their numerous applications in engineering, including synthesis of filters, Lyapunov stability analysis, and structural design. The authors also discuss the complexity issues and provide an overview of the basic theory of state-of-the-art polynomial time interior point methods for linear, conic quadratic, and semidefinite programming. The book's focus on well-structured convex problems in conic form allows for a unified theoretical and algorithmic treatment of a wide spectrum of important optimization problems arising in applications.

Book Greed, Hedging, and Acceleration in Convex Optimization

Download or read book Greed, Hedging, and Acceleration in Convex Optimization written by Jason M. Altschuler and published by . This book was released on 2018 with total page 156 pages. Available in PDF, EPUB and Kindle. Book excerpt: This thesis revisits the well-studied and practically motivated problem of minimizing a strongly convex, smooth function with first-order information. The first main message of the thesis is that, surprisingly, algorithms which are individually suboptimal can be combined to achieve accelerated convergence rates. This phenomenon can be intuitively understood as "hedging" between safe strategies (e.g. slowly converging algorithms) and aggressive strategies (e.g. divergent algorithms) since bad cases for the former are good cases for the latter, and vice versa. Concretely, we implement the optimal hedging by simply running Gradient Descent (GD) with prudently chosen stepsizes. This result goes against the conventional wisdom that acceleration is impossible without momentum. The second main message is a universality result for quadratic optimization. We show that, roughly speaking, "most" Krylov-subspace algorithms are asymptotically optimal (in the worst-case) and "most" quadratic functions are asymptotically worst-case functions (for all algorithms). From an algorithmic perspective, this goes against the conventional wisdom that accelerated algorithms require extremely careful parameter tuning. From a lower-bound perspective, this goes against the conventional wisdom that there are relatively few "worst functions in the world" and they have lots of structure. It also goes against the conventional wisdom that a quadratic function is easier to optimize when the initialization error is more concentrated on certain eigenspaces - counterintuitively, we show that so long as this concentration is not "pathologically" extreme, this only leads to faster convergence in the beginning iterations and is irrelevant asymptotically. Part I of the thesis shows the algorithmic side of this universality by leveraging tools from potential theory and harmonic analysis. The main result is a characterization of non-adaptive randomized Krylov-subspace algorithms which asymptotically achieve the so-called "accelerated rate" in the worst case. As a special case, this recovers the known fact that GD accelerates when inverse stepsizes are i.i.d. from the Arcsine distribution. This distribution has a remarkable "equalizing" property: every quadratic function is equally easy to optimize. We interpret this as "optimal hedging" since there is no worst-case function. Leveraging the equalizing property also provides other new insights including asymptotic isotropy of the iterates around the optimum, and uniform convergence guarantees for extending our analysis to ℓ2. Part II of the thesis shows the lower-bound side of this universality by connecting quadratic optimization to the universality of orthogonal polynomials. We also characterize, for every finite number of iterations n, all worst-case quadratic functions for n iterations of any Krylov-subspace algorithm. Previously no tight constructions were known. (Note the classical construction of [Nemirovskii and Yudin, 1983] is only tight asymptotically.) As a corollary, this result also proves that randomness does not help Krylov-subspace algorithms. Combining the results in Parts I and II uncovers a duality between optimal Krylov-subspace algorithms and worst-case quadratic functions.
It also shows new close connections between quadratic optimization, orthogonal polynomials, Gaussian quadrature, Jacobi operators, and their spectral measures. Part III of the thesis extends the algorithmic techniques in Part I to convex optimization. We first show that running the aforementioned random GD algorithm accelerates on separable convex functions. This is the first convergence rate that exactly matches the classical quadratic-optimization lower bound of [Nemirovskii and Yudin, 1983] on any class of convex functions richer than quadratics. This provides partial evidence suggesting that convex optimization might be no harder than quadratic optimization. However, these techniques (provably) do not extend to general convex functions. This is roughly because they do not require all observed data to be consistent with a single valid function - we call this requirement "stitching." We turn to a semidefinite programming formulation of the worst-case rate from [Taylor et al., 2017] that ensures stitching. Using this we compute the optimal GD stepsize schedules for 1, 2, and 3 iterations, and show that they partially accelerate on general convex functions. These optimal schedules for convex optimization are remarkably different from the optimal schedules for quadratic optimization. The rate improves as the number of iterations increases, but the algebraic systems become increasingly complicated to solve and the general case eludes us.
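The Arcsine-stepsize result from Part I admits a very short demonstration. The sketch below is a hedged illustration, not the thesis's construction: the quadratic is taken diagonal (without loss of generality for quadratics), the spectrum interval [mu, L], dimension, and iteration budget are assumptions, and the inverse stepsizes are sampled from the Arcsine distribution on [mu, L] via the standard inverse-CDF transform.

    import numpy as np

    rng = np.random.default_rng(2)

    # Quadratic f(x) = 0.5 * x^T H x with spectrum in [mu, L].
    n, mu, L = 100, 0.01, 1.0
    H = np.diag(np.linspace(mu, L, n))  # diagonal WLOG for quadratics
    x = rng.standard_normal(n)

    # Inverse stepsizes i.i.d. Arcsine on [mu, L]:
    # if U ~ Uniform(0, 1), then mu + (L - mu) * sin^2(pi*U/2) is Arcsine.
    T = 500
    inv_steps = mu + (L - mu) * np.sin(np.pi * rng.uniform(size=T) / 2) ** 2

    for lam in inv_steps:
        x = x - (H @ x) / lam  # plain GD step with a random stepsize

    print("final error norm:", np.linalg.norm(x))

Some stepsizes 1/lam exceed 2/L, so individual steps can expand the error on the top of the spectrum (the "aggressive" strategies in the hedging picture); it is the mixture of safe and aggressive stepsizes that yields the accelerated behavior the thesis proves.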

Book Convex Optimization

    Book Details:
  • Author : Sébastien Bubeck
  • Publisher : Foundations and Trends (R) in Machine Learning
  • Release : 2015-11-12
  • ISBN : 9781601988607
  • Pages : 142 pages

Download or read book Convex Optimization written by Sébastien Bubeck and published by Foundations and Trends (R) in Machine Learning. This book was released on 2015-11-12 with total page 142 pages. Available in PDF, EPUB and Kindle. Book excerpt: This monograph presents the main complexity theorems in convex optimization and their corresponding algorithms. It begins with the fundamental theory of black-box optimization and proceeds to guide the reader through recent advances in structural optimization and stochastic optimization. The presentation of black-box optimization, strongly influenced by the seminal book by Nesterov, includes the analysis of cutting plane methods, as well as (accelerated) gradient descent schemes. Special attention is also given to non-Euclidean settings (relevant algorithms include Frank-Wolfe, mirror descent, and dual averaging) and their relevance in machine learning. The text provides a gentle introduction to structural optimization with FISTA (to optimize a sum of a smooth and a simple non-smooth term), saddle-point mirror prox (Nemirovski's alternative to Nesterov's smoothing), and a concise description of interior point methods. In stochastic optimization, it discusses stochastic gradient descent, mini-batches, random coordinate descent, and sublinear algorithms. It also briefly touches upon convex relaxation of combinatorial problems and the use of randomness to round solutions, as well as random-walk based methods.
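The structural-optimization portion centers on composite objectives, minimizing g(x) + h(x) with g smooth and h simple in the sense of an easy proximal map. Here is a minimal sketch of FISTA for the lasso instance of that template; the data, regularization weight, and iteration count are illustrative assumptions.

    import numpy as np

    def fista_lasso(A, b, lam, n_iters=300):
        """FISTA for min 0.5*||Ax - b||^2 + lam*||x||_1: a proximal
        gradient step evaluated at a Nesterov momentum point y."""
        L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
        x = y = np.zeros(A.shape[1])
        t = 1.0
        for _ in range(n_iters):
            z = y - A.T @ (A @ y - b) / L  # gradient step at y
            # Soft-thresholding: the prox of (lam/L)*||.||_1.
            x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            y = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum
            x, t = x_new, t_new
        return x

    rng = np.random.default_rng(3)
    A, b = rng.standard_normal((40, 100)), rng.standard_normal(40)
    x = fista_lasso(A, b, lam=0.1)
    print("nonzero coefficients:", np.count_nonzero(x))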

Book Convex Analysis and Nonlinear Optimization

Download or read book Convex Analysis and Nonlinear Optimization written by Jonathan M. Borwein and published by Springer Science & Business Media. This book was released on 2013-06-29 with total page 281 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book provides a concise, accessible account of convex analysis and its applications and extensions, for a broad audience. It can serve as a teaching text, at roughly the level of first year graduate students, since the main body of the text is self-contained, with each section rounded off by an often extensive set of optional exercises. The new edition adds material on semismooth optimization, as well as several new proofs that will make this book even more self-contained.

Book Acceleration Methods

Download or read book Acceleration Methods written by Alexandre d'Aspremont and published by . This book was released on 2021-12-15 with total page 262 pages. Available in PDF, EPUB and Kindle. Book excerpt: This monograph covers recent advances in a range of acceleration techniques frequently used in convex optimization. Using quadratic optimization problems, the authors introduce two key families of methods, namely momentum and nested optimization schemes. These methods are covered in detail and include Chebyshev Acceleration, Nonlinear Acceleration, Nesterov Acceleration, Proximal Acceleration and Catalysts, and Restart Schemes. This book provides the reader with an in-depth description of the developments in acceleration methods since the early 2000s, whilst referring the reader back to underpinning earlier work for further understanding. This topic is important in the modern-day application of convex optimization techniques in many application areas. This book is an introduction to the topic that enables the reader to quickly understand the important principles and apply the techniques to their own research.
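To make the momentum and restart ideas concrete, the sketch below combines Nesterov's accelerated gradient with a function-value restart heuristic that resets the momentum whenever the objective increases. The least-squares instance and the particular restart test are illustrative assumptions rather than a specific scheme from the monograph.

    import numpy as np

    def agd_restart(f, grad, x0, L, n_iters=500):
        """Nesterov's accelerated gradient with adaptive restart:
        momentum is rebuilt from scratch whenever f goes up."""
        x = y = x0.copy()
        t, f_prev = 1.0, np.inf
        for _ in range(n_iters):
            x_new = y - grad(y) / L  # gradient step at the momentum point
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            y = x_new + ((t - 1) / t_new) * (x_new - x)
            fx = f(x_new)
            if fx > f_prev:                   # objective increased:
                y, t_new = x_new.copy(), 1.0  # kill the momentum
            x, t, f_prev = x_new, t_new, fx
        return x

    rng = np.random.default_rng(4)
    A, b = rng.standard_normal((60, 60)), rng.standard_normal(60)
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    grad = lambda x: A.T @ (A @ x - b)
    x = agd_restart(f, grad, np.zeros(60), L=np.linalg.norm(A, 2) ** 2)
    print("residual:", np.linalg.norm(A @ x - b))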

Book Introductory Lectures on Convex Optimization

Download or read book Introductory Lectures on Convex Optimization written by Y. Nesterov and published by Springer Science & Business Media. This book was released on 2013-12-01 with total page 253 pages. Available in PDF, EPUB and Kindle. Book excerpt: It was in the middle of the 1980s, when the seminal paper by Karmarkar opened a new epoch in nonlinear optimization. The importance of this paper, containing a new polynomial-time algorithm for linear optimization problems, was not only in its complexity bound. At that time, the most surprising feature of this algorithm was that the theoretical prediction of its high efficiency was supported by excellent computational results. This unusual fact dramatically changed the style and directions of the research in nonlinear optimization. Thereafter it became more and more common that the new methods were provided with a complexity analysis, which was considered a better justification of their efficiency than computational experiments. In a new rapidly developing field, which got the name "polynomial-time interior-point methods", such a justification was obligatory. After almost fifteen years of intensive research, the main results of this development started to appear in monographs [12, 14, 16, 17, 18, 19]. Approximately at that time the author was asked to prepare a new course on nonlinear optimization for graduate students. The idea was to create a course which would reflect the new developments in the field. Actually, this was a major challenge. At the time only the theory of interior-point methods for linear optimization was polished enough to be explained to students. The general theory of self-concordant functions had appeared in print only once in the form of research monograph [12].

Book Large-Scale Convex Optimization

Download or read book Large-Scale Convex Optimization written by Ernest K. Ryu and published by Cambridge University Press. This book was released on 2022-12-01 with total page 320 pages. Available in PDF, EPUB and Kindle. Book excerpt: Starting from where a first course in convex optimization leaves off, this text presents a unified analysis of first-order optimization methods – including parallel-distributed algorithms – through the abstraction of monotone operators. With the increased computational power and availability of big data over the past decade, applied disciplines have demanded that larger and larger optimization problems be solved. This text covers the first-order convex optimization methods that are uniquely effective at solving these large-scale optimization problems. Readers will have the opportunity to construct and analyze many well-known classical and modern algorithms using monotone operators, and walk away with a solid understanding of the diverse optimization algorithms. Graduate students and researchers in mathematical optimization, operations research, electrical engineering, statistics, and computer science will appreciate this concise introduction to the theory of convex optimization algorithms.
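The monotone-operator abstraction can be illustrated with forward-backward splitting for the inclusion 0 ∈ (A + B)x: a forward step along the cocoercive operator B followed by the resolvent of A. In the sketch below, B is the gradient of a least-squares term and A is the normal cone of a box, whose resolvent is simply the projection onto the box; the instance is an illustrative assumption, not an example from the text.

    import numpy as np

    def forward_backward(x0, forward, resolvent, gamma, n_iters=300):
        """Fixed-point iteration x <- J_{gamma*A}(x - gamma*B(x)),
        i.e. forward-backward splitting for 0 in (A + B)x."""
        x = x0.copy()
        for _ in range(n_iters):
            x = resolvent(x - gamma * forward(x))
        return x

    # Illustrative instance: B = grad of 0.5*||Mx - b||^2 (cocoercive),
    # A = normal cone of the box [0, 1]^n, whose resolvent is projection.
    rng = np.random.default_rng(5)
    M, b = rng.standard_normal((30, 30)), rng.standard_normal(30)
    forward = lambda x: M.T @ (M @ x - b)
    project = lambda z: np.clip(z, 0.0, 1.0)
    gamma = 1.0 / np.linalg.norm(M, 2) ** 2  # any gamma in (0, 2/L) works
    x = forward_backward(np.zeros(30), forward, project, gamma)
    print("solution range:", x.min(), x.max())

With these choices the iteration is exactly projected gradient descent, recovered as one instance of the operator-splitting template the book develops.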

Book Convex Optimization

Download or read book Convex Optimization written by Arto Ruud and published by Nova Science Publishers. This book was released on 2019 with total page 0 pages. Available in PDF, EPUB and Kindle. Book excerpt: Over the past two decades, it has been recognized that advanced image processing techniques provide valuable information to physicians for the diagnosis, image guided therapy and surgery, and monitoring of human diseases. Convex Optimization: Theory, Methods and Applications introduces novel and sophisticated mathematical problems which encourage the development of advanced optimization and computing methods, especially convex optimization. The authors go on to study Steffensen-King-type methods of convergence to approximate a locally unique solution of a nonlinear equation, and also problems of convex optimization. Real-world applications are also provided. The following study is focused on the design and testing of a Matlab code of the Frank-Wolfe algorithm. The Nesterov step is proposed in order to accelerate the algorithm, and the results of some numerical experiments of constraint optimization are also provided. Lagrangian methods for numerical solutions to constrained convex programs are also explored. For enhanced algorithms, the traditional Lagrange multiplier update is modified to take a soft reflection across the zero boundary. This, coupled with a modified drift expression, is shown to yield improved performance. Next, Newton's mesh independence principle was used to solve a certain class of optimal design problems from earlier studies. Motivated by optimization considerations, the authors show that under the same computational cost, a finer mesh independence principle can be given than before. This compilation closes with a presentation on a local convergence analysis for eighth-order variants of Hansen-Patrick's family for approximating a locally unique solution of a nonlinear equation. The radius of convergence and computable error bounds on the distances involved are also provided.

Book A General Framework of Large-scale Convex Optimization Using Jensen Surrogates and Acceleration Techniques

Download or read book A General Framework of Large-scale Convex Optimization Using Jensen Surrogates and Acceleration Techniques written by Soysal Degirmenci and published by . This book was released on 2016 with total page 247 pages. Available in PDF, EPUB and Kindle. Book excerpt: In a world where data rates are growing faster than computing power, algorithmic acceleration based on developments in mathematical optimization plays a crucial role in narrowing the gap between the two. As the scale of optimization problems in many fields is getting larger, we need faster optimization methods that not only work well in theory, but also work well in practice by exploiting underlying state-of-the-art computing technology. In this document, we introduce a unified framework of large-scale convex optimization using Jensen surrogates, an iterative optimization method that has been used in different fields since the 1970s. After this general treatment, we present non-asymptotic convergence analysis of this family of methods and the motivation behind developing accelerated variants. Moreover, we discuss widely used acceleration techniques for convex optimization and then investigate acceleration techniques that can be used within the Jensen surrogate framework while proposing several novel acceleration methods. Furthermore, we show that proposed methods perform competitively with or better than state-of-the-art algorithms for several applications including Sparse Linear Regression (Image Deblurring), Positron Emission Tomography, X-Ray Transmission Tomography, Logistic Regression, Sparse Logistic Regression and Automatic Relevance Determination for X-Ray Transmission Tomography.
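The excerpt does not spell out the surrogates themselves, so the following is only a hedged illustration of the general idea: a classical Jensen-type separable surrogate for least squares, in the style of De Pierro's decomposition used in tomography. Jensen's inequality applied row-wise shows that the diagonal matrix D = diag(|A|ᵀ|A|1) dominates AᵀA, so minimizing the resulting separable surrogate reduces to a diagonally scaled gradient step, and the iteration decreases the objective monotonically.

    import numpy as np

    def jensen_mm_least_squares(A, b, x0, n_iters=400):
        """Majorize-minimize with a separable Jensen surrogate for
        f(x) = 0.5*||Ax - b||^2: the diagonal d_i = sum_j |A_ji|*||a_j||_1
        satisfies diag(d) >= A^T A, so each surrogate minimizer is a
        diagonally scaled gradient step."""
        absA = np.abs(A)
        d = absA.T @ absA.sum(axis=1)      # the Jensen/De Pierro diagonal
        x = x0.copy()
        for _ in range(n_iters):
            x = x - A.T @ (A @ x - b) / d  # exact minimizer of the surrogate
        return x

    rng = np.random.default_rng(6)
    A, b = rng.standard_normal((50, 30)), rng.standard_normal(50)
    x = jensen_mm_least_squares(A, b, np.zeros(30))
    print("gradient norm at solution:", np.linalg.norm(A.T @ (A @ x - b)))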

Book Advances in Convex Analysis and Global Optimization

Download or read book Advances in Convex Analysis and Global Optimization written by Nicolas Hadjisavvas and published by Springer Science & Business Media. This book was released on 2013-12-01 with total page 601 pages. Available in PDF, EPUB and Kindle. Book excerpt: There has been much recent progress in global optimization algorithms for nonconvex continuous and discrete problems from both a theoretical and a practical perspective. Convex analysis plays a fundamental role in the analysis and development of global optimization algorithms. This is due essentially to the fact that virtually all nonconvex optimization problems can be described using differences of convex functions and differences of convex sets. A conference on Convex Analysis and Global Optimization was held during June 5-9, 2000 at Pythagorion, Samos, Greece. The conference was honoring the memory of C. Caratheodory (1873-1950) and was endorsed by the Mathematical Programming Society (MPS) and by the Society for Industrial and Applied Mathematics (SIAM) Activity Group in Optimization. The conference was sponsored by the European Union (through the EPEAEK program), the Department of Mathematics of the Aegean University and the Center for Applied Optimization of the University of Florida, by the General Secretariat of Research and Technology of Greece, by the Ministry of Education of Greece, and several local Greek government agencies and companies. This volume contains a selective collection of refereed papers based on invited and contributing talks presented at this conference. The two themes of convexity and global optimization pervade this book. The conference provided a forum for researchers working on different aspects of convexity and global optimization to present their recent discoveries, and to interact with people working on complementary aspects of mathematical programming.

Book Convex Analysis and Minimization Algorithms I

Download or read book Convex Analysis and Minimization Algorithms I written by Jean-Baptiste Hiriart-Urruty and published by Springer Science & Business Media. This book was released on 2013-03-09 with total page 432 pages. Available in PDF, EPUB and Kindle. Book excerpt: Convex Analysis may be considered as a refinement of standard calculus, with equalities and approximations replaced by inequalities. As such, it can easily be integrated into a graduate study curriculum. Minimization algorithms, more specifically those adapted to non-differentiable functions, provide an immediate application of convex analysis to various fields related to optimization and operations research. These two topics, which make up the title of the book, reflect the two origins of the authors, who belong respectively to the academic world and to that of applications. Part I can be used as an introductory textbook (as a basis for courses, or for self-study); Part II continues this at a higher technical level and is addressed more to specialists, collecting results that so far have not appeared in books.