EBookClubs

Read Books & Download eBooks Full Online

Book Towards the Understanding of Sample Efficient Reinforcement Learning Algorithms

Download or read book Towards the Understanding of Sample Efficient Reinforcement Learning Algorithms written by Tengyu Xu. This book was released in 2022. Available in PDF, EPUB and Kindle. Book excerpt: Reinforcement learning (RL), which aims at designing a suitable policy for an agent via interacting with an unknown environment, has achieved remarkable success in the recent past. Despite its great potential to solve complex tasks, current RL algorithms suffer from requiring large amounts of interaction data, which can result in significant cost in real-world applications. Thus, the goal of this thesis is to study the sample complexity of fundamental RL algorithms, and then to propose new RL algorithms that solve real-world problems with provable efficiency. To achieve this goal, the thesis makes contributions along the following three main directions: 1. For policy evaluation, we proposed a new on-policy algorithm called variance-reduced TD (VRTD) and established the state-of-the-art sample complexity result for off-policy two-timescale TD learning algorithms. 2. For policy optimization, we established improved sample complexity bounds for on-policy actor-critic (AC) type algorithms and proposed the first doubly robust off-policy AC algorithm with a provable efficiency guarantee. 3. We proposed three new algorithms, GenTD, CRPO, and PARTED, to address the challenging practical problems of general value function evaluation, safe RL, and trajectory-wise reward RL, respectively, with provable efficiency.
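
The policy-evaluation setting this thesis studies starts from temporal-difference learning. As a point of reference, here is a minimal tabular TD(0) sketch; VRTD augments updates like this with variance-reduction terms, but the estimator below is only the textbook baseline, and the Gym-style env and policy interfaces are illustrative assumptions.

```python
import numpy as np

def td0_evaluate(env, policy, num_steps=10_000, alpha=0.05, gamma=0.99):
    """Tabular TD(0) policy evaluation: V(s) <- V(s) + alpha * td_error.

    `env` is assumed to follow the Gym-style reset()/step() interface and
    `policy(s)` returns an action; both are illustrative assumptions.
    """
    V = np.zeros(env.observation_space.n)
    s, _ = env.reset()
    for _ in range(num_steps):
        a = policy(s)
        s_next, r, terminated, truncated, _ = env.step(a)
        target = r + (0.0 if terminated else gamma * V[s_next])
        V[s] += alpha * (target - V[s])  # TD error drives the update
        s = s_next if not (terminated or truncated) else env.reset()[0]
    return V
```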

Book TEXPLORE: Temporal Difference Reinforcement Learning for Robots and Time-Constrained Domains

Download or read book TEXPLORE: Temporal Difference Reinforcement Learning for Robots and Time-Constrained Domains written by Todd Hester and published by Springer. This book was released on 2013-06-22 with a total of 170 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book presents and develops new reinforcement learning methods that enable fast and robust learning on robots in real time. Robots have the potential to solve many problems in society because of their ability to work in dangerous places doing necessary jobs that no one wants or is able to do. One barrier to their widespread deployment is that they are mainly limited to tasks where it is possible to hand-program behaviors for every situation that may be encountered. For robots to meet their potential, they need methods that enable them to learn and adapt to novel situations that they were not programmed for. Reinforcement learning (RL) is a paradigm for learning sequential decision-making processes and could solve the problems of learning and adaptation on robots. This book identifies four key challenges that must be addressed for an RL algorithm to be practical for robotic control tasks. These RL for Robotics Challenges are: 1) it must learn in very few samples; 2) it must learn in domains with continuous state features; 3) it must handle sensor and/or actuator delays; and 4) it should continually select actions in real time. This book focuses on addressing all four of these challenges. In particular, it focuses on time-constrained domains where the first challenge is critically important. In these domains, the agent's lifetime is not long enough for it to explore the domain thoroughly, and it must learn in very few samples.

Book Algorithms for Reinforcement Learning

Download or read book Algorithms for Reinforcement Learning written by Csaba Szepesvari and published by Morgan & Claypool Publishers. This book was released on 2010-08-08 with a total of 103 pages. Available in PDF, EPUB and Kindle. Book excerpt: Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long-term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms' merits and limitations. Reinforcement learning is of great interest because of the large number of practical applications that it can be used to address, ranging from problems in artificial intelligence to operations research and control engineering. In this book, we focus on those algorithms of reinforcement learning that build on the powerful theory of dynamic programming. We give a fairly comprehensive catalog of learning problems, describe the core ideas, and note a large number of state-of-the-art algorithms, followed by a discussion of their theoretical properties and limitations. Table of Contents: Markov Decision Processes / Value Prediction Problems / Control / For Further Exploration
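
Since the book builds on the theory of dynamic programming, a compact value-iteration sketch illustrates the core recursion it catalogs; the tabular arrays P and R are illustrative assumptions.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Value iteration on a tabular MDP.

    P: transition tensor of shape (S, A, S); R: reward matrix of shape (S, A).
    Repeatedly applies the Bellman optimality operator until convergence.
    """
    S, A, _ = P.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * P @ V  # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] * V[s']
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)  # optimal values and a greedy policy
        V = V_new
```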

Book Understanding Machine Learning

Download or read book Understanding Machine Learning written by Shai Shalev-Shwartz and Shai Ben-David and published by Cambridge University Press. This book was released on 2014-05-19 with a total of 415 pages. Available in PDF, EPUB and Kindle. Book excerpt: Introduces machine learning and its algorithmic paradigms, explaining the principles behind automated learning approaches and the considerations underlying their usage.

Book Sample-efficient Control with Directed Exploration in Discounted MDPs Under Linear Function Approximation

Download or read book Sample-efficient Control with Directed Exploration in Discounted MDPs Under Linear Function Approximation written by Raksha Kumar Kumaraswamy. This book was released in 2021. Available in PDF, EPUB and Kindle. Book excerpt: An important goal of online reinforcement learning algorithms is efficient data collection to learn near-optimal behaviour, that is, optimizing the exploration-exploitation trade-off to reduce the sample complexity of learning. To improve the sample complexity of learning, it is essential that the agent directs its exploratory behaviour towards either visiting unvisited parts of the environment or reducing uncertainty it may have with respect to the visited parts. In addition to such directed exploration, sample complexity can be improved by using a representation space that is amenable to online reinforcement learning. This thesis presents several algorithms that focus on these avenues for improving the sample complexity of online reinforcement learning, specifically in the setting of discounted MDPs under linear function approximation. A key challenge in directing effective online exploration is the learning of reliable uncertainty estimates. We address this by deriving high-probability confidence bounds for value uncertainty estimation. We use these derived confidence bounds to design two algorithms that direct effective online exploration; they differ mainly in their approach to directing exploration for visiting unknown regions of the environment. In the first algorithm we propose a heuristic to do so, whereas the second algorithm uses a more principled strategy based on optimistic initialization. The second algorithm is also a planning-compatible algorithm that can be parallelized, scaling sample-efficiency benefits with the compute resources afforded to the algorithm. To improve sample efficiency by utilizing representations that are amenable to online reinforcement learning, the thesis proposes a simple strategy for learning such representations offline. The representation learning algorithm encodes a property we call locality. Locality reduces interference in learning targets used by online reinforcement learning algorithms, consequently improving their sample efficiency. The thesis shows that these learned representations also aid effective online exploration. Overall, this thesis proposes algorithms for improving the sample efficiency of online reinforcement learning, motivates their utility, and evaluates their benefits empirically.
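
The thesis derives its own high-probability confidence bounds; as a generic illustration of acting on value uncertainty, the sketch below uses a simple count-based bonus as a stand-in, not the thesis's estimator.

```python
import numpy as np

def ucb_action(Q, counts, s, beta=1.0):
    """Pick the action maximizing an optimistic value estimate.

    Q: (S, A) value estimates; counts: (S, A) visit counts. The
    1/sqrt(n) bonus is a generic stand-in, not the thesis's bound.
    """
    bonus = beta / np.sqrt(np.maximum(counts[s], 1))
    return int(np.argmax(Q[s] + bonus))  # optimism in the face of uncertainty
```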

Book Reinforcement Learning, Second Edition

Download or read book Reinforcement Learning, Second Edition written by Richard S. Sutton and Andrew G. Barto and published by MIT Press. This book was released on 2018-11-13 with a total of 549 pages. Available in PDF, EPUB and Kindle. Book excerpt: The significantly expanded and updated new edition of a widely used text on reinforcement learning, one of the most active research areas in artificial intelligence. Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives while interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the field's key ideas and algorithms. This second edition has been significantly expanded and updated, presenting new topics and updating coverage of other topics. Like the first edition, this second edition focuses on core online learning algorithms, with the more mathematical material set off in shaded boxes. Part I covers as much of reinforcement learning as possible without going beyond the tabular case for which exact solutions can be found. Many algorithms presented in this part are new to the second edition, including UCB, Expected Sarsa, and Double Learning. Part II extends these ideas to function approximation, with new sections on such topics as artificial neural networks and the Fourier basis, and offers expanded treatment of off-policy learning and policy-gradient methods. Part III has new chapters on reinforcement learning's relationships to psychology and neuroscience, as well as an updated case-studies chapter including AlphaGo and AlphaGo Zero, Atari game playing, and IBM Watson's wagering strategy. The final chapter discusses the future societal impacts of reinforcement learning.
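
Expected Sarsa, one of the algorithms the excerpt notes as new to this edition, replaces the sampled next action in the Sarsa target with an expectation under the current policy. A minimal tabular sketch, assuming an epsilon-greedy behavior policy:

```python
import numpy as np

def expected_sarsa_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99, eps=0.1):
    """One Expected Sarsa update with an epsilon-greedy behavior policy."""
    n_actions = Q.shape[1]
    probs = np.full(n_actions, eps / n_actions)
    probs[np.argmax(Q[s_next])] += 1.0 - eps       # epsilon-greedy probabilities
    target = r + gamma * np.dot(probs, Q[s_next])  # expectation over next actions
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
```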

Book Efficient Reinforcement Learning Through Uncertainties

Download or read book Efficient Reinforcement Learning Through Uncertainties written by Dongruo Zhou. This book was released in 2023. Available in PDF, EPUB and Kindle. Book excerpt: This dissertation is centered on the concept of uncertainty-aware reinforcement learning (RL), which seeks to enhance the efficiency of RL by incorporating uncertainty. RL is a vital mathematical framework in the field of artificial intelligence (AI) for creating autonomous agents that can learn optimal behaviors through interaction with their environments. However, RL is often criticized for being sample inefficient and computationally demanding. To tackle these challenges, the primary goals of this dissertation are twofold: to offer a theoretical understanding of uncertainty-aware RL and to develop practical algorithms that utilize uncertainty to enhance the efficiency of RL. Our first objective is to develop an RL approach that is efficient in terms of sample usage for Markov decision processes (MDPs) with large state and action spaces. We present an uncertainty-aware RL algorithm that incorporates function approximation, and we provide theoretical proof that this algorithm achieves near minimax optimal statistical complexity when learning the optimal policy. In our second objective, we address two specific scenarios: the batch learning setting and the rare policy switch setting. For both settings, we propose uncertainty-aware RL algorithms with limited adaptivity. These algorithms significantly reduce the number of policy switches compared to previous baseline algorithms while maintaining a similar level of statistical complexity. Lastly, we focus on estimating uncertainties in neural-network-based estimation models. We introduce a gradient-based method that computes these uncertainties efficiently, and the resulting uncertainty estimates are both valid and reliable. The methods and techniques presented in this dissertation contribute to the advancement of our understanding of the fundamental limits of RL. These research findings pave the way for further exploration and development in the field of decision-making algorithm design.
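
A standard way uncertainty enters value-based RL with linear function approximation is through an elliptical confidence bonus; the sketch below shows that generic pattern, not the dissertation's specific algorithm, and the feature vector phi is an illustrative assumption.

```python
import numpy as np

def elliptical_bonus(Sigma_inv, phi, beta=1.0):
    """Uncertainty bonus beta * sqrt(phi^T Sigma^{-1} phi) for feature phi.

    Sigma_inv is the inverse of the regularized Gram matrix of observed
    features; larger bonuses flag directions with little data.
    """
    return beta * np.sqrt(phi @ Sigma_inv @ phi)

def update_gram_inverse(Sigma_inv, phi):
    """Sherman-Morrison rank-one update after observing feature phi."""
    v = Sigma_inv @ phi
    return Sigma_inv - np.outer(v, v) / (1.0 + phi @ v)
```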

Book Reinforcement Learning

Download or read book Reinforcement Learning written by Marco Wiering and published by Springer Science & Business Media. This book was released on 2012-03-05 with a total of 653 pages. Available in PDF, EPUB and Kindle. Book excerpt: Reinforcement learning encompasses both a science of adaptive behavior of rational beings in uncertain environments and a computational methodology for finding optimal behaviors for challenging problems in control, optimization and adaptive behavior of intelligent agents. As a field, reinforcement learning has progressed tremendously in the past decade. The main goal of this book is to present an up-to-date series of survey articles on the main contemporary sub-fields of reinforcement learning. This includes surveys on partially observable environments, hierarchical task decompositions, relational knowledge representation and predictive state representations. Furthermore, topics such as transfer, evolutionary methods and continuous spaces in reinforcement learning are surveyed. In addition, several chapters review reinforcement learning methods in robotics, in games, and in computational neuroscience. In total, seventeen different subfields are presented, mostly by young experts in those areas, and together they truly represent the state of the art of current reinforcement learning research. Marco Wiering works in the artificial intelligence department of the University of Groningen in the Netherlands. He has published extensively on various reinforcement learning topics. Martijn van Otterlo works in the cognitive artificial intelligence group at the Radboud University Nijmegen in the Netherlands. He has mainly focused on expressive knowledge representation in reinforcement learning settings.

Book Reinforcement Learning Algorithms with Python

Download or read book Reinforcement Learning Algorithms with Python written by Andrea Lonza and published by Packt Publishing Ltd. This book was released on 2019-10-18 with a total of 356 pages. Available in PDF, EPUB and Kindle. Book excerpt: Develop self-learning algorithms and agents using TensorFlow and other Python tools, frameworks, and libraries.

Key Features:
- Learn, develop, and deploy advanced reinforcement learning algorithms to solve a variety of tasks
- Understand and develop model-free and model-based algorithms for building self-learning agents
- Work with advanced reinforcement learning concepts and algorithms such as imitation learning and evolution strategies

Book Description: Reinforcement Learning (RL) is a popular and promising branch of AI that involves making smarter models and agents that can automatically determine ideal behavior based on changing requirements. This book will help you master RL algorithms and understand their implementation as you build self-learning agents. Starting with an introduction to the tools, libraries, and setup needed to work in the RL environment, this book covers the building blocks of RL and delves into value-based methods, such as the application of Q-learning and SARSA algorithms. You'll learn how to use a combination of Q-learning and neural networks to solve complex problems. Furthermore, you'll study policy gradient methods such as TRPO and PPO, to improve performance and stability, before moving on to the DDPG and TD3 deterministic algorithms. This book also covers how imitation learning techniques work and how DAgger can teach an agent to drive. You'll discover evolutionary strategies and black-box optimization techniques, and see how they can improve RL algorithms. Finally, you'll get to grips with exploration approaches such as UCB and UCB1, and develop a meta-algorithm called ESBAS. By the end of the book, you'll have worked with key RL algorithms to overcome challenges in real-world applications, and be part of the RL research community.

What you will learn:
- Develop an agent to play CartPole using the OpenAI Gym interface
- Discover the model-based reinforcement learning paradigm
- Solve the Frozen Lake problem with dynamic programming
- Explore Q-learning and SARSA with a view to playing a taxi game
- Apply Deep Q-Networks (DQNs) to Atari games using Gym
- Study policy gradient algorithms, including Actor-Critic and REINFORCE
- Understand and apply PPO and TRPO in continuous locomotion environments
- Get to grips with evolution strategies for solving the lunar lander problem

Who this book is for: If you are an AI researcher, deep learning user, or anyone who wants to learn reinforcement learning from scratch, this book is for you. You'll also find this reinforcement learning book useful if you want to learn about the advancements in the field. Working knowledge of Python is necessary.
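
As a flavor of the hands-on style described, here is a minimal CartPole interaction loop; it uses the Gymnasium package (the maintained successor to OpenAI Gym) and a random policy as a placeholder for a learned agent, and is a skeleton rather than code from the book.

```python
import gymnasium as gym  # maintained successor to OpenAI Gym; API assumed

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
total_reward = 0.0
for _ in range(500):
    action = env.action_space.sample()  # replace with a learned policy
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        break
env.close()
print(f"episode return: {total_reward}")
```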

Book Algorithms for Reinforcement Learning

Download or read book Algorithms for Reinforcement Learning written by Csaba Szepesvari and published by Springer Nature. This book was released on 2022-05-31 with a total of 89 pages. Available in PDF, EPUB and Kindle. Book excerpt: Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long-term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms' merits and limitations. Reinforcement learning is of great interest because of the large number of practical applications that it can be used to address, ranging from problems in artificial intelligence to operations research and control engineering. In this book, we focus on those algorithms of reinforcement learning that build on the powerful theory of dynamic programming. We give a fairly comprehensive catalog of learning problems, describe the core ideas, and note a large number of state-of-the-art algorithms, followed by a discussion of their theoretical properties and limitations. Table of Contents: Markov Decision Processes / Value Prediction Problems / Control / For Further Exploration

Book Model-Based Reinforcement Learning

Download or read book Model-Based Reinforcement Learning written by Milad Farsi and published by John Wiley & Sons. This book was released on 2022-12-02 with a total of 276 pages. Available in PDF, EPUB and Kindle. Book excerpt: Model-Based Reinforcement Learning: Explore a comprehensive and practical approach to reinforcement learning. Reinforcement learning is an essential paradigm of machine learning, wherein an intelligent agent performs actions that ensure optimal behavior from devices. While this paradigm of machine learning has gained tremendous success and popularity in recent years, previous scholarship has focused either on theory (optimal control and dynamic programming) or on algorithms (most of which are simulation-based). Model-Based Reinforcement Learning provides a model-based framework to bridge these two aspects, thereby creating a holistic treatment of the topic of model-based online learning control. In doing so, the authors seek to develop a model-based framework for data-driven control that bridges the topics of system identification from data, model-based reinforcement learning, and optimal control, as well as the applications of each. This new technique for assessing classical results will allow for a more efficient reinforcement learning system. At its heart, this book is focused on providing an end-to-end framework, from design to application, of a more tractable model-based reinforcement learning technique.

Readers of Model-Based Reinforcement Learning will also find:
- A useful textbook for graduate courses on data-driven and learning-based control that emphasizes modeling and control of dynamical systems from data
- Detailed comparisons of the impact of different techniques, such as the basic linear quadratic controller, learning-based model predictive control, model-free reinforcement learning, and structured online learning
- Applications and case studies, one on ground vehicles with nonholonomic dynamics and another on quadrotor helicopters
- An online, Python-based toolbox that accompanies the contents covered in the book, as well as the necessary code and data

Model-Based Reinforcement Learning is a useful reference for senior undergraduate students, graduate students, research assistants, professors, process control engineers, and roboticists.
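
One baseline the book compares, the basic linear quadratic controller, can be sketched compactly: solve the discrete-time algebraic Riccati equation for a known linear model and act with the resulting state feedback. A minimal sketch assuming SciPy is available; the double-integrator example is illustrative.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    """Infinite-horizon discrete-time LQR: u = -K x minimizes sum x'Qx + u'Ru."""
    P = solve_discrete_are(A, B, Q, R)                 # Riccati solution
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal feedback gain
    return K

# Example: double integrator with time step 0.1
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = lqr_gain(A, B, Q=np.eye(2), R=np.eye(1))
x = np.array([1.0, 0.0])
u = -K @ x  # control action driving the state toward the origin
```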

Book Efficient Reinforcement Learning with Value Function Generalization

Download or read book Efficient Reinforcement Learning with Value Function Generalization written by Zheng Wen. This book was released in 2014. Available in PDF, EPUB and Kindle. Book excerpt: Reinforcement learning (RL) is concerned with how an agent should learn to make decisions over time while interacting with an environment. A growing body of work has produced RL algorithms with sample and computational efficiency guarantees. However, most of this work focuses on "tabula rasa" learning; i.e., algorithms aim to learn with little or no prior knowledge about the environment. Such algorithms exhibit sample complexities that grow at least linearly in the number of states, and they are of limited practical import since state spaces in most relevant contexts are enormous. There is a need for algorithms that generalize in order to learn how to make effective decisions at states beyond the scope of past experience. This dissertation focuses on the open issue of developing efficient RL algorithms that leverage value function generalization (VFG). It consists of two parts. In the first part, we present sample complexity results for two classes of RL problems: deterministic systems with general forms of VFG and Markov decision processes (MDPs) with a finite hypothesis class. The results provide upper bounds that are independent of state and action space cardinalities and polynomial in other problem parameters. In the second part, building on insights from our sample complexity analyses, we propose randomized least-squares value iteration (RLSVI), an RL algorithm for MDPs with VFG via linear hypothesis classes. The algorithm is based on a new notion of randomized value function exploration. Through computational studies, we compare the performance of RLSVI against least-squares value iteration (LSVI) with Boltzmann exploration or epsilon-greedy exploration, which are widely used in RL with VFG. Results demonstrate that RLSVI is orders of magnitude more efficient.
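
The defining step of RLSVI is that exploration comes from randomly sampled value functions rather than Boltzmann or epsilon-greedy dithering. Below is a one-step sketch of that idea as Gaussian posterior sampling over linear weights; the dimensions, noise scale, and prior are illustrative assumptions rather than the dissertation's exact formulation.

```python
import numpy as np

def rlsvi_weights(Phi, targets, sigma=1.0, lam=1.0, rng=np.random):
    """Sample value-function weights from a Bayesian-linear-regression-style
    posterior: a ridge fit to the targets plus calibrated Gaussian noise.

    Phi: (n, d) feature matrix; targets: (n,) regression targets.
    Acting greedily w.r.t. the sampled weights yields randomized exploration.
    """
    n, d = Phi.shape
    A = Phi.T @ Phi / sigma**2 + lam * np.eye(d)
    mean = np.linalg.solve(A, Phi.T @ targets / sigma**2)
    cov = np.linalg.inv(A)
    return rng.multivariate_normal(mean, cov)  # one sampled value function
```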

Book Data-Efficient Reinforcement Learning

Download or read book Data-Efficient Reinforcement Learning written by Zhi Xu (Ph. D.). This book was released in 2021 with a total of 176 pages. Available in PDF, EPUB and Kindle. Book excerpt: Reinforcement learning (RL) has recently emerged as a generic yet powerful solution for learning complex decision-making policies, providing the key foundational underpinnings of recent successes in various domains, such as game playing and robotics. However, many state-of-the-art algorithms are data-hungry and computationally expensive, requiring large amounts of data to succeed. While this is feasible in certain scenarios, in applications arising in the social sciences and healthcare, for example, available data is sparse and collecting more can be costly or infeasible. With the surging interest in applying RL to broader domains, it is imperative to develop an informed view of the usage of data involved in its algorithmic design.

Book Stable Deep Reinforcement Learning

Download or read book Stable Deep Reinforcement Learning written by Jan Wülfing. This book was released in 2020. Available in PDF, EPUB and Kindle. Book excerpt: Abstract: Reinforcement Learning is no new discipline in the realm of machine learning, but it has seen a surge in popularity and interest from researchers in recent years. Driven by the impact of Deep Learning and impressive success stories such as learning to play Atari at human level or solving the game of Go, one family of Reinforcement Learning methods is at the forefront of this trend: Deep Reinforcement Learning. This term usually refers to the combination of two powerful machine learning methods, namely Q-learning and (possibly deep) artificial neural networks, resulting in the popular DQN and NFQ algorithms. Without wanting to belittle the power of this combination, for practitioners there are still many open questions and problems when applying these methods to a learning task. In this thesis we focus mainly on two properties of Deep Reinforcement Learning that are especially important when dealing with real-world applications, namely the stability and sample efficiency of Deep Reinforcement Learning training procedures. First, we show by example that Deep Reinforcement Learning can suffer from unstable learning dynamics, and we propose an algorithm that improves stability as well as sample efficiency on several benchmarks. Second, we introduce a novel application of Reinforcement Learning to a biological system, namely Biological Neural Networks, and we show that it is possible to learn to control certain activity features of these networks. This application underlines the importance of having stable and sample-efficient reinforcement learning procedures.
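
A standard stabilizer for DQN-style training, separate from the thesis's own proposal, is a slowly updated target network that decouples the bootstrap target from the online weights. A minimal sketch, assuming PyTorch:

```python
import torch

@torch.no_grad()
def dqn_targets(target_net, rewards, next_states, dones, gamma=0.99):
    """Bootstrapped Q-learning targets computed with a frozen target network."""
    next_q = target_net(next_states).max(dim=1).values
    return rewards + gamma * (1.0 - dones.float()) * next_q

def soft_update(target_net, online_net, tau=0.005):
    """Polyak-average the online weights into the target network."""
    for tp, op in zip(target_net.parameters(), online_net.parameters()):
        tp.data.mul_(1.0 - tau).add_(tau * op.data)
```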

Book Master Reinforcement Learning

Download or read book Master Reinforcement Learning written by Evan Walters and published by Independently Published. This book was released on 2024-03-19. Available in PDF, EPUB and Kindle. Book excerpt: This book is a comprehensive guide to reinforcement learning (RL), covering both the theoretical foundations and practical applications. It starts by introducing the core concepts of RL, including Markov decision processes, policy gradients, and value function learning. Then, it dives deeper into various RL algorithms, such as Q-learning, policy gradients (REINFORCE, PPO), and Actor-Critic methods (A2C, DDPG). A significant focus is placed on the challenges of deploying RL agents in the real world, including the reality gap, safety considerations, and evaluation metrics. The book also explores ethical considerations surrounding RL, stressing the importance of fairness, transparency, and responsible development.

Here are the key highlights of the book:
- Clear explanations: Complex concepts are presented in a clear and understandable manner, making the book accessible to readers with a basic understanding of machine learning.
- Balance between theory and practice: The book provides a solid theoretical foundation while also offering practical guidance for implementing RL algorithms and deploying RL agents.
- Coverage of advanced topics: It explores recent advancements in RL, such as sample-efficient RL, lifelong learning for RL agents, and the intersection of RL with other AI fields like robotics and natural language processing.
- Emphasis on responsible development: The book highlights the ethical considerations of RL and emphasizes the importance of developing and deploying RL agents in a safe and responsible manner.

Overall, this book is an excellent resource for anyone interested in learning about reinforcement learning, from beginners to experienced practitioners. It provides a roadmap for continuous learning and innovation in this rapidly evolving field.
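
The REINFORCE policy gradient the book covers fits in a few lines: weight each action's log-probability by the return that followed it. A minimal sketch of the episode loss, assuming PyTorch; subtracting a learned baseline, as Actor-Critic methods do, would further reduce variance.

```python
import torch

def reinforce_loss(log_probs, returns):
    """Score-function estimator: minimize -sum_t log pi(a_t|s_t) * G_t.

    log_probs: tensor of log pi(a_t|s_t) along one episode;
    returns: discounted returns G_t for the same steps.
    """
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # normalize
    return -(log_probs * returns).sum()
```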

Book Efficient Reinforcement Learning via Singular Value Decomposition: End-to-End Model-Based Methods and Reward Shaping

Download or read book Efficient Reinforcement Learning via Singular Value Decomposition: End-to-End Model-Based Methods and Reward Shaping written by Clement Gehring. This book was released in 2022. Available in PDF, EPUB and Kindle. Book excerpt: Reinforcement learning (RL) provides a general framework for data-driven decision making. However, the very same generality that makes this approach applicable to a wide range of problems is also responsible for its well-known inefficiencies. In this thesis, we consider different properties, shared by interesting classes of decision-making problems, that can be leveraged to design learning algorithms that are both computationally and data efficient. Specifically, this work examines the low-rank structure found in various aspects of decision-making problems and the sparsity of effects of classical deterministic planning, as well as the properties on which end-to-end model-based methods depend to perform well. We start by showing how low-rank structure in the successor representation enables the design of an efficient online learning algorithm. Similarly, we show how this same structure can be found in the Bellman operator, which we use to formulate an efficient variant of the least-squares temporal difference learning algorithm. We further explore low-rank structure in state features to learn efficient transition models which allow for efficient planning entirely in a low-dimensional space. We then take a closer look at end-to-end model-based methods to better understand their properties, examining this type of approach through the lens of constrained optimization and implicit differentiation. Through the implicit perspective, we derive properties of these methods that allow us to identify conditions under which they perform well. We conclude this thesis by exploring how the sparsity of effects of classical planning problems can be used to define general domain-independent heuristics, which can greatly accelerate learning of domain-dependent heuristics through the use of potential-based reward shaping and lifted function approximation.
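
Potential-based reward shaping, which the excerpt invokes for accelerating learning, adds gamma * Phi(s') - Phi(s) to each reward and, per Ng, Harada, and Russell (1999), leaves optimal policies unchanged. A minimal sketch; the potential function is an illustrative assumption.

```python
def shaped_reward(r, s, s_next, potential, gamma=0.99):
    """Potential-based shaping: r' = r + gamma * Phi(s') - Phi(s).

    `potential` maps states to scalars (e.g., a domain-independent
    heuristic); shaping of this form leaves optimal policies unchanged.
    """
    return r + gamma * potential(s_next) - potential(s)
```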

Book Sample Efficient Nonconvex Optimization Algorithms in Machine Learning and Reinforcement Learning

Download or read book Sample Efficient Nonconvex Optimization Algorithms in Machine Learning and Reinforcement Learning written by Pan Xu. This book was released in 2021 with a total of 246 pages. Available in PDF, EPUB and Kindle. Book excerpt: Machine learning and reinforcement learning have achieved tremendous success in solving problems in various real-world applications. Many modern learning problems boil down to a nonconvex optimization problem, where the objective function is the average or the expectation of some loss function over a finite or infinite dataset. Solving such nonconvex optimization problems, in general, can be NP-hard. Thus one often tackles such a problem through incremental steps based on the nature and the goal of the problem: finding a first-order stationary point, finding a second-order stationary point (or a local optimum), and finding a global optimum. With the size and complexity of machine learning datasets rapidly increasing, it has become a fundamental challenge to design efficient and scalable machine learning algorithms that improve performance in terms of accuracy while saving computational cost in terms of sample efficiency. Though many algorithms based on stochastic gradient descent have been developed and widely studied, both theoretically and empirically, for nonconvex optimization, it has remained an open problem whether we can achieve the optimal sample complexity for finding a first-order stationary point and for finding local optima in nonconvex optimization. In this thesis, we start with the stochastic nested variance-reduced gradient (SNVRG) algorithm, which is developed based on stochastic gradient descent methods and variance reduction techniques. We prove that SNVRG achieves the near-optimal convergence rate among its type for finding a first-order stationary point of a nonconvex function. We further build algorithms to efficiently find the local optimum of a nonconvex objective function by examining the curvature information at the stationary point found by SNVRG. With the ultimate goal of finding the global optimum in nonconvex optimization, we then provide a unified framework to analyze the global convergence of stochastic gradient Langevin dynamics-based algorithms for a nonconvex objective function. In the second part of this thesis, we generalize the aforementioned sample-efficient stochastic nonconvex optimization methods to reinforcement learning problems, including policy gradient, actor-critic, and Q-learning. For these problems, we propose novel algorithms and prove that they enjoy state-of-the-art theoretical guarantees on the sample complexity. The works presented in this thesis form an incomplete collection of the recent advances and developments in sample-efficient nonconvex optimization algorithms for both machine learning and reinforcement learning.
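
SNVRG itself is the thesis's contribution; the sketch below shows the classical SVRG-style estimator such methods build on, where a minibatch gradient is corrected by a control variate anchored at a snapshot point. The grad_fn interface is an illustrative assumption.

```python
def svrg_gradient(grad_fn, w, w_snapshot, full_grad_snapshot, batch_idx):
    """SVRG estimator: g = grad_B(w) - grad_B(w_snap) + full_grad(w_snap).

    grad_fn(w, idx) returns the minibatch gradient on examples `idx`.
    The correction keeps the estimate unbiased while shrinking its
    variance as w approaches the snapshot point.
    """
    return (grad_fn(w, batch_idx)
            - grad_fn(w_snapshot, batch_idx)
            + full_grad_snapshot)
```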