EBookClubs

Read Books & Download eBooks Full Online

Book On the Uncertainty Estimation of Neural Networks

Download or read book On the Uncertainty Estimation of Neural Networks written by Yukun Ding and published by . This book was released on 2021 with total page 135 pages. Available in PDF, EPUB and Kindle. Book excerpt:

Book Bayesian Learning for Neural Networks

Download or read book Bayesian Learning for Neural Networks written by Radford M. Neal and published by Springer Science & Business Media. This book was released on 2012-12-06 with total page 194 pages. Available in PDF, EPUB and Kindle. Book excerpt: Artificial "neural networks" are widely used as flexible models for classification and regression applications, but questions remain about how the power of these models can be safely exploited when training data is limited. This book demonstrates how Bayesian methods allow complex neural network models to be used without fear of the "overfitting" that can occur with traditional training methods. Insight into the nature of these complex Bayesian models is provided by a theoretical investigation of the priors over functions that underlie them. A practical implementation of Bayesian neural network learning using Markov chain Monte Carlo methods is also described, and software for it is freely available over the Internet. Presupposing only basic knowledge of probability and statistics, this book should be of interest to researchers in statistics, engineering, and artificial intelligence.
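The Markov chain Monte Carlo approach Neal describes can be sketched in miniature: place a Gaussian prior over the weights of a small network, draw posterior weight samples with a Markov chain, and average the sampled networks' predictions. The sketch below is a hypothetical illustration using a simple random-walk Metropolis step on a one-hidden-layer network and toy 1-D data; Neal's actual implementation uses the more efficient hybrid (Hamiltonian) Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data
x = np.linspace(-1, 1, 20)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(20)

H = 8  # hidden units; parameters are (w1, b1, w2, b2) flattened

def unpack(w):
    w1, b1, w2, rest = np.split(w, [H, 2 * H, 3 * H])
    return w1, b1, w2, rest[0]

def predict(w, x):
    w1, b1, w2, b2 = unpack(w)
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def log_post(w, noise=0.1, prior=1.0):
    # Gaussian likelihood plus an independent Gaussian prior on each weight
    resid = y - predict(w, x)
    return (-0.5 * np.sum(resid ** 2) / noise ** 2
            - 0.5 * np.sum(w ** 2) / prior ** 2)

# Random-walk Metropolis over the flattened weight vector
w = 0.1 * rng.standard_normal(3 * H + 1)
lp = log_post(w)
samples = []
for step in range(5000):
    w_new = w + 0.05 * rng.standard_normal(w.size)
    lp_new = log_post(w_new)
    if np.log(rng.uniform()) < lp_new - lp:   # Metropolis accept/reject
        w, lp = w_new, lp_new
    if step > 1000 and step % 10 == 0:        # thin after burn-in
        samples.append(w.copy())

# Posterior predictive: average over the sampled networks
preds = np.stack([predict(s, x) for s in samples])
mean, std = preds.mean(axis=0), preds.std(axis=0)
```

Averaging over weight samples, rather than committing to one weight vector, is what gives the overfitting resistance the book argues for: the spread of `preds` reflects genuine posterior uncertainty about the function.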

Book Uncertainty Estimation and Its Applications in Computer Vision

Download or read book Uncertainty Estimation and Its Applications in Computer Vision written by Özgün Çiçek and published by . This book was released on 2021 with total page pages. Available in PDF, EPUB and Kindle. Book excerpt: Abstract: Deep learning has become the common practice for most computer vision tasks due to being state-of-the-art in both accuracy and runtime. It has not only revolutionized machine learning and computer vision, but also moved us a big step closer to artificial intelligence. Due to its success, besides academia, it has already been part of many industrial and clinical solutions; from autonomous driving to Instagram's cyberbullying detection, from Twitter's tweet curation to Heineken's data-driven marketing, from music generation to augmenting/replacing radiologists to detect cancer in Computed Tomography scans. While deployment of deep learning approaches is straightforward as better ones are being developed, for safety-critical applications a big challenge remains: estimating uncertainty. In autonomous systems like self-driving cars, it is of great importance that the system knows when it does not know, e.g. when heavy rain obscures the vision, when the car was trained for highways but is now at the Arc de Triomphe in Paris, or when a koala runs into the street due to a bush fire. Uncertainty estimation not only enables us to quantify the reliability of a decision coming from a system, but, when modeled fully, also enables us to solve nondeterministic tasks with more than one possible outcome, such as future prediction: an important aspect of human intelligence. Once we have a reasonable estimate of the uncertainty of a subsystem, another challenge is to make good use of this new data modality by propagating it properly through chains of subsystems to improve the result of the whole system. This thesis starts with a general presentation of the value of deep learning in medical image segmentation.
Then, it continues by equipping modern convolutional neural networks with uncertainty estimation in showcases of optical flow and future localization. Finally, it uses the uncertainty estimation to improve network predictions in tracking cell nuclei tagged with a dynamic protein, and in future captioning.

Book Neural Networks for Conditional Probability Estimation

Download or read book Neural Networks for Conditional Probability Estimation written by Dirk Husmeier and published by Springer Science & Business Media. This book was released on 2012-12-06 with total page 280 pages. Available in PDF, EPUB and Kindle. Book excerpt: Conventional applications of neural networks usually predict a single value as a function of given inputs. In forecasting, for example, a standard objective is to predict the future value of some entity of interest on the basis of a time series of past measurements or observations. Typical training schemes aim to minimise the sum of squared deviations between predicted and actual values (the 'targets'), by which, ideally, the network learns the conditional mean of the target given the input. If the underlying conditional distribution is Gaussian or at least unimodal, this may be a satisfactory approach. However, for a multimodal distribution, the conditional mean does not capture the relevant features of the system, and the prediction performance will, in general, be very poor. This calls for a more powerful and sophisticated model, which can learn the whole conditional probability distribution. Chapter 1 demonstrates that even for a deterministic system and 'benign' Gaussian observational noise, the conditional distribution of a future observation, conditional on a set of past observations, can become strongly skewed and multimodal. In Chapter 2, a general neural network structure for modelling conditional probability densities is derived, and it is shown that a universal approximator for this extended task requires at least two hidden layers. A training scheme is developed from a maximum likelihood approach in Chapter 3, and the performance of this method is demonstrated on three stochastic time series in Chapters 4 and 5.
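The failure mode described above is easy to demonstrate numerically: for a bimodal conditional distribution, the conditional mean lands between the modes, a value the target almost never takes. A hypothetical sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Bimodal conditional distribution: y sits near +1 or -1 with equal probability
n = 10_000
sign = rng.choice([-1.0, 1.0], size=n)
y = sign + 0.1 * rng.standard_normal(n)

cond_mean = y.mean()   # near 0: a value y almost never takes

# Squared error of predicting the conditional mean vs. the nearest mode
err_mean = np.mean((y - cond_mean) ** 2)                    # roughly 1.01
err_mode = np.mean(np.minimum((y - 1) ** 2, (y + 1) ** 2))  # roughly 0.01
```

The mean-squared-error of the "optimal" mean prediction is about a hundred times worse than simply knowing the two modes, which is why a model of the full conditional density is needed here.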

Book Uncertainties in Neural Networks

    Book Details:
  • Author : Magnus Malmström
  • Publisher : Linköping University Electronic Press
  • Release : 2021-04-06
  • ISBN : 9179296807
  • Pages : 103 pages

Download or read book Uncertainties in Neural Networks written by Magnus Malmström and published by Linköping University Electronic Press. This book was released on 2021-04-06 with total page 103 pages. Available in PDF, EPUB and Kindle. Book excerpt: In science, technology, and engineering, creating models of the environment to predict future events has always been a key component. The models could be everything from how the friction of a tire depends on the wheel's slip to how a pathogen is spread throughout society. As more data becomes available, the use of data-driven black-box models becomes more attractive. In many areas they have shown promising results, but for them to be widely used in safety-critical applications such as autonomous driving, some notion of uncertainty in the prediction is required. An example of such a black-box model is neural networks (NNs). This thesis aims to increase the usefulness of NNs by presenting a method where uncertainty in the prediction is obtained by linearization of the model. In system identification and sensor fusion, under the condition that the model structure is identifiable, this is a commonly used approach to get uncertainty in the prediction from a nonlinear model. If the model structure is not identifiable, such as for NNs, the ambiguities that cause this have to be taken care of in order to make the approach applicable. This is handled in the first part of the thesis, where NNs are analyzed from a system identification perspective and sources of uncertainty are discussed. Another problem with data-driven black-box models is that it is difficult to know how flexible the model needs to be in order to correctly model the true system. One solution to this problem is to use a model that is more flexible than necessary, to make sure that the model is flexible enough. But how would that extra flexibility affect the uncertainty in the prediction?
This is handled in the later part of the thesis, where it is shown that the uncertainty in the prediction is bounded from below by the uncertainty in the prediction of the least flexible model that can still represent the true system accurately. In the literature, many other approaches to handling the uncertainty in predictions by NNs have been suggested, some of which are summarized in this work. Furthermore, a simulation study and an experimental study inspired by autonomous driving are conducted. In the simulation study, different sources of uncertainty are investigated, as well as how large the uncertainty in the predictions by NNs is in areas without training data. In the experimental study, the uncertainty in predictions made by different models is investigated. The results show that, compared to existing methods, the linearization method produces similar results for the uncertainty in predictions by NNs. An introduction video is available at https://youtu.be/O4ZcUTGXFN0. In research and development, it has always been central to create models of reality. These models have been used, among other things, to predict future events or to control a system so that it behaves as desired. The models can describe anything from how the friction of a car tire depends on how much the wheels slip to how a virus can spread through a society. As more and more data becomes available, the potential of data-driven black-box models grows. These models are universal approximators, meant to be able to represent any arbitrary function. They have been used with great success in many areas, but to truly establish themselves in safety-critical areas such as autonomous vehicles, an understanding of the uncertainty in the model's predictions is needed. Neural networks are an example of such a black-box model.
This thesis investigates different ways of obtaining knowledge about the uncertainty in the predictions of neural networks. A method is presented that obtains the uncertainty in the prediction of the neural network by linearization of the model. This method is well established in system identification and sensor fusion, under the assumption that the model is identifiable. For models such as neural networks, which are not identifiable, the ambiguities in the model must be accounted for. Another challenge with data-driven black-box models is knowing whether the chosen model set is general enough to model the true system. One solution to this problem is to use a model with more flexibility than needed, that is, an over-parameterized model. But how does this affect the uncertainty in the prediction? This is investigated in this thesis, which shows that the uncertainty of the over-parameterized model is bounded from below by that of the least flexible model that is still general enough to model the true system. Finally, these results are demonstrated in both a simulation study and an experimental study inspired by autonomous vehicles. The focus of the simulation study is on the model's uncertainty in areas with and without access to training data, while the experimental study focuses on comparing the uncertainty of different types of models. The results of these studies show that the linearization-based method gives results similar to existing methods for the estimate of the uncertainty in the predictions of neural networks.
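The linearization idea the thesis builds on is the classical delta method from system identification: estimate a parameter covariance P from the residuals, then approximate the predictive variance as J P Jᵀ, where J is the Jacobian of the model output with respect to the parameters. A minimal sketch on a hypothetical two-parameter nonlinear model (the thesis applies this to NNs only after resolving their non-identifiability):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy nonlinear model f(x; a, b) = a * tanh(b * x), fitted by least squares
x = np.linspace(-2, 2, 40)
y = 1.5 * np.tanh(0.8 * x) + 0.05 * rng.standard_normal(x.size)

def f(theta, x):
    a, b = theta
    return a * np.tanh(b * x)

def jacobian(theta, x):
    a, b = theta
    t = np.tanh(b * x)
    return np.column_stack([t, a * x * (1 - t ** 2)])  # [df/da, df/db]

# Gauss-Newton fit of (a, b)
theta = np.array([1.0, 1.0])
for _ in range(50):
    J = jacobian(theta, x)
    r = y - f(theta, x)
    theta = theta + np.linalg.solve(J.T @ J, J.T @ r)

# Parameter covariance estimate P = sigma^2 (J^T J)^{-1}
r = y - f(theta, x)
sigma2 = r @ r / (x.size - theta.size)
J = jacobian(theta, x)
P = sigma2 * np.linalg.inv(J.T @ J)

# Linearized predictive variance at new inputs: var(x*) = J(x*) P J(x*)^T
x_test = np.array([0.0, 1.0, 3.0])
Jt = jacobian(theta, x_test)
pred_std = np.sqrt(np.einsum('ij,jk,ik->i', Jt, P, Jt))
```

For an over-parameterized model, (JᵀJ) becomes singular, which is the identifiability problem the first part of the thesis addresses before the same formula can be applied to NNs.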

Book Medical Image Computing and Computer Assisted Intervention – MICCAI 2019

Download or read book Medical Image Computing and Computer Assisted Intervention MICCAI 2019 written by Dinggang Shen and published by Springer Nature. This book was released on 2019-10-10 with total page 809 pages. Available in PDF, EPUB and Kindle. Book excerpt: The six-volume set LNCS 11764, 11765, 11766, 11767, 11768, and 11769 constitutes the refereed proceedings of the 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2019, held in Shenzhen, China, in October 2019. The 539 revised full papers presented were carefully reviewed and selected from 1730 submissions in a double-blind review process. The papers are organized in the following topical sections: Part I: optical imaging; endoscopy; microscopy. Part II: image segmentation; image registration; cardiovascular imaging; growth, development, atrophy and progression. Part III: neuroimage reconstruction and synthesis; neuroimage segmentation; diffusion weighted magnetic resonance imaging; functional neuroimaging (fMRI); miscellaneous neuroimaging. Part IV: shape; prediction; detection and localization; machine learning; computer-aided diagnosis; image reconstruction and synthesis. Part V: computer assisted interventions; MIC meets CAI. Part VI: computed tomography; X-ray imaging.

Book Uncertainty Estimation in Continuous Models Applied to Reinforcement Learning

Download or read book Uncertainty Estimation in Continuous Models Applied to Reinforcement Learning written by Ibrahim Akbar and published by . This book was released on 2019 with total page 86 pages. Available in PDF, EPUB and Kindle. Book excerpt: We consider the model-based reinforcement learning framework, where we are interested in learning a model and control policy for a given objective. We consider modeling the dynamics of an environment using Gaussian Processes or a Bayesian neural network. For Bayesian neural networks we must define how to estimate uncertainty through a neural network and propagate distributions in time. Once we have a continuous model, we can apply standard optimal control techniques to learn a policy. We consider the policy to be a radial basis policy and compare its performance given the different models on a pendulum environment.
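The Gaussian Process side of this comparison is the easiest to sketch: a GP dynamics model returns a predictive variance in closed form, and that variance grows far from the training transitions. A hypothetical numpy illustration with a squared-exponential kernel on toy one-step dynamics data:

```python
import numpy as np

rng = np.random.default_rng(3)

# One-step dynamics data for a toy system: x_next = sin(x) + noise
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(30)

def rbf(A, B, ell=1.0, sf=1.0):
    # squared-exponential kernel
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)

noise = 0.05
K = rbf(X, X) + noise ** 2 * np.eye(len(X))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

def gp_predict(Xs):
    Ks = rbf(Xs, X)
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(rbf(Xs, Xs)) - np.einsum('ij,ij->j', v, v)
    return mean, var

Xs = np.array([[0.0], [10.0]])   # inside vs. far outside the training data
mean, var = gp_predict(Xs)
# var[1] is near the prior variance: the model knows it has no data near 10
```

Propagating distributions in time, as the thesis requires for the Bayesian neural network, means feeding this predictive distribution (not just the mean) back in as the next input, which has no such closed form and needs approximation.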

Book Large-Scale Kernel Machines

Download or read book Large scale Kernel Machines written by Léon Bottou and published by MIT Press. This book was released on 2007 with total page 409 pages. Available in PDF, EPUB and Kindle. Book excerpt: Solutions for learning from large scale datasets, including kernel learning algorithms that scale linearly with the volume of the data and experiments carried out on realistically large datasets. Pervasive and networked computers have dramatically reduced the cost of collecting and distributing large datasets. In this context, machine learning algorithms that scale poorly could simply become irrelevant. We need learning algorithms that scale linearly with the volume of the data while maintaining enough statistical efficiency to outperform algorithms that simply process a random subset of the data. This volume offers researchers and engineers practical solutions for learning from large scale datasets, with detailed descriptions of algorithms and experiments carried out on realistically large datasets. At the same time it offers researchers information that can address the relative lack of theoretical grounding for many useful algorithms. After a detailed description of state-of-the-art support vector machine technology, an introduction of the essential concepts discussed in the volume, and a comparison of primal and dual optimization techniques, the book progresses from well-understood techniques to more novel and controversial approaches. Many contributors have made their code and data available online for further experimentation. Topics covered include fast implementations of known algorithms, approximations that are amenable to theoretical guarantees, and algorithms that perform well in practice but are difficult to analyze theoretically. 
Contributors Léon Bottou, Yoshua Bengio, Stéphane Canu, Eric Cosatto, Olivier Chapelle, Ronan Collobert, Dennis DeCoste, Ramani Duraiswami, Igor Durdanovic, Hans-Peter Graf, Arthur Gretton, Patrick Haffner, Stefanie Jegelka, Stephan Kanthak, S. Sathiya Keerthi, Yann LeCun, Chih-Jen Lin, Gaëlle Loosli, Joaquin Quiñonero-Candela, Carl Edward Rasmussen, Gunnar Rätsch, Vikas Chandrakant Raykar, Konrad Rieck, Vikas Sindhwani, Fabian Sinz, Sören Sonnenburg, Jason Weston, Christopher K. I. Williams, Elad Yom-Tov

Book Advanced Neural Network Based Computational Schemes for Robust Fault Diagnosis

Download or read book Advanced Neural Network Based Computational Schemes for Robust Fault Diagnosis written by Marcin Mrugalski and published by Springer. This book was released on 2013-08-04 with total page 196 pages. Available in PDF, EPUB and Kindle. Book excerpt: The present book is devoted to problems of adaptation of artificial neural networks to robust fault diagnosis schemes. It presents neural-network-based modelling and estimation techniques used for designing robust fault diagnosis schemes for non-linear dynamic systems. A part of the book focuses on fundamental issues such as architectures of dynamic neural networks, methods for designing neural networks and fault diagnosis schemes, as well as the importance of robustness. The book is of tutorial value and can be perceived as a good starting point for newcomers to this field. The book is also devoted to advanced schemes of description of neural model uncertainty. In particular, the methods of computation of neural network uncertainty with robust parameter estimation are presented. Moreover, a novel approach for system identification with the state-space GMDH neural network is delivered. All the concepts described in this book are illustrated by both simple academic illustrative examples and practical applications.

Book Mixture Density Networks for Distribution and Uncertainty Estimation

Download or read book Mixture Density Networks for Distribution and Uncertainty Estimation written by Axel Brando Guillaumes and published by . This book was released on 2017 with total page pages. Available in PDF, EPUB and Kindle. Book excerpt: Deep learning techniques have made neural networks the leading option for solving some computational problems, and they have been shown to produce state-of-the-art results in many fields such as computer vision, automatic speech recognition, natural language processing, and audio recognition. We may be tempted to use neural networks directly, as we know them nowadays, to make predictions and solve many problems; but when the decision to be taken is high-risk, for instance controlling a nuclear power plant or predicting the evolution of shares in the market, it is important to look for methods that allow us to add more information concerning the certainty of those predictions. This Master's thesis is divided into three parts. Firstly, we analyse the state of the art regarding Mixture Density Network models, which predict an entire probability distribution for the output, and we develop an implementation that solves many of the numerical stability problems that characterise this type of model. Secondly, as an initial solution to the uncertainty problems introduced above, we focus on extracting a confidence factor from the neural network outputs of a problem in which a prediction is only of interest if we have a minimum certainty about it. To do so, we compile the current literature methods for measuring uncertainty through Mixture Density Networks and implement all of them.
Consequently, we go into detail about the concept of uncertainty and see to what extent we are able to propose a solution, using neural network models, for the different aspects that this concept includes. Finally, the third part presents several proposals to measure the confidence factor obtained with the use of Mixture Density Networks for the proposed problem. In the end, our goals are achieved: we provide a stable implementation for all the problems we have proposed for Mixture Density Networks and publish it publicly in our GitHub repository[9]. We implement the state-of-the-art methods that allow us to obtain a confidence factor, and finally we propose a method that obtains the expected results regarding the parameters that represent the confidence factor.
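The numerical-stability problems this thesis addresses typically come from evaluating the mixture likelihood directly, where the component densities underflow; the standard fix is to work in log space with a log-sum-exp. A minimal sketch of a stable mixture-of-Gaussians negative log-likelihood (the quantity an MDN minimizes), with hypothetical mixture parameters:

```python
import numpy as np

def mdn_nll(y, log_pi, mu, log_sigma):
    """Stable negative log-likelihood of y under a Gaussian mixture.

    y:         targets, shape (N,)
    log_pi:    log mixture weights, shape (N, K)
    mu:        component means, shape (N, K)
    log_sigma: log component standard deviations, shape (N, K)
    """
    sigma = np.exp(log_sigma)
    # per-component log density of y (log weight + log Gaussian pdf)
    log_comp = (log_pi
                - 0.5 * np.log(2 * np.pi) - log_sigma
                - 0.5 * ((y[:, None] - mu) / sigma) ** 2)
    # log-sum-exp over components, stabilized by the row maximum
    m = log_comp.max(axis=1, keepdims=True)
    log_lik = m[:, 0] + np.log(np.exp(log_comp - m).sum(axis=1))
    return -log_lik.mean()

# A naive (non-log-space) evaluation would underflow on the second point;
# the log-space version stays finite.
y = np.array([0.0, 100.0])
log_pi = np.log(np.full((2, 2), 0.5))
mu = np.array([[-1.0, 1.0], [-1.0, 1.0]])
log_sigma = np.zeros((2, 2))
nll = mdn_nll(y, log_pi, mu, log_sigma)
```

Predicting `log_pi` and `log_sigma` (rather than the weights and standard deviations themselves) also keeps the network outputs unconstrained, which is the usual MDN parameterization.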

Book Database Systems for Advanced Applications

Download or read book Database Systems for Advanced Applications written by Shamkant B. Navathe and published by Springer. This book was released on 2016-03-24 with total page 560 pages. Available in PDF, EPUB and Kindle. Book excerpt: This two volume set LNCS 9642 and LNCS 9643 constitutes the refereed proceedings of the 21st International Conference on Database Systems for Advanced Applications, DASFAA 2016, held in Dallas, TX, USA, in April 2016. The 61 full papers presented were carefully reviewed and selected from a total of 183 submissions. The papers cover the following topics: crowdsourcing, data quality, entity identification, data mining and machine learning, recommendation, semantics computing and knowledge base, textual data, social networks, complex queries, similarity computing, graph databases, and miscellaneous, advanced applications.

Book Advances in Intelligent Data Analysis XVIII

Download or read book Advances in Intelligent Data Analysis XVIII written by Michael R. Berthold and published by Springer. This book was released on 2020-04-02 with total page 588 pages. Available in PDF, EPUB and Kindle. Book excerpt: This open access book constitutes the proceedings of the 18th International Conference on Intelligent Data Analysis, IDA 2020, held in Konstanz, Germany, in April 2020. The 45 full papers presented in this volume were carefully reviewed and selected from 114 submissions. Advancing Intelligent Data Analysis requires novel, potentially game-changing ideas. IDA’s mission is to promote ideas over performance: a solid motivation can be as convincing as exhaustive empirical evaluation.

Book Uncertainty Analysis of Artificial Neural Network (ANN) Approximated Function for Experimental Data Using Sequential Perturbation Method

Download or read book Uncertainty Analysis of Artificial Neural Network (ANN) Approximated Function for Experimental Data Using Sequential Perturbation Method written by Mohd Jukimi Joni and published by . This book was released on 2009 with total page 64 pages. Available in PDF, EPUB and Kindle. Book excerpt: This thesis describes a comparative study of uncertainty estimation for an unknown function using the sequential perturbation method with an Artificial Neural Network (ANN) approximated function. The objective of this project is to propose a new technique for estimating the uncertainty of an unknown function whose data are obtained from experiments or measurements. This uncertainty analysis can be applied to calculate uncertainty values for experimental data with no known functional form. The process of determining the uncertainty has six steps: selecting an experimental function, generating the experimental data, approximating the function with an ANN, calculating the uncertainty analytically by hand, applying the sequential perturbation method with the ANN, and finally determining the percentage error between the sequential perturbation method with the ANN and the analytical method. The uncertainty error of the sequential perturbation method without the ANN is 0.0510%, while the error of the sequential perturbation method with the ANN is 0.1559%. The value of the sequential perturbation (numerical) method with the ANN is then compared with the value of the analytical method to validate the data. The results suggest that the proposed combination of the sequential perturbation method with an ANN can be applied to any experiment, and that it calculates the propagation of uncertainty about as well as the analytical method.
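The sequential perturbation method referred to here is the standard numerical uncertainty-propagation technique: perturb one input at a time by its uncertainty, record the change in the result, and combine the changes in quadrature. A minimal sketch with an analytical cross-check on a hypothetical two-input function (in the thesis, `f` would be the ANN-approximated function):

```python
import numpy as np

def sequential_perturbation(f, x, u):
    """Propagate input uncertainties u through f by perturbing one
    input at a time and combining the responses in quadrature."""
    x = np.asarray(x, dtype=float)
    R0 = f(x)
    terms = []
    for i in range(len(x)):
        xp = x.copy(); xp[i] += u[i]
        xm = x.copy(); xm[i] -= u[i]
        # average the forward and backward perturbation responses
        terms.append(0.5 * ((f(xp) - R0) - (f(xm) - R0)))
    return np.sqrt(np.sum(np.square(terms)))

# Example: R = x * y, checked against the analytical propagation formula
f = lambda v: v[0] * v[1]
u_num = sequential_perturbation(f, [2.0, 3.0], [0.1, 0.2])
u_ana = np.sqrt((3.0 * 0.1) ** 2 + (2.0 * 0.2) ** 2)   # = 0.5
```

The appeal of the numerical version is exactly what the thesis exploits: it needs only function evaluations, so it works when `f` is a black box such as a trained ANN with no analytical derivative.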

Book Deep Neural Networks with Contextual Probabilistic Units

Download or read book Deep Neural Networks with Contextual Probabilistic Units written by Xinjie Fan and published by . This book was released on 2021 with total page 0 pages. Available in PDF, EPUB and Kindle. Book excerpt: Deep neural networks (NNs) have become ubiquitous and achieved state-of-the-art results in a wide variety of research fields. Unlike the traditional machine learning techniques that require hand-crafted feature extractors to transform raw data, deep learning methods are able to automatically learn useful representations by exploiting the data. Despite the great success of deep learning methods, there are still many challenges in front of us. In this thesis, we propose new contextual probabilistic units to make progress along three directions in deep learning, including uncertainty estimation, generalization, and optimization. Unlike traditional probabilistic models that learn a distribution of predictions, deep learning models, composed of deterministic mappings, often only give us point estimates of predictions, lacking a sense of uncertainty. Dropout is an effective probabilistic unit to estimate uncertainty for neural networks. However, the quality of uncertainty estimation depends heavily on the dropout probabilities. Existing methods treat dropout probabilities as global parameters shared across all data samples. We introduce contextual dropout, a sample-dependent dropout, where we consider parameterizing dropout probabilities as a function of input covariates. This generalization could greatly enhance the neural network's capability of modeling uncertainty and bridge the gap between traditional probabilistic models and deep neural networks. To obtain uncertainty estimation for attention neural networks, we propose Bayesian attention modules where the attention weights are related to continuous latent alignment random variables dependent on the contextual information and learned in a probabilistic manner. 
The whole training process can be made differentiable via the reparameterization trick. Our method is able to capture complicated probabilistic dependencies as well as obtain better uncertainty estimation than previous methods while maintaining scalability. Deep NNs learn the representations from data in an implicit way, making them prone to learning features that do not generalize across domains. We study the impact on domain generalization of transferring the training-domain statistics to the testing domain in the normalization layer. We propose a novel normalization approach to learn both the standardization and rescaling statistics via neural networks, transforming input features to useful contextual statistics. This new form of normalization can be viewed as a generic form of the traditional normalizations. The statistics are learned to be adaptive to the data coming from different domains, and hence improve the model generalization performance across domains. Stochastic gradient descent has achieved great success in optimizing deterministic neural networks. However, standard backpropagation no longer applies to the training process of neural networks with stochastic latent variables, and one often resorts to a REINFORCE gradient estimator, which has large variance. We address this issue on challenging contextual categorical sequence generation tasks, where the learning signal is noisy and/or sparse and the learning space is exponentially large. We adapt the ARSM estimator to our solution, using correlated Monte Carlo rollouts to reduce gradient variances. Our methods show significant reduction of gradient variance and consistently outperform related baselines.
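The dropout-based uncertainty estimation that contextual dropout generalizes can be sketched by keeping dropout active at test time (MC dropout) and reading the spread of repeated stochastic forward passes as uncertainty; contextual dropout would additionally make `p_drop` a function of the input rather than a global constant. A hypothetical numpy illustration with a fixed two-layer network:

```python
import numpy as np

rng = np.random.default_rng(4)

# A fixed, hypothetical two-layer network
W1 = rng.standard_normal((1, 64)) / 8
W2 = rng.standard_normal((64, 1)) / 8

def forward(x, p_drop=0.2, stochastic=True):
    h = np.maximum(x @ W1, 0.0)                  # ReLU hidden layer
    if stochastic:                               # dropout stays ON at test time
        mask = rng.uniform(size=h.shape) > p_drop
        h = h * mask / (1 - p_drop)              # inverted dropout scaling
    return h @ W2

x = np.array([[0.5]])
# MC dropout: T stochastic passes; mean = prediction, spread = uncertainty
T = 200
passes = np.stack([forward(x) for _ in range(T)])
pred = passes.mean(axis=0)
uncertainty = passes.std(axis=0)
```

As the dissertation notes, the quality of this uncertainty estimate hinges on `p_drop`, which is the motivation for learning it per sample from the input covariates.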

Book Enhancing Deep Learning with Bayesian Inference

Download or read book Enhancing Deep Learning with Bayesian Inference written by Matt Benatan and published by Packt Publishing Ltd. This book was released on 2023-06-30 with total page 386 pages. Available in PDF, EPUB and Kindle. Book excerpt: Develop Bayesian Deep Learning models to help make your own applications more robust. Key Features Gain insights into the limitations of typical neural networks Acquire the skill to cultivate neural networks capable of estimating uncertainty Discover how to leverage uncertainty to develop more robust machine learning systems Book Description Deep learning has an increasingly significant impact on our lives, from suggesting content to playing a key role in mission- and safety-critical applications. As the influence of these algorithms grows, so does the concern for the safety and robustness of the systems which rely on them. Simply put, typical deep learning methods do not know when they don't know. The field of Bayesian Deep Learning contains a range of methods for approximate Bayesian inference with deep networks. These methods help to improve the robustness of deep learning systems as they tell us how confident they are in their predictions, allowing us to take more care in how we incorporate model predictions within our applications. Through this book, you will be introduced to the rapidly growing field of uncertainty-aware deep learning, developing an understanding of the importance of uncertainty estimation in robust machine learning systems. You will learn about a variety of popular Bayesian Deep Learning methods, and how to implement these through practical Python examples covering a range of application scenarios. By the end of the book, you will have a good understanding of Bayesian Deep Learning and its advantages, and you will be able to develop Bayesian Deep Learning models for safer, more robust deep learning systems. 
What you will learn:
  • Understand the advantages and disadvantages of Bayesian inference and deep learning
  • Understand the fundamentals of Bayesian Neural Networks
  • Understand the differences between key BNN implementations/approximations
  • Understand the advantages of probabilistic DNNs in production contexts
  • Implement a variety of BDL methods in Python code
  • Apply BDL methods to real-world problems
  • Understand how to evaluate BDL methods and choose the best method for a given task
  • Learn how to deal with unexpected data in real-world deep learning applications
Who this book is for: This book will cater to researchers and developers looking for ways to develop more robust deep learning models through probabilistic deep learning. You're expected to have a solid understanding of the fundamentals of machine learning and probability, along with prior experience working with machine learning and deep learning models.

Book Advances in Neural Information Processing Systems 9

Download or read book Advances in Neural Information Processing Systems 9 written by Michael C. Mozer and published by MIT Press. This book was released on 1997 with total page 1128 pages. Available in PDF, EPUB and Kindle. Book excerpt: The annual conference on Neural Information Processing Systems (NIPS) is the flagship conference on neural computation. It draws preeminent academic researchers from around the world and is widely considered to be a showcase conference for new developments in network algorithms and architectures. The broad range of interdisciplinary research areas represented includes neural networks and genetic algorithms, cognitive science, neuroscience and biology, computer science, AI, applied mathematics, physics, and many branches of engineering. Only about 30% of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. All of the papers presented appear in these proceedings.

Book Ensemble Methods

Download or read book Ensemble Methods written by Zhi-Hua Zhou and published by CRC Press. This book was released on 2012-06-06 with total page 238 pages. Available in PDF, EPUB and Kindle. Book excerpt: An up-to-date, self-contained introduction to a state-of-the-art machine learning approach, Ensemble Methods: Foundations and Algorithms shows how these accurate methods are used in real-world tasks. It gives you the necessary groundwork to carry out further research in this evolving field. After presenting background and terminology, the book covers the main algorithms and theories, including Boosting, Bagging, Random Forest, averaging and voting schemes, the Stacking method, mixture of experts, and diversity measures. It also discusses multiclass extension, noise tolerance, error-ambiguity and bias-variance decompositions, and recent progress in information theoretic diversity. Moving on to more advanced topics, the author explains how to achieve better performance through ensemble pruning and how to generate better clustering results by combining multiple clusterings. In addition, he describes developments of ensemble methods in semi-supervised learning, active learning, cost-sensitive learning, class-imbalance learning, and comprehensibility enhancement.
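Bagging, one of the core algorithms the book covers, can be sketched in a few lines: fit one base learner per bootstrap resample of the data and average the members' predictions; the members' disagreement also serves as a simple uncertainty signal, which ties ensembles back to the uncertainty-estimation theme of this listing. A hypothetical sketch using quadratic least-squares fits as the base learner:

```python
import numpy as np

rng = np.random.default_rng(5)

# Noisy quadratic data
x = rng.uniform(-1, 1, 100)
y = x ** 2 + 0.1 * rng.standard_normal(100)

# Bagging: fit one base learner per bootstrap resample
B = 50
members = []
for _ in range(B):
    idx = rng.integers(0, len(x), len(x))      # sample with replacement
    members.append(np.polyfit(x[idx], y[idx], 2))

x_test = np.linspace(-1, 1, 11)
preds = np.stack([np.polyval(m, x_test) for m in members])
ensemble_mean = preds.mean(axis=0)   # bagged prediction
disagreement = preds.std(axis=0)     # member disagreement as uncertainty
```

Averaging reduces the variance of the unstable base learner, which is the mechanism behind bagging's accuracy gains discussed in the book; the error-ambiguity decomposition it covers makes this trade-off precise.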