Download or read book Discrete Choice Methods with Simulation written by Kenneth Train and published by Cambridge University Press. This book was released on 2009-07-06 with total page 399 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book describes the new generation of discrete choice methods, focusing on the many advances that are made possible by simulation. Researchers use these statistical methods to examine the choices that consumers, households, firms, and other agents make. Each of the major models is covered: logit, generalized extreme value, or GEV (including nested and cross-nested logits), probit, and mixed logit, plus a variety of specifications that build on these basics. Simulation-assisted estimation procedures are investigated and compared, including maximum simulated likelihood, method of simulated moments, and method of simulated scores. Procedures for drawing from densities are described, including variance reduction techniques such as antithetics and Halton draws. Recent advances in Bayesian procedures are explored, including the use of the Metropolis-Hastings algorithm and its variant, Gibbs sampling. The second edition adds chapters on endogeneity and expectation-maximization (EM) algorithms. No other book incorporates all these fields, which have arisen in the past 25 years. The procedures are applicable in many fields, including energy, transportation, environmental studies, health, labor, and marketing.
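As a rough illustration of the simulation-assisted estimation the book covers, the sketch below forms a simulated log-likelihood for a mixed logit with one normally distributed coefficient, averaging choice probabilities over Halton draws. The data, parameter values, and helper names are invented for illustration; this is a minimal sketch, not the book's code.

```python
import numpy as np
from scipy.stats import norm

def halton(n, base=3):
    """First n points of the one-dimensional Halton sequence in a given prime base."""
    seq = np.zeros(n)
    for i in range(1, n + 1):
        f, idx, x = 1.0, i, 0.0
        while idx > 0:
            f /= base
            x += f * (idx % base)
            idx //= base
        seq[i - 1] = x
    return seq

def simulated_loglik(params, X, y, draws):
    """Simulated log-likelihood for a mixed logit with one random coefficient
    beta_r = b + s * draw_r; choice probabilities are averaged over the draws."""
    b, s = params
    R = len(draws)
    probs = np.zeros(len(y))
    for r in range(R):
        beta = b + s * draws[r]
        util = beta * X                                  # utilities, shape (N, J)
        expu = np.exp(util - util.max(axis=1, keepdims=True))
        p = expu / expu.sum(axis=1, keepdims=True)
        probs += p[np.arange(len(y)), y] / R             # average probability of the chosen alternative
    return np.log(probs).sum()

# Toy data: 200 individuals, 3 alternatives, one attribute per alternative.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = rng.integers(0, 3, size=200)
draws = norm.ppf(halton(100))                            # Halton points mapped to standard normal draws
print(simulated_loglik((1.0, 0.5), X, y, draws))
```

Maximising this function over (b, s), for example by passing its negative to scipy.optimize.minimize, would give a maximum simulated likelihood estimate of the kind the book analyses.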
Download or read book Modeling Ordered Choices written by William H. Greene and published by Cambridge University Press. This book was released on 2010-04-08 with total page 383 pages. Available in PDF, EPUB and Kindle. Book excerpt: It is increasingly common for analysts to seek out the opinions of individuals and organizations using attitudinal scales such as degree of satisfaction or importance attached to an issue. Examples include levels of obesity, seriousness of a health condition, attitudes towards service levels, opinions on products, voting intentions, and the degree of clarity of contracts. Ordered choice models provide a relevant methodology for capturing the sources of influence that explain the choice made amongst a set of ordered alternatives. The methods have evolved to a level of sophistication that can allow for heterogeneity in the threshold parameters, in the explanatory variables (through random parameters), and in the decomposition of the residual variance. This book brings together contributions in ordered choice modeling from a number of disciplines, synthesizing developments over the last fifty years, and suggests useful extensions to account for the wide range of sources of influence on choice.
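A minimal sketch of the kind of model the book studies: an ordered logit log-likelihood with threshold (cutpoint) parameters. The covariates, categories, and values below are made up for illustration and do not come from the book.

```python
import numpy as np
from scipy.special import expit

def ordered_logit_loglik(beta, cutpoints, X, y):
    """Ordered logit log-likelihood: P(y = j | x) = F(mu_j - x'beta) - F(mu_{j-1} - x'beta),
    where F is the logistic CDF and mu_0 = -inf < mu_1 < ... < mu_J = +inf."""
    mu = np.concatenate(([-np.inf], np.sort(cutpoints), [np.inf]))
    xb = X @ beta
    upper = expit(mu[y + 1] - xb)
    lower = expit(mu[y] - xb)
    return np.sum(np.log(upper - lower))

# Toy data: 500 respondents, 2 covariates, 4 ordered response categories (0..3).
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = rng.integers(0, 4, size=500)
print(ordered_logit_loglik(np.array([0.5, -0.2]), np.array([-1.0, 0.0, 1.0]), X, y))
```

Extensions of the kind the book surveys (random thresholds, random coefficients, heteroskedastic variance) build on exactly this likelihood.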
Download or read book Bayesian Networks written by Olivier Pourret and published by John Wiley & Sons. This book was released on 2008-04-30 with total page 446 pages. Available in PDF, EPUB and Kindle. Book excerpt: Bayesian Networks, the result of the convergence of artificial intelligence with statistics, are growing in popularity. Their versatility and modelling power are now employed across a variety of fields for the purposes of analysis, simulation, prediction and diagnosis. This book provides a general introduction to Bayesian networks, defining and illustrating the basic concepts with pedagogical examples and twenty real-life case studies drawn from a range of fields including medicine, computing, natural sciences and engineering. Designed to help analysts, engineers, scientists and professionals taking part in complex decision processes to successfully implement Bayesian networks, this book equips readers with proven methods to generate, calibrate, evaluate and validate Bayesian networks. The book: Provides the tools to overcome common practical challenges such as the treatment of missing input data, interaction with experts and decision makers, and determination of the optimal granularity and size of the model. Highlights the strengths of Bayesian networks whilst also presenting a discussion of their limitations. Compares Bayesian networks with other modelling techniques such as neural networks, fuzzy logic and fault trees. Describes, for ease of comparison, the main features of the major Bayesian network software packages: Netica, Hugin, Elvira and Discoverer, from the point of view of the user. Offers a historical perspective on the subject and analyses future directions for research. Written by leading experts with practical experience of applying Bayesian networks in finance, banking, medicine, robotics, civil engineering, geology, geography, genetics, forensic science, ecology, and industry, the book has much to offer both practitioners and researchers involved in statistical analysis or modelling in any of these fields.
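To make the basic concepts concrete, here is a minimal sketch of exact inference by enumeration in a three-node network (the familiar rain/sprinkler/wet-grass example); the probability tables are illustrative and are not drawn from the book's case studies.

```python
import itertools

# Conditional probability tables for a toy network Rain -> Sprinkler, {Rain, Sprinkler} -> GrassWet.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},    # P(Sprinkler | Rain)
               False: {True: 0.40, False: 0.60}}
P_wet = {(True, True): 0.99, (True, False): 0.90,  # P(GrassWet=True | Sprinkler, Rain)
         (False, True): 0.80, (False, False): 0.00}

def joint(rain, sprinkler, wet):
    """Joint probability factorised along the network structure."""
    p_wet_true = P_wet[(sprinkler, rain)]
    return (P_rain[rain]
            * P_sprinkler[rain][sprinkler]
            * (p_wet_true if wet else 1.0 - p_wet_true))

def posterior_rain_given_wet():
    """P(Rain = True | GrassWet = True) by brute-force enumeration over Sprinkler."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in itertools.product((True, False), repeat=2))
    return num / den

print(round(posterior_rain_given_wet(), 4))   # ~0.3577 for these tables
```

Enumeration scales poorly, which is why the book's case studies rely on dedicated inference engines such as those listed above, but the factorisation being exploited is the same.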
Download or read book Approximate Dynamic Programming written by Warren B. Powell and published by John Wiley & Sons. This book was released on 2007-10-05 with total page 487 pages. Available in PDF, EPUB and Kindle. Book excerpt: A complete and accessible introduction to the real-world applications of approximate dynamic programming With the growing levels of sophistication in modern-day operations, it is vital for practitioners to understand how to approach, model, and solve complex industrial problems. Approximate Dynamic Programming is a result of the author's decades of experience working in large industrial settings to develop practical and high-quality solutions to problems that involve making decisions in the presence of uncertainty. This groundbreaking book uniquely integrates four distinct disciplines—Markov decision processes, mathematical programming, simulation, and statistics—to demonstrate how to successfully model and solve a wide range of real-life problems using the techniques of approximate dynamic programming (ADP). The reader is introduced to the three curses of dimensionality that impact complex problems and is also shown how the post-decision state variable allows for the use of classical algorithmic strategies from operations research to treat complex stochastic optimization problems. Designed as an introduction and assuming no prior training in dynamic programming of any form, Approximate Dynamic Programming contains dozens of algorithms that are intended to serve as a starting point in the design of practical solutions for real problems. The book provides detailed coverage of implementation challenges including: modeling complex sequential decision processes under uncertainty, identifying robust policies, designing and estimating value function approximations, choosing effective stepsize rules, and resolving convergence issues. With a focus on modeling and algorithms in conjunction with the language of mainstream operations research, artificial intelligence, and control theory, Approximate Dynamic Programming: Models complex, high-dimensional problems in a natural and practical way, drawing on years of industrial projects Introduces and emphasizes the power of estimating a value function around the post-decision state, allowing solution algorithms to be broken down into three fundamental steps: classical simulation, classical optimization, and classical statistics Presents a thorough discussion of recursive estimation, including fundamental theory and a number of issues that arise in the development of practical algorithms Offers a variety of methods for approximating dynamic programs that have appeared in previous literature, but that have never been presented in the coherent format of a book Motivated by examples from modern-day operations research, Approximate Dynamic Programming is an accessible introduction to dynamic modeling and is also a valuable guide for the development of high-quality solutions to problems that exist in operations research and engineering. The clear and precise presentation of the material makes this an appropriate text for advanced undergraduate and beginning graduate courses, while also serving as a reference for researchers and practitioners. A companion Web site is available for readers, which includes additional exercises, solutions to exercises, and data sets to reinforce the book's main concepts.
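A minimal sketch of the idea of learning a value function around the post-decision state, here for a toy inventory problem with a smoothed (stepsize ALPHA) update. All names and numbers are invented for illustration; this is not the author's code, only a rough rendering of the forward-pass scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
MAX_INV, PRICE, COST, HOLD, GAMMA, ALPHA = 20, 6.0, 4.0, 0.1, 0.9, 0.05
V = np.zeros(MAX_INV + 1)        # value estimate indexed by post-decision inventory level

post = 10                        # post-decision state: stock on hand after ordering
for n in range(20000):
    demand = rng.poisson(5)                       # exogenous information
    pre = max(post - demand, 0)                   # next pre-decision state
    revenue = PRICE * min(post, demand)
    # Decision: order quantity maximising immediate contribution plus the
    # discounted value of the resulting post-decision state.
    candidates = [(-COST * x - HOLD * (pre + x) + GAMMA * V[pre + x], x)
                  for x in range(MAX_INV + 1 - pre)]
    vhat, order = max(candidates)
    # Smoothed update of the value of the *previous* post-decision state.
    V[post] = (1 - ALPHA) * V[post] + ALPHA * (revenue + vhat)
    post = pre + order

print(np.round(V, 1))
```

The one-step maximisation is a deterministic optimization problem because the post-decision value estimate already stands in for the expectation over demand, which is the simplification the post-decision state is meant to buy.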
Download or read book Current Index to Statistics Applications Methods and Theory written by and published by . This book was released on 1994 with total page 788 pages. Available in PDF, EPUB and Kindle. Book excerpt: The Current Index to Statistics (CIS) is a bibliographic index of publications in statistics, probability, and related fields.
Download or read book Microeconometrics written by A. Colin Cameron and published by Cambridge University Press. This book was released on 2005-05-09 with total page 1058 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book provides the most comprehensive treatment to date of microeconometrics, the analysis of individual-level data on the economic behavior of individuals or firms using regression methods for cross section and panel data. The book is oriented to the practitioner. A basic understanding of the linear regression model with matrix algebra is assumed. The text can be used for a microeconometrics course, typically a second-year economics PhD course; for data-oriented applied microeconometrics field courses; and as a reference work for graduate students and applied researchers who wish to fill in gaps in their toolkit. Distinguishing features of the book include emphasis on nonlinear models and robust inference, simulation-based estimation, and problems of complex survey data. The book makes frequent use of numerical examples based on generated data to illustrate the key models and methods. More substantially, it systematically integrates into the text empirical illustrations based on seven large and exceptionally rich data sets.
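As a small illustration of the robust inference the book emphasises, the sketch below computes heteroskedasticity-robust (sandwich) standard errors for OLS on generated data; the data-generating process and all values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])                     # regressors with an intercept
y = 1.0 + 2.0 * x + rng.normal(scale=np.exp(0.5 * x))    # heteroskedastic errors

beta = np.linalg.solve(X.T @ X, X.T @ y)                 # OLS estimate
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * resid[:, None] ** 2)                   # sum_i e_i^2 x_i x_i'
robust_cov = XtX_inv @ meat @ XtX_inv                    # White sandwich estimator
classic_cov = XtX_inv * resid.var(ddof=2)                # homoskedastic covariance, for comparison

print("robust se:   ", np.sqrt(np.diag(robust_cov)))
print("classical se:", np.sqrt(np.diag(classic_cov)))
```

On data like these the two sets of standard errors diverge noticeably, which is the practical point of defaulting to robust inference.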
Download or read book The Limits of Inference without Theory written by Kenneth I. Wolpin and published by MIT Press. This book was released on 2013-04-26 with total page 197 pages. Available in PDF, EPUB and Kindle. Book excerpt: The role of theory in ex ante policy evaluations and the limits that eschewing theory places on inference. In this rigorous and well-crafted work, Kenneth Wolpin examines the role of theory in inferential empirical work in economics and the social sciences in general—that is, any research that uses raw data to go beyond the mere statement of fact or the tabulation of statistics. He considers in particular the limits that eschewing the use of theory places on inference. Wolpin finds that the absence of theory in inferential work that addresses microeconomic issues is pervasive. That theory is unnecessary for inference is exemplified by the expression “let the data speak for themselves.” This approach is often called “reduced form.” A more nuanced view is based on the use of experiments or quasi-experiments to draw inferences. Atheoretical approaches stand in contrast to what is known as the structuralist approach, which requires that a researcher specify an explicit model of economic behavior—that is, a theory. Wolpin offers a rigorous examination of both structuralist and nonstructuralist approaches. He first considers ex ante policy evaluation, highlighting the role of theory in the implementation of parametric and nonparametric estimation strategies. He illustrates these strategies with two examples, a wage tax and a school attendance subsidy, and summarizes the results from applications. He then presents a number of examples that illustrate the limits of inference without theory: the effect of unemployment benefits on unemployment duration; the effect of public welfare on women's labor market and demographic outcomes; the effect of school attainment on earnings; and a famous field experiment in education dealing with class size. Placing each example within the context of the broader literature, he contrasts them to recent work that relies on theory for inference.
Download or read book Accelerating Monte Carlo methods for Bayesian inference in dynamical models written by Johan Dahlin and published by Linköping University Electronic Press. This book was released on 2016-03-22 with total page 139 pages. Available in PDF, EPUB and Kindle. Book excerpt: Making decisions and predictions from noisy observations are two important and challenging problems in many areas of society. Some examples of applications are recommendation systems for online shopping and streaming services, connecting genes with certain diseases and modelling climate change. In this thesis, we make use of Bayesian statistics to construct probabilistic models given prior information and historical data, which can be used for decision support and predictions. The main obstacle with this approach is that it often results in mathematical problems lacking analytical solutions. To cope with this, we make use of statistical simulation algorithms known as Monte Carlo methods to approximate the intractable solution. These methods enjoy well-understood statistical properties but are often computationally prohibitive to employ. The main contribution of this thesis is the exploration of different strategies for accelerating inference methods based on sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). That is, strategies for reducing the computational effort while keeping or improving the accuracy. A major part of the thesis is devoted to proposing such strategies for the MCMC method known as the particle Metropolis-Hastings (PMH) algorithm. We investigate two strategies: (i) introducing estimates of the gradient and Hessian of the target to better tailor the algorithm to the problem and (ii) introducing a positive correlation between the point-wise estimates of the target. Furthermore, we propose an algorithm based on the combination of SMC and Gaussian process optimisation, which can provide reasonable estimates of the posterior but with a significant decrease in computational effort compared with PMH. Moreover, we explore the use of sparseness priors for approximate inference in over-parametrised mixed effects models and autoregressive processes. This can potentially be a practical strategy for inference in the big data era. Finally, we propose a general method for increasing the accuracy of the parameter estimates in non-linear state space models by applying a designed input signal. Should the Riksbank raise or lower the repo rate at its next meeting in order to reach the inflation target? Which genes are associated with a particular disease? How can Netflix and Spotify know which films and which music I will want to watch or listen to next? These three problems are examples of questions where statistical models can be useful for providing guidance and a basis for decisions. Statistical models combine theoretical knowledge about, for example, the Swedish economic system with historical data to produce forecasts of future events. These forecasts can then be used to evaluate, for example, what would happen to inflation in Sweden if unemployment falls, or how the value of my pension savings changes when the Stockholm stock exchange crashes. Applications such as these, and many others, make statistical models important for many parts of society. One way of building statistical models is to continuously update a model as more information is collected.
This approach is called Bayesian statistics and is particularly useful when good prior insight into the model is available, or when only a small amount of historical data exists to build the model from. A drawback of Bayesian statistics is that the computations required to update the model with the new information are often very complicated. In such situations one can instead simulate the outcome of millions of variants of the model and compare these against the historical observations at hand. One can then average over the variants that gave the best results to arrive at a final model. It can therefore sometimes take days or weeks to produce a model. The problem becomes particularly severe when more advanced models are used, models that could give better forecasts but take too long to build. In this thesis we use a number of different strategies to ease or improve these simulations. For example, we propose taking more insights about the system into account and thereby reducing the number of model variants that need to be examined: certain models can be ruled out from the start because we have a good idea of roughly what a good model should look like. We can also modify the simulation so that it moves more easily between different types of models, so that the space of all possible models is explored more efficiently. We propose a number of combinations and modifications of existing methods to speed up the fitting of the model to the observations, and we show that the computation time can in some cases be reduced from a few days to about an hour. Hopefully this will in the future make it possible to use more advanced models in practice, which in turn will lead to better forecasts and decisions.
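A minimal random-walk Metropolis-Hastings sketch for a toy Gaussian model, assuming an exact log-likelihood; the particle Metropolis-Hastings algorithm studied in the thesis replaces this exact evaluation with a noisy particle-filter estimate. Data and tuning values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.5, scale=1.0, size=50)       # synthetic observations

def log_target(theta):
    """Log posterior: N(0, 10^2) prior times a unit-variance Gaussian likelihood."""
    log_prior = -0.5 * theta ** 2 / 100.0
    log_lik = -0.5 * np.sum((data - theta) ** 2)
    return log_prior + log_lik

theta, chain, step = 0.0, [], 0.3
for m in range(5000):
    proposal = theta + step * rng.normal()            # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_target(proposal) - log_target(theta):
        theta = proposal                              # accept; otherwise keep the current value
    chain.append(theta)

print("posterior mean ~", np.mean(chain[1000:]))      # discard burn-in
```

The acceleration strategies in the thesis, such as correlating the noisy likelihood estimates across iterations or using gradient and Hessian information in the proposal, modify exactly this accept/reject loop.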
Download or read book Introduction to Small Area Estimation Techniques written by Asian Development Bank and published by Asian Development Bank. This book was released on 2020-05-01 with total page 152 pages. Available in PDF, EPUB and Kindle. Book excerpt: This guide to small area estimation aims to help users compile more reliable granular or disaggregated data in cost-effective ways. It explains small area estimation techniques with examples of how the easily accessible R analytical platform can be used to implement them, particularly to estimate indicators on poverty, employment, and health outcomes. The guide is intended for staff of national statistics offices and for other development practitioners. It aims to help them to develop and implement targeted socioeconomic policies to ensure that the vulnerable segments of societies are not left behind, and to monitor progress toward the Sustainable Development Goals.
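As a rough illustration of the area-level shrinkage idea behind many small area estimators, the sketch below combines noisy direct estimates with a regression ("synthetic") prediction using Fay-Herriot-type weights. The guide itself works in R; this Python version, with made-up data and an assumed model variance, only illustrates the arithmetic.

```python
import numpy as np

rng = np.random.default_rng(7)
m = 12
x = rng.uniform(0, 1, size=m)                  # one auxiliary variable per area
psi = rng.uniform(0.02, 0.2, size=m)           # known sampling variances of the direct estimates
true_rate = 0.1 + 0.5 * x
direct = true_rate + rng.normal(scale=np.sqrt(psi))   # noisy direct survey estimates

sigma2_u = 0.01                                # assumed area-level model variance
X = np.column_stack([np.ones(m), x])
w = 1.0 / (sigma2_u + psi)
beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * direct))   # GLS fit of the linking model
synthetic = X @ beta                           # regression ("synthetic") prediction
gamma = sigma2_u / (sigma2_u + psi)            # shrinkage weight per area
estimate = gamma * direct + (1 - gamma) * synthetic

for d, s, e in zip(direct, synthetic, estimate):
    print(f"direct={d:5.2f}  synthetic={s:5.2f}  small-area estimate={e:5.2f}")
```

Areas with large sampling variance lean on the regression prediction, while well-sampled areas keep most of their direct estimate, which is the core trade-off the guide's R workflows automate.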
Download or read book Reinforcement Learning and Dynamic Programming Using Function Approximators written by Lucian Busoniu and published by CRC Press. This book was released on 2017-07-28 with total page 280 pages. Available in PDF, EPUB and Kindle. Book excerpt: From household appliances to applications in robotics, engineered systems involving complex dynamics can only be as effective as the algorithms that control them. While Dynamic Programming (DP) has provided researchers with a way to optimally solve decision and control problems involving complex dynamic systems, its practical value was limited by algorithms that lacked the capacity to scale up to realistic problems. However, in recent years, dramatic developments in Reinforcement Learning (RL), the model-free counterpart of DP, changed our understanding of what is possible. Those developments led to the creation of reliable methods that can be applied even when a mathematical model of the system is unavailable, allowing researchers to solve challenging control problems in engineering, as well as in a variety of other disciplines, including economics, medicine, and artificial intelligence. Reinforcement Learning and Dynamic Programming Using Function Approximators provides a comprehensive and unparalleled exploration of the field of RL and DP. With a focus on continuous-variable problems, this seminal text details essential developments that have substantially altered the field over the past decade. In its pages, pioneering experts provide a concise introduction to classical RL and DP, followed by an extensive presentation of the state-of-the-art and novel methods in RL and DP with approximation. Combining algorithm development with theoretical guarantees, they elaborate on their work with illustrative examples and insightful comparisons. Three individual chapters are dedicated to representative algorithms from each of the major classes of techniques: value iteration, policy iteration, and policy search. The features and performance of these algorithms are highlighted in extensive experimental studies on a range of control applications. The recent development of applications involving complex systems has led to a surge of interest in RL and DP methods and the subsequent need for a quality resource on the subject. For graduate students and others new to the field, this book offers a thorough introduction to both the basics and emerging methods. And for those researchers and practitioners working in the fields of optimal and adaptive control, machine learning, artificial intelligence, and operations research, this resource offers a combination of practical algorithms, theoretical analysis, and comprehensive examples that they will be able to adapt and apply to their own work. Access the authors' website at www.dcsc.tudelft.nl/rlbook/ for additional material, including computer code used in the studies and information concerning new developments.
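A compact sketch of value iteration with a linear function approximator (fitted Q-iteration) on an invented one-dimensional control task: drive the state towards zero with actions -0.1 or +0.1. The task, basis functions, and parameters are assumptions for illustration, not examples from the book.

```python
import numpy as np

rng = np.random.default_rng(0)
ACTIONS, GAMMA = np.array([-0.1, 0.1]), 0.95
phi = lambda s: np.column_stack([np.ones_like(s), s, s ** 2])   # quadratic basis features

# Batch of random transitions (s, action index, reward, next state).
s = rng.uniform(-1, 1, size=2000)
a_idx = rng.integers(0, 2, size=2000)
s_next = np.clip(s + ACTIONS[a_idx], -1, 1)
r = -s_next ** 2

W = np.zeros((2, 3))                        # one weight vector per discrete action
for it in range(50):                        # fitted Q-iteration sweeps
    q_next = phi(s_next) @ W.T              # approximate Q(s', a') for both actions
    targets = r + GAMMA * q_next.max(axis=1)
    for a in range(2):                      # refit each action's weights by least squares
        mask = a_idx == a
        W[a], *_ = np.linalg.lstsq(phi(s[mask]), targets[mask], rcond=None)

greedy = lambda state: ACTIONS[np.argmax(phi(np.array([state])) @ W.T)]
print("greedy action at s=0.7:", greedy(0.7))   # expect -0.1, i.e. move towards zero
```

The book's value-iteration, policy-iteration, and policy-search chapters all revolve around loops of this shape, with richer approximators and convergence guarantees in place of this bare least-squares refit.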
Download or read book Handbook of Labor Economics written by Orley Ashenfelter. This book was released in 1986. Available in PDF, EPUB and Kindle.
Download or read book Bayesian Reasoning and Machine Learning written by David Barber and published by Cambridge University Press. This book was released on 2012-02-02 with total page 739 pages. Available in PDF, EPUB and Kindle. Book excerpt: A practical introduction perfect for final-year undergraduate and graduate students without a solid background in linear algebra and calculus.
Download or read book Biological Sequence Analysis written by Richard Durbin and published by Cambridge University Press. This book was released on 1998-04-23 with total page 372 pages. Available in PDF, EPUB and Kindle. Book excerpt: Probabilistic models are becoming increasingly important in analysing the huge amount of data being produced by large-scale DNA-sequencing efforts such as the Human Genome Project. For example, hidden Markov models are used for analysing biological sequences, linguistic-grammar-based probabilistic models for identifying RNA secondary structure, and probabilistic evolutionary models for inferring phylogenies of sequences from different organisms. This book gives a unified, up-to-date and self-contained account, with a Bayesian slant, of such methods, and, more generally, of probabilistic methods of sequence analysis. Written by an interdisciplinary team of authors, it aims to be accessible to molecular biologists, computer scientists, and mathematicians with no formal knowledge of the other fields, and at the same time to present the state of the art in this new and highly important field.
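As a small illustration of the hidden Markov model machinery the book develops, here is a log-space Viterbi decoder for a two-state DNA model in the spirit of the CpG-island example; the transition and emission probabilities are illustrative, not the book's.

```python
import numpy as np

states = ["island", "background"]
symbols = {"A": 0, "C": 1, "G": 2, "T": 3}
start = np.log(np.array([0.5, 0.5]))
trans = np.log(np.array([[0.9, 0.1],              # P(next state | current state)
                         [0.1, 0.9]]))
emit = np.log(np.array([[0.1, 0.4, 0.4, 0.1],     # island state: C/G rich
                        [0.3, 0.2, 0.2, 0.3]]))   # background state

def viterbi(seq):
    """Most probable state path for an observed sequence (Viterbi in log space)."""
    obs = [symbols[c] for c in seq]
    V = start + emit[:, obs[0]]                   # log-prob of the best path ending in each state
    back = []
    for o in obs[1:]:
        scores = V[:, None] + trans               # scores[i, j]: best path ending in i, then i -> j
        back.append(scores.argmax(axis=0))        # best predecessor for each state
        V = scores.max(axis=0) + emit[:, o]
    path = [int(V.argmax())]
    for ptr in reversed(back):                    # trace the path backwards
        path.append(int(ptr[path[-1]]))
    return [states[i] for i in reversed(path)]

print(viterbi("ATATGCGCGCGCATAT"))
```

The forward, backward, and Baum-Welch algorithms treated in the book reuse the same dynamic-programming lattice, replacing the maximisation with summation or expectation steps.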
Download or read book Handbook of Marketing Decision Models written by Berend Wierenga and published by Springer Science & Business Media. This book was released on 2008-09-05 with total page 621 pages. Available in PDF, EPUB and Kindle. Book excerpt: Marketing models are a core component of the marketing discipline. Recent developments in marketing models have been incredibly fast, with information technology (e.g., the Internet), online marketing (e-commerce) and customer relationship management (CRM) creating radical changes in the way companies interact with their customers. This has created completely new breeds of marketing models, but major progress has also taken place in existing types of marketing models. Handbook of Marketing Decision Models presents the state of the art in marketing decision models. The book deals with new modeling areas, such as customer relationship management, customer value and online marketing, as well as recent developments in established areas such as advertising, sales promotions, sales management, and competition. It also covers new developments in consumer decision models, models for return on marketing, marketing management support systems, and special techniques such as time series and neural nets.
Download or read book Reinforcement Learning second edition written by Richard S. Sutton and published by MIT Press. This book was released on 2018-11-13 with total page 549 pages. Available in PDF, EPUB and Kindle. Book excerpt: The significantly expanded and updated new edition of a widely used text on reinforcement learning, one of the most active research areas in artificial intelligence. Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives while interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the field's key ideas and algorithms. This second edition has been significantly expanded and updated, presenting new topics and updating coverage of other topics. Like the first edition, this second edition focuses on core online learning algorithms, with the more mathematical material set off in shaded boxes. Part I covers as much of reinforcement learning as possible without going beyond the tabular case for which exact solutions can be found. Many algorithms presented in this part are new to the second edition, including UCB, Expected Sarsa, and Double Learning. Part II extends these ideas to function approximation, with new sections on such topics as artificial neural networks and the Fourier basis, and offers expanded treatment of off-policy learning and policy-gradient methods. Part III has new chapters on reinforcement learning's relationships to psychology and neuroscience, as well as an updated case-studies chapter including AlphaGo and AlphaGo Zero, Atari game playing, and IBM Watson's wagering strategy. The final chapter discusses the future societal impacts of reinforcement learning.
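A minimal tabular Expected Sarsa sketch, one of the algorithms new to this edition, on an invented corridor task: states 0 to 5, start at 0, +1 reward for reaching state 5, actions move left or right. The task and parameter values are assumptions for illustration; the code is not from the book.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, ACTIONS, ALPHA, GAMMA, EPS = 6, (-1, +1), 0.1, 0.95, 0.1
Q = np.zeros((N_STATES, 2))

def eps_greedy_probs(q_row):
    """Action probabilities of the epsilon-greedy policy for one state (ties split evenly)."""
    probs = np.full(2, EPS / 2)
    best = np.flatnonzero(q_row == q_row.max())
    probs[best] += (1 - EPS) / len(best)
    return probs

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        a = rng.choice(2, p=eps_greedy_probs(Q[s]))
        s_next = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Expected Sarsa target: expectation of Q(s', a') under the current policy,
        # rather than the sampled next action (Sarsa) or the maximum (Q-learning).
        expected_q = 0.0 if s_next == N_STATES - 1 else eps_greedy_probs(Q[s_next]) @ Q[s_next]
        Q[s, a] += ALPHA * (r + GAMMA * expected_q - Q[s, a])
        s = s_next

print(np.round(Q, 2))   # the "move right" column should dominate in every non-terminal state
```

Swapping the expectation for a sampled next-action value or a maximum recovers Sarsa and Q-learning respectively, which is how the book positions the three methods side by side.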
Download or read book Bayesian Evolutionary Analysis with BEAST written by Alexei J. Drummond and published by Cambridge University Press. This book was released on 2015-08-06 with total page 263 pages. Available in PDF, EPUB and Kindle. Book excerpt: What are the models used in phylogenetic analysis and what exactly is involved in Bayesian evolutionary analysis using Markov chain Monte Carlo (MCMC) methods? How can you choose and apply these models, which parameterisations and priors make sense, and how can you diagnose Bayesian MCMC when things go wrong? These are just a few of the questions answered in this comprehensive overview of Bayesian approaches to phylogenetics. This practical guide: • Addresses the theoretical aspects of the field • Advises on how to prepare and perform phylogenetic analysis • Helps with interpreting analyses and visualising phylogenies • Describes the software architecture • Helps with developing BEAST 2.2 extensions so that these models can be extended further. With an accompanying website providing example files and tutorials (http://beast2.org/), this one-stop reference to applying the latest phylogenetic models in BEAST 2 will provide essential guidance for all users – from those using phylogenetic tools, to computational biologists and Bayesian statisticians.
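A small sketch of the effective-sample-size diagnostic commonly used to judge whether an MCMC run has mixed well (companion tools such as Tracer report it for BEAST output): the nominal chain length is deflated by the chain's autocorrelation. The chain below is an illustrative AR(1) series, not real BEAST output, and the truncation rule is just one simple choice.

```python
import numpy as np

rng = np.random.default_rng(0)
chain = np.zeros(10000)
for t in range(1, len(chain)):                   # autocorrelated toy trace
    chain[t] = 0.9 * chain[t - 1] + rng.normal()

def effective_sample_size(x):
    """ESS = N / (1 + 2 * sum of positive-lag autocorrelations), truncated at the
    first non-positive estimated autocorrelation."""
    x = x - x.mean()
    n = len(x)
    acf_sum = 0.0
    for lag in range(1, n // 2):
        rho = np.dot(x[:-lag], x[lag:]) / np.dot(x, x)
        if rho <= 0.0:
            break
        acf_sum += rho
    return n / (1 + 2 * acf_sum)

print(f"nominal samples: {len(chain)}, effective samples: {effective_sample_size(chain):.0f}")
```

A long chain with a small effective sample size is one of the "things gone wrong" the book teaches readers to diagnose, usually a sign that the proposal mixture or the model parameterisation needs attention.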
Download or read book Reinforcement Learning and Stochastic Optimization written by Warren B. Powell and published by John Wiley & Sons. This book was released on 2022-03-15 with total page 1090 pages. Available in PDF, EPUB and Kindle. Book excerpt: Clearing the jungle of stochastic optimization. Sequential decision problems, which consist of “decision, information, decision, information,” are ubiquitous, spanning virtually every human activity, from business applications and health (personal and public health, and medical decision making) to energy, the sciences, all fields of engineering, finance, and e-commerce. The diversity of applications attracted the attention of at least 15 distinct fields of research, using eight distinct notational systems, which produced a vast array of analytical tools. A byproduct is that powerful tools developed in one community may be unknown to other communities. Reinforcement Learning and Stochastic Optimization offers a single canonical framework that can model any sequential decision problem using five core components: state variables, decision variables, exogenous information variables, transition function, and objective function. This book highlights twelve types of uncertainty that might enter any model and pulls together the diverse set of methods for making decisions, known as policies, into four fundamental classes that span every method suggested in the academic literature or used in practice. Reinforcement Learning and Stochastic Optimization is the first book to provide a balanced treatment of the different methods for modeling and solving sequential decision problems, following the style used by most books on machine learning, optimization, and simulation. The presentation is designed for readers with a course in probability and statistics, and an interest in modeling and applications. Linear programming is occasionally used for specific problem classes. The book is designed for readers who are new to the field, as well as those with some background in optimization under uncertainty. Throughout this book, readers will find references to over 100 different applications, spanning pure learning problems, dynamic resource allocation problems, general state-dependent problems, and hybrid learning/resource allocation problems such as those that arose in the COVID pandemic. There are 370 exercises, organized into seven groups covering review questions, modeling, computation, problem solving, theory, programming exercises, and a “diary problem” that a reader chooses at the beginning of the book and that is used as a basis for questions throughout the rest of the book.
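A minimal rendering of the five-component framing (state variables, decision variables, exogenous information, transition function, objective function) on a toy energy-storage problem controlled by a simple buy-low/sell-high rule, i.e. a policy in the spirit of a policy function approximation. The problem data and thresholds are invented for illustration and are not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(3)
CAPACITY, HORIZON = 10.0, 200

def policy(state, price, buy_below=30.0, sell_above=50.0):
    """Decision rule: charge one unit when the price is low, discharge one unit when it is high."""
    if price < buy_below:
        return min(1.0, CAPACITY - state)       # buy / charge
    if price > sell_above:
        return -min(1.0, state)                 # sell / discharge
    return 0.0

def transition(state, decision):
    """Transition function: battery level after the decision is implemented."""
    return float(np.clip(state + decision, 0.0, CAPACITY))

state, objective = 5.0, 0.0                     # initial state and cumulative profit
for t in range(HORIZON):
    price = 40.0 + 15.0 * rng.normal()          # exogenous information revealed this period
    decision = policy(state, price)             # decision returned by the policy
    objective += -price * decision              # contribution: pay to charge, earn to sell
    state = transition(state, decision)         # state variable carried to the next period

print(f"cumulative profit under this policy: {objective:.1f}")
```

Tuning the two thresholds, or replacing the rule with a lookahead or value-function-based policy, is exactly the kind of comparison across the book's four policy classes that this framework is meant to support.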