EBookClubs

Read Books & Download eBooks Full Online

Book Towards Deployment of Deep Neural Networks on Resource-constrained Embedded Systems

Download or read book Towards Deployment of Deep Neural Networks on Resource-constrained Embedded Systems written by Boyu Zhang. This book was released in 2019 with a total of 98 pages. Available in PDF, EPUB and Kindle. Book excerpt: Deep Neural Networks (DNNs) have emerged as an important computational structure that facilitates tasks such as speech and image recognition, autonomous vehicles, etc. To achieve better performance, such as higher classification accuracy, modern DNN models are designed to be more complex in terms of network structure and larger in terms of the number of weights. This poses a great challenge for realizing DNN models on computation devices, especially resource-constrained devices such as embedded and mobile systems. The challenge arises from three aspects: computation, memory, and energy consumption. First, the number of computations per inference required by modern large and complex DNN models is huge, whereas the computation capability available in a given system may not be as powerful as a modern GPU or a dedicated processing unit. Accomplishing the required computation within a certain latency therefore remains an open challenge. Second, the conflict between the limited on-board memory and the static/run-time memory requirements of large DNN models also needs to be resolved. Third, the energy-consuming inference process places a heavy burden on edge devices' battery life. Since the majority of the total energy is consumed by data movement, the goal is not only to fit the DNN model into the system but also to optimize off-chip memory access so as to minimize energy consumption during inference. This dissertation aims to contribute towards efficient realizations of DNN models on resource-constrained systems. Our contributions fall into three aspects. First, we propose a structure simplification procedure that can identify and eliminate redundant neurons in any layer of a trained DNN model. Once the redundant neurons are identified and removed, the corresponding edges connected to those neurons are eliminated as well. The new weight matrix is then calculated directly by our procedure, while retraining may be applied to recover lost accuracy if necessary. We also propose a high-level energy model to better explore the tradeoffs in the design space during neuron elimination. Since both the neurons and their edges are eliminated, the memory and energy requirements are alleviated as well. Furthermore, the procedure allows exploring the tradeoff between model performance and implementation cost. Second, since the convolutional layer is the most energy-consuming and computation-heavy layer in Convolutional Neural Networks (CNNs), we propose a structural pruning technique that prunes the input channels of convolutional layers. Once the redundant channels are identified and removed, the corresponding convolutional filters are pruned as well, yielding significant reductions in static/run-time memory, computation, and energy consumption. Moreover, the resulting pruned model is more efficient in terms of network architecture rather than specific weight values, which makes the theoretical reductions in implementation cost much easier to harvest with existing hardware and software.
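
As an illustration of the structural channel-pruning idea this excerpt describes, the following is a minimal, hypothetical sketch in PyTorch. The L1-norm importance criterion, the function name, and the assumption that prev_conv feeds next_conv directly are illustrative choices, not the dissertation's actual procedure.

import torch
import torch.nn as nn

def prune_channels(prev_conv: nn.Conv2d, next_conv: nn.Conv2d, keep_ratio: float = 0.5):
    # Remove the least-important output channels of prev_conv and the
    # matching input channels of next_conv (assumes prev_conv feeds next_conv).
    scores = prev_conv.weight.detach().abs().sum(dim=(1, 2, 3))  # L1 norm per filter
    n_keep = max(1, int(keep_ratio * scores.numel()))
    keep = torch.topk(scores, n_keep).indices.sort().values

    # Rebuild prev_conv with only the kept output channels.
    new_prev = nn.Conv2d(prev_conv.in_channels, n_keep, prev_conv.kernel_size,
                         prev_conv.stride, prev_conv.padding,
                         bias=prev_conv.bias is not None)
    new_prev.weight.data = prev_conv.weight.data[keep].clone()
    if prev_conv.bias is not None:
        new_prev.bias.data = prev_conv.bias.data[keep].clone()

    # Rebuild next_conv without the matching input channels, so the
    # corresponding filters are pruned as well.
    new_next = nn.Conv2d(n_keep, next_conv.out_channels, next_conv.kernel_size,
                         next_conv.stride, next_conv.padding,
                         bias=next_conv.bias is not None)
    new_next.weight.data = next_conv.weight.data[:, keep].clone()
    if next_conv.bias is not None:
        new_next.bias.data = next_conv.bias.data.clone()
    return new_prev, new_next

Because this kind of pruning changes the architecture itself rather than zeroing individual weights, the smaller layers run faster on stock hardware and frameworks, with no sparse-kernel support required.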
Third, instead of blindly sending data to the cloud and relying on the cloud to perform inference, we propose to utilize the computation power of IoT devices to accomplish deep learning tasks while achieving a higher degree of customization and privacy. Specifically, we propose to incorporate a small local customized DNN model that works alongside a large general DNN model through a "Mixture of Experts" (MoE) architecture. Therefore, with minimal implementation overhead, customized data can be handled by the small DNN to achieve better performance without compromising performance on general data. Our experiments show that the MoE architecture outperforms popular alternatives such as fine-tuning, bagging, independent ensembles, and multiple choice learning.
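
To make the "Mixture of Experts" pairing concrete, here is a hedged sketch of one way a small customized expert and a large general expert could be combined by a learned gate. The module names, the two-expert setup, and the gating network are assumptions for illustration; the dissertation's architecture may differ.

import torch
import torch.nn as nn

class TwoExpertMoE(nn.Module):
    # Combines a large general model and a small user-customized model.
    def __init__(self, general: nn.Module, custom: nn.Module, in_dim: int):
        super().__init__()
        self.general = general  # large, pretrained on general data
        self.custom = custom    # small, trained on the user's data
        self.gate = nn.Sequential(nn.Linear(in_dim, 2), nn.Softmax(dim=-1))

    def forward(self, x):
        w = self.gate(x.flatten(start_dim=1))  # per-input mixing weights, (B, 2)
        y_general = self.general(x)            # both experts emit same-shaped logits
        y_custom = self.custom(x)
        return w[:, :1] * y_general + w[:, 1:] * y_custom

In a setup like this, only the small expert and the gate are trained on user data, so the large model's behavior on general inputs is left untouched.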

Book Embedded Deep Learning

Download or read book Embedded Deep Learning written by Bert Moons and published by Springer. This book was released on 2018-10-23 with a total of 206 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book covers algorithmic and hardware implementation techniques to enable embedded deep learning. The authors describe synergetic design approaches at the application, algorithmic, computer-architecture, and circuit levels that help achieve the goal of reducing the computational cost of deep learning algorithms. The impact of these techniques is demonstrated in four silicon prototypes for embedded deep learning. Gives a wide overview of a series of effective solutions for energy-efficient neural networks on battery-constrained wearable devices; Discusses the optimization of neural networks for embedded deployment on all levels of the design hierarchy – applications, algorithms, hardware architectures, and circuits – supported by real silicon prototypes; Elaborates on how to design efficient Convolutional Neural Network processors, exploiting parallelism and data reuse, sparse operations, and low-precision computations; Supports the introduced theory and design concepts with four real silicon prototypes. The implementation and achieved performance of each physical realization are discussed in detail to illustrate and highlight the introduced cross-layer design concepts.
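
As a small worked example of the low-precision computation theme mentioned in this blurb, the snippet below quantizes weights to 8 bits with a symmetric scale. This is a generic textbook scheme, not the specific circuit techniques from the book.

import numpy as np

def quantize_int8(w: np.ndarray):
    # Symmetric quantization: map the largest-magnitude weight to +/-127.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.array([0.31, -0.74, 0.05, 0.98])
q, scale = quantize_int8(w)
w_hat = q.astype(np.float32) * scale      # dequantized approximation
print(q, np.abs(w - w_hat).max())         # per-element error bounded by scale / 2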

Book Embedded Artificial Intelligence

Download or read book Embedded Artificial Intelligence written by Ovidiu Vermesan and published by CRC Press. This book was released on 2023-05-05 with a total of 143 pages. Available in PDF, EPUB and Kindle. Book excerpt: Recent technological developments in sensors, edge computing, connectivity, and artificial intelligence (AI) have accelerated the integration of data analysis based on embedded AI capabilities into resource-constrained, energy-efficient hardware devices for processing information at the network edge. Embedded AI combines embedded machine learning (ML) and deep learning (DL) based on neural network (NN) architectures, such as convolutional NNs (CNNs) or spiking neural networks (SNNs), with algorithms on edge devices, and implements edge computing capabilities that enable data processing and analysis without optimised connectivity and integration, allowing users to access data from various sources. Embedded AI efficiently implements edge computing and AI processes on resource-constrained devices to mitigate downtime and service latency, and it successfully merges AI processes as a pivotal component of edge computing and embedded system devices. Embedded AI also enables users to reduce costs, communication, and processing time by assembling data and supporting user requirements without the need for continuous interaction with physical locations. This book provides an overview of the latest research results and activities in industrial embedded AI technologies and applications, based on close cooperation between three large-scale ECSEL JU projects: AI4DI, ANDANTE, and TEMPO. The book's content targets researchers, designers, developers, academics, post-graduate students, and practitioners seeking recent research on embedded AI. It combines the latest developments in embedded AI, addressing methodologies, tools, and techniques to offer insight into technological trends and their use across different industries.

Book Efficient Implementation of Deep Neural Networks on Resource-constrained Devices

Download or read book Efficient Implementation of Deep Neural Networks on Resource-constrained Devices written by Maedeh Hemmat. This book was released in 2022. Available in PDF, EPUB and Kindle. Book excerpt: In recent years, Deep Neural Networks (DNNs) have emerged as an impressively successful model for performing complicated tasks including object classification, speech recognition, autonomous vehicles, etc. To provide better accuracy, state-of-the-art neural network models are designed to be deeper (i.e., having more layers) and larger (i.e., having more parameters within each layer). This has increased the computational and memory costs of DNNs, mandating their efficient hardware implementation, especially on resource-constrained devices such as embedded systems and mobile devices. This challenge can be investigated from two aspects: computation and storage. On the one hand, state-of-the-art DNNs require the execution of billions of operations per inference, while the computational power of embedded systems is tightly limited. On the other hand, DNN models require storage of several megabytes of parameters, which cannot fit in the on-chip memory of these devices. More importantly, these systems are usually battery-powered, with a limited energy budget for accessing memory and performing computations. This dissertation aims to contribute towards improving the efficiency of DNN deployments on resource-constrained devices. Our contributions can be categorized into three aspects. First, we propose an iterative framework that enables dynamic reconfiguration of an already-trained Convolutional Neural Network (CNN) in hardware during inference. The reconfiguration enables input-dependent approximation of the CNN at run-time, leading to significant energy savings without significant degradation in classification accuracy. Our proposed framework breaks each inference into several iterations and fetches only a fraction of the weights from off-chip memory at each iteration to perform the computations. Based on the difficulty of the received input, it then decides either to terminate the network or to fetch more weights and continue the inference. The termination condition can also be adjusted to trade off classification accuracy and energy consumption at run-time. Second, we exploit the user-dependent behavior of DNNs and propose a personalized inference framework that prunes an already-trained neural network model based on the preferences of individual users, without the need to retrain the network. Our key observation is that an individual user may only encounter a tiny fraction of the trained classes on a regular basis. Hence, storing trained models (pruned or not) for all possible classes on local devices is costly and unnecessary for the user's needs. Our personalized framework minimizes the memory, computation, and energy consumption of the network on the local device, as it processes neurons on a need basis (i.e., only when the user expects to encounter a specific output class). Third, we propose a framework for distributed inference of DNNs across multiple edge devices to reduce communication and latency overheads. Our framework utilizes many parallel, independently running edge devices which communicate only once with a single 'back-end' device (also an edge device) to aggregate their predictions and produce the result of the inference.
To achieve this distributed implementation, our framework first partitions the classes of the complex DNN into subsets to be assigned across the available edge devices, taking the computational resources of each device into account. The DNN is then aggressively pruned for each device for its set of assigned classes. Each smaller DNN (SNN) is further configured to return a 'Don't Know' when it encounters an input from an unassigned class. Each SNN is generated from the complex DNN at the outset and then loaded onto its corresponding edge device, without the need for retraining. During inference, each SNN produces a prediction for its received input.
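
The class-partitioned scheme can be pictured with a short, hypothetical sketch: each edge device answers only for its assigned classes and abstains otherwise, and a back-end device aggregates the single round of replies. The threshold, data types, and function names are illustrative assumptions, not the dissertation's protocol.

from typing import Optional

DONT_KNOW = None

def edge_predict(logits: dict, assigned: set, threshold: float = 0.5) -> Optional[tuple]:
    # Return (class, score) only if the device is confident about one of
    # its own assigned classes; otherwise abstain with DONT_KNOW.
    label, score = max(logits.items(), key=lambda kv: kv[1])
    if label in assigned and score >= threshold:
        return (label, score)
    return DONT_KNOW

def backend_aggregate(replies: list) -> Optional[str]:
    # Single aggregation round: keep the most confident non-abstaining reply.
    votes = [r for r in replies if r is not DONT_KNOW]
    return max(votes, key=lambda r: r[1])[0] if votes else DONT_KNOW

replies = [edge_predict({"cat": 0.9, "dog": 0.1}, {"cat", "dog"}),
           edge_predict({"car": 0.3, "bus": 0.2}, {"car", "bus"})]
print(backend_aggregate(replies))  # -> "cat"; the second device abstained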

Book Resource Constrained Neural Architecture Design

Download or read book Resource Constrained Neural Architecture Design written by Yunyang Xiong. This book was released in 2021. Available in PDF, EPUB and Kindle. Book excerpt: Deep neural networks have been highly effective for a wide range of applications in computer vision, natural language processing, speech recognition, medical imaging, and biology. Large amounts of annotated data, dedicated deep learning hardware such as the NVIDIA GPU and Google TPU, and innovative neural network architectures and algorithms have all contributed to rapid advances over the last decade. Despite these improvements, the ever-growing amount of compute and data needed for training neural networks (whose sizes are growing quickly), together with the need to deploy these models on embedded devices, calls for designing deep neural networks under various types of resource constraints. For example, low latency and real-time response can be critical for various applications of deep neural networks. While the complexity of deep neural networks can be reduced by model compression, different applications with diverse resource constraints pose unique challenges for neural architecture design. For instance, each type of device has its own hardware idiosyncrasies and requires different deep architectures to achieve the best accuracy-efficiency trade-off. Consequently, designing neural networks that are adaptive and scalable to applications with diverse resource requirements is not trivial. We need methods capable of addressing different application-specific challenges, paying attention to: (1) the problem type (e.g., classification, object detection, sentence prediction), and (2) the resource challenges (e.g., strict inference compute, memory, and latency constraints, limited training computational resources, small sample sizes in scientific/biomedical problems). In this dissertation, we describe algorithms that facilitate neural architecture design while effectively addressing application- and domain-specific resource challenges. For diverse application domains, we study neural architecture design strategies respecting different resource needs, ranging from test-time efficiency to training efficiency and sample efficiency. We show the effectiveness of these ideas for learning with smaller datasets as well as for enabling the deployment of deep learning systems on embedded devices with limited computational resources, which may reduce the environmental impact of using such models.

Book Deploying Deep Neural Networks with Resource Constraints

Download or read book Deploying Deep Neural Networks with Resource Constraints written by Theresa VanderWeide. This book was released in 2022. Available in PDF, EPUB and Kindle. Book excerpt: Deep neural networks (DNNs) have recently gained unprecedented success in various domains. In resource-constrained edge systems (e.g., mobile devices and IoT devices), QoS-aware DNNs are required to meet the latency and memory/storage requirements of mission-critical deep learning applications, and there is a growing need to deploy deep learning on such devices. In this thesis, we propose two solutions to this issue: BlinkNet, a runtime system that can guarantee both latency and memory/storage bounds for one or more DNNs via efficient QoS-aware per-layer approximation, and ParamExplorer, which evaluates the hyperparameters of DNNs converted to Spiking Neural Networks (SNNs) and their effect on accuracy compared to the original DNN. ParamExplorer explores the search space and identifies an optimal hyperparameter configuration to reduce the loss of accuracy.
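
One plausible shape for QoS-aware per-layer approximation of the kind BlinkNet is described as providing is a greedy loop that keeps approximating the most expensive layer until an estimated latency budget is met. This sketch, including the cost model and the speedup factor, is an assumption for illustration only, not BlinkNet's actual algorithm.

def choose_approximations(layer_latency_ms: list, budget_ms: float,
                          speedup_per_level: float = 0.6, max_level: int = 3) -> list:
    # levels[i] is the approximation level of layer i; 0 means exact.
    levels = [0] * len(layer_latency_ms)
    est = list(layer_latency_ms)
    while sum(est) > budget_ms:
        candidates = [k for k in range(len(est)) if levels[k] < max_level]
        if not candidates:
            raise RuntimeError("latency bound unreachable even at max approximation")
        i = max(candidates, key=lambda k: est[k])  # costliest layer still tunable
        levels[i] += 1
        est[i] = layer_latency_ms[i] * (speedup_per_level ** levels[i])
    return levels

print(choose_approximations([12.0, 30.0, 8.0], budget_ms=35.0))  # -> [0, 2, 0]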

Book Deploying Deep Neural Networks in Embedded Real-time Systems

Download or read book Deploying Deep Neural Networks in Embedded Real-time Systems written by Adam Page. This book was released in 2016 with a total of 256 pages. Available in PDF, EPUB and Kindle. Book excerpt: Deep neural networks have been shown to outperform prior state-of-the-art solutions that rely heavily on hand-engineered features coupled with simple classification techniques. In addition to achieving improvements of several orders of magnitude, they offer a number of additional benefits, such as the ability to perform end-to-end learning by carrying out both hierarchical feature abstraction and inference. Furthermore, their success continues to be demonstrated in a growing number of fields for a wide range of applications, including computer vision, speech recognition, and model forecasting. As this area of machine learning matures, a major challenge that remains is the ability to efficiently deploy such deep networks in embedded, resource-bound settings with strict power and area budgets. While GPUs have been shown to improve throughput and energy efficiency over traditional computing paradigms, they still impose a significant power burden in such low-power embedded settings. To further reduce power while still achieving the desired throughput and accuracy, classification-efficient networks are required in addition to optimal deployment onto embedded hardware.

Book International Conference on Innovative Computing and Communications

Download or read book International Conference on Innovative Computing and Communications written by Aboul Ella Hassanien and published by Springer Nature. This book was released on 2023-07-25 with a total of 886 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book includes high-quality research papers presented at the Sixth International Conference on Innovative Computing and Communication (ICICC 2023), held at the Shaheed Sukhdev College of Business Studies, University of Delhi, Delhi, India, on February 17–18, 2023. Introducing the innovative works of scientists, professors, research scholars, students, and industrial experts in the field of computing and communication, the book promotes the transformation of fundamental research into institutional and industrialized research and the conversion of applied exploration into real-time applications.

Book Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing

Download or read book Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing written by Sudeep Pasricha and published by Springer Nature. This book was released on 2023-10-09 with a total of 481 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book presents recent advances towards the goal of enabling efficient implementation of machine learning models on resource-constrained systems, covering different application domains. The focus is on presenting interesting and new use cases of applying machine learning to innovative application domains; exploring the hardware design of efficient machine learning accelerators; presenting memory optimization techniques; illustrating model compression and neural architecture search techniques for energy-efficient and fast execution on resource-constrained hardware platforms; and explaining hardware-software codesign techniques for achieving even greater energy, reliability, and performance benefits. Discusses efficient implementation of machine learning in embedded, CPS, IoT, and edge computing; Offers comprehensive coverage of hardware design, software design, and hardware/software co-design and co-optimization; Describes real applications to demonstrate how embedded, CPS, IoT, and edge applications benefit from machine learning.

Book Towards Efficient Inference and Improved Training Efficiency of Deep Neural Networks

Download or read book Towards Efficient Inference and Improved Training Efficiency of Deep Neural Networks written by Ravi Shanker Raju (Ph.D.). This book was released in 2022. Available in PDF, EPUB and Kindle. Book excerpt: In recent years, deep neural networks have surpassed human performance on image classification and speech recognition tasks. While current models can reach state-of-the-art performance on stand-alone benchmarks, deploying them on embedded systems with real-time latency deadlines either causes them to fail those requirements or severely degrades their performance to meet the stated specifications. This requires intelligent design of the network architecture in order to minimize the accuracy degradation when deployed on the edge. Similarly, deep learning often has a long turn-around time due to the volume of experiments on different hyperparameters, consuming considerable time and resources. This motivates a need for training strategies that allow researchers who do not have access to large computational resources to train large models without waiting for exorbitant training cycles to complete. This dissertation addresses these concerns through data-dependent pruning of deep learning computation. First, regarding inference, we propose an integration of two different conditional execution strategies, which we call FBS-pruned CondConv, observing that if we use input-specific filters instead of standard convolutional filters, we can prune aggressively at higher rates and mitigate accuracy degradation for significant computation savings. Then, regarding long training times, we introduce a dynamic data pruning framework which takes ideas from active learning and reinforcement learning to dynamically select subsets of data on which to train the model. Finally, as opposed to pruning data, and in the same spirit of reducing training time, we investigate the vision transformer and introduce a unique training method called PatchDrop (originally designed for robustness to occlusions in transformers [1]), which uses the self-supervised DINO [2] model to identify the salient patches in an image and trains on the salient subsets of an image. These strategies and training methods take a step towards making models more efficient to deploy on edge devices and reduce the barrier for independent researchers to train deep learning models that would otherwise require immense computational resources, pushing towards the democratization of machine learning.
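
For the conditional-execution idea behind FBS-pruned CondConv, the following is a hedged approximation of input-dependent channel gating: a cheap auxiliary head scores the output channels for each input, and only the top-scoring ones are kept. The module, the keep ratio, and the saliency head are illustrative stand-ins rather than the dissertation's exact design.

import torch
import torch.nn as nn

class GatedConv(nn.Module):
    def __init__(self, cin: int, cout: int, keep_ratio: float = 0.5):
        super().__init__()
        self.conv = nn.Conv2d(cin, cout, 3, padding=1)
        self.saliency = nn.Linear(cin, cout)  # cheap per-channel importance predictor
        self.k = max(1, int(keep_ratio * cout))

    def forward(self, x):
        # Pool the input, predict one saliency score per output channel.
        s = self.saliency(x.mean(dim=(2, 3)))             # (B, cout)
        topk = torch.topk(s, self.k, dim=1).indices
        mask = torch.zeros_like(s).scatter(1, topk, 1.0)  # 0/1 gate per channel
        y = self.conv(x)
        # Suppress unselected channels; scale kept ones by their saliency.
        return y * (mask * torch.sigmoid(s)).unsqueeze(-1).unsqueeze(-1)

out = GatedConv(16, 32)(torch.randn(2, 16, 8, 8))  # half the channels stay active

In a real deployment the masked channels would be skipped rather than multiplied by zero, which is where the computation savings come from.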

Book Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing

Download or read book Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing written by Sudeep Pasricha and published by Springer Nature. This book was released on 2023-11-07 with a total of 571 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book presents recent advances towards the goal of enabling efficient implementation of machine learning models on resource-constrained systems, covering different application domains. The focus is on presenting interesting and new use cases of applying machine learning to innovative application domains; exploring the hardware design of efficient machine learning accelerators; presenting memory optimization techniques; illustrating model compression and neural architecture search techniques for energy-efficient and fast execution on resource-constrained hardware platforms; and explaining hardware-software codesign techniques for achieving even greater energy, reliability, and performance benefits. Discusses efficient implementation of machine learning in embedded, CPS, IoT, and edge computing; Offers comprehensive coverage of hardware design, software design, and hardware/software co-design and co-optimization; Describes real applications to demonstrate how embedded, CPS, IoT, and edge applications benefit from machine learning.

Book Deep Learning on Microcontrollers

Download or read book Deep Learning on Microcontrollers written by Atul Krishna Gupta and published by BPB Publications. This book was released on 2023-04-15 with a total of 346 pages. Available in PDF, EPUB and Kindle. Book excerpt: A step-by-step guide that will teach you how to deploy TinyML on microcontrollers.

KEY FEATURES
● Deploy machine learning models on edge devices with ease.
● Leverage pre-built AI models and deploy them without writing any code.
● Create smart and efficient IoT solutions with TinyML.

DESCRIPTION
TinyML, or Tiny Machine Learning, is used to enable machine learning on resource-constrained devices such as microcontrollers and embedded systems. If you want to leverage these low-cost, low-power but surprisingly powerful devices, then this book is for you. This book aims to increase the accessibility of TinyML applications, particularly for professionals who lack the resources or expertise to develop and deploy them on microcontroller-based boards. The book starts with a brief introduction to Artificial Intelligence, including classical methods for solving complex problems. It also familiarizes you with the different ML model development and deployment tools, libraries, and frameworks suitable for embedded devices and microcontrollers. The book will then help you build an air gesture digit recognition system using the Arduino Nano RP2040 board and an AI project for recognizing keywords using the Syntiant TinyML board. Lastly, the book summarizes the concepts covered and provides a brief introduction to topics such as zero-shot learning, one-shot learning, federated learning, and MLOps. By the end of the book, you will be able to develop and deploy end-to-end TinyML solutions with ease.

WHAT YOU WILL LEARN
● Learn how to build a keyword recognition system using the Syntiant TinyML board.
● Learn how to build an air gesture digit recognition system using the Arduino Nano RP2040.
● Learn how to test and deploy models with Edge Impulse and the Arduino IDE.
● Get tips to enhance system-level performance.
● Explore different real-world use cases of TinyML across various industries.

WHO THIS BOOK IS FOR
The book is for IoT developers, system engineers, software engineers, hardware engineers, and professionals interested in integrating AI into their work. It is also a valuable resource for engineering undergraduates who are interested in learning about microcontrollers and IoT devices but may not know where to begin.

TABLE OF CONTENTS
1. Introduction to AI
2. Traditional ML Lifecycle
3. TinyML Hardware and Software Platforms
4. End-to-End TinyML Deployment Phases
5. Real World Use Cases
6. Practical Experiments with TinyML
7. Advance Implementation with TinyML Board
8. Continuous Improvement
9. Conclusion

Book Hands-on TinyML

    Book Details:
  • Author : Rohan Banerjee
  • Publisher : BPB Publications
  • Release : 2023-06-09
  • ISBN : 9355518447
  • Pages : 309 pages

Download or read book Hands-on TinyML written by Rohan Banerjee and published by BPB Publications. This book was released on 2023-06-09 with a total of 309 pages. Available in PDF, EPUB and Kindle. Book excerpt: Learn how to deploy complex machine learning models on single board computers, mobile phones, and microcontrollers.

KEY FEATURES
● Gain a comprehensive understanding of TinyML's core concepts.
● Learn how to design your own TinyML applications from the ground up.
● Explore cutting-edge models, hardware, and software platforms for developing TinyML.

DESCRIPTION
TinyML is an innovative technology that empowers small and resource-constrained edge devices with the capabilities of machine learning. If you're interested in deploying machine learning models directly on microcontrollers, single board computers, or mobile phones without relying on continuous cloud connectivity, this book is an ideal resource for you. The book begins with a refresher on Python, covering essential concepts and popular libraries like NumPy and Pandas. It then delves into the fundamentals of neural networks and explores the practical implementation of deep learning using TensorFlow and Keras. Furthermore, the book provides an in-depth overview of TensorFlow Lite, a specialized framework for optimizing and deploying models on edge devices, and discusses various model optimization techniques that reduce model size without compromising performance. As the book progresses, it offers step-by-step guidance on creating deep learning models for object detection and face recognition specifically tailored for the Raspberry Pi. You will also be introduced to the intricacies of deploying TensorFlow Lite applications on real-world edge devices. Lastly, the book explores the exciting possibilities of using TensorFlow Lite on microcontroller units (MCUs), opening up new opportunities for deploying machine learning models on resource-constrained devices. Overall, this book serves as a valuable resource for anyone interested in harnessing the power of machine learning on edge devices.

WHAT YOU WILL LEARN
● Explore different hardware and software platforms for designing TinyML.
● Create a deep learning model for object detection using the MobileNet architecture.
● Optimize large neural network models with the TensorFlow Model Optimization Toolkit.
● Explore the capabilities of TensorFlow Lite on microcontrollers.
● Build a face recognition system on a Raspberry Pi.
● Build a keyword detection system on an Arduino Nano.

WHO THIS BOOK IS FOR
This book is designed for undergraduate and postgraduate students in the fields of Computer Science, Artificial Intelligence, Electronics, and Electrical Engineering, including MSc and MCA programs. It is also a valuable reference for young professionals who have recently entered the industry and wish to enhance their skills.

TABLE OF CONTENTS
1. Introduction to TinyML and its Applications
2. Crash Course on Python and TensorFlow Basics
3. Gearing with Deep Learning
4. Experiencing TensorFlow
5. Model Optimization Using TensorFlow
6. Deploying My First TinyML Application
7. Deep Dive into Application Deployment
8. TensorFlow Lite for Microcontrollers
9. Keyword Spotting on Microcontrollers
10. Conclusion and Further Reading
Appendix
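
Since this blurb centers on TensorFlow Lite conversion and model optimization, here is a minimal sketch of that flow using the standard TFLiteConverter API with post-training quantization enabled. The toy model and file name are stand-ins; the book's own examples are more elaborate.

import tensorflow as tf

# A stand-in Keras model; the book builds real object-detection and
# face-recognition models instead.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)  # this flat buffer is what gets deployed to the device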

Book Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing

Download or read book Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing written by Sudeep Pasricha and published by Springer Nature. This book was released on 2023-11-01 with a total of 418 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book presents recent advances towards the goal of enabling efficient implementation of machine learning models on resource-constrained systems, covering different application domains. The focus is on presenting interesting and new use cases of applying machine learning to innovative application domains; exploring the hardware design of efficient machine learning accelerators; presenting memory optimization techniques; illustrating model compression and neural architecture search techniques for energy-efficient and fast execution on resource-constrained hardware platforms; and explaining hardware-software codesign techniques for achieving even greater energy, reliability, and performance benefits.

Book TinyML

    Book Details:
  • Author : Pete Warden
  • Publisher : O'Reilly Media
  • Release : 2019-12-16
  • ISBN : 1492052019
  • Pages : 504 pages

Download or read book TinyML written by Pete Warden and published by O'Reilly Media. This book was released on 2019-12-16 with a total of 504 pages. Available in PDF, EPUB and Kindle. Book excerpt: Deep learning networks are getting smaller. Much smaller. The Google Assistant team can detect words with a model just 14 kilobytes in size – small enough to run on a microcontroller. With this practical book you'll enter the field of TinyML, where deep learning and embedded systems combine to make astounding things possible with tiny devices. Pete Warden and Daniel Situnayake explain how you can train models small enough to fit into any environment. Ideal for software and hardware developers who want to build embedded systems using machine learning, this guide walks you through creating a series of TinyML projects, step-by-step. No machine learning or microcontroller experience is necessary.
● Build a speech recognizer, a camera that detects people, and a magic wand that responds to gestures
● Work with Arduino and ultra-low-power microcontrollers
● Learn the essentials of ML and how to train your own models
● Train models to understand audio, image, and accelerometer data
● Explore TensorFlow Lite for Microcontrollers, Google's toolkit for TinyML
● Debug applications and provide safeguards for privacy and security
● Optimize latency, energy usage, and model and binary size

Book Proceedings of Ninth International Congress on Information and Communication Technology

Download or read book Proceedings of Ninth International Congress on Information and Communication Technology written by Xin-She Yang and published by Springer Nature. This book was released with a total of 635 pages. Available in PDF, EPUB and Kindle. Book excerpt: