EBookClubs

Read Books & Download eBooks Full Online

Book Machine Learning based Natural Scene Recognition for Mobile Robot Localization in An Unknown Environment

Download or read book Machine Learning based Natural Scene Recognition for Mobile Robot Localization in An Unknown Environment written by Xiaochun Wang and published by Springer. This book was released on 2019-08-12 with total page 328 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book advances research on mobile robot localization in unknown environments by focusing on machine-learning-based natural scene recognition. The respective chapters highlight the latest developments in vision-based machine perception and machine learning research for localization applications, and cover such topics as: image-segmentation-based visual perceptual grouping for the efficient identification of objects composing unknown environments; classification-based rapid object recognition for the semantic analysis of natural scenes in unknown environments; the present understanding of the Prefrontal Cortex working memory mechanism and its biological processes for human-like localization; and the application of this present understanding to improve mobile robot localization. The book also features a perspective on bridging the gap between feature representations and decision-making using reinforcement learning, laying the groundwork for future advances in mobile robot navigation research.
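
For readers who want a concrete sense of what classification-based scene recognition involves, the toy sketch below labels a query image by nearest-neighbour comparison of colour histograms. The feature choice, synthetic images and labels are invented for illustration and are not the methods developed in the book.

```python
# Minimal sketch of classification-based scene recognition: a nearest-neighbour
# classifier over colour-histogram features. Feature choice, data and labels
# are illustrative assumptions, not the book's method.
import numpy as np

def colour_histogram(image, bins=8):
    """Concatenated per-channel histogram, normalised to unit sum."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 255))[0]
            for c in range(image.shape[-1])]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

def classify_scene(query, reference_images, reference_labels):
    """Label the query image with the label of its nearest reference histogram."""
    q = colour_histogram(query)
    dists = [np.linalg.norm(q - colour_histogram(r)) for r in reference_images]
    return reference_labels[int(np.argmin(dists))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for labelled natural-scene images (H x W x 3, uint8).
    corridor = rng.integers(0, 100, (120, 160, 3), dtype=np.uint8)   # darker scene
    outdoor = rng.integers(150, 255, (120, 160, 3), dtype=np.uint8)  # brighter scene
    query = rng.integers(140, 255, (120, 160, 3), dtype=np.uint8)
    print(classify_scene(query, [corridor, outdoor], ["corridor", "outdoor"]))
```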

Book Vision Based Autonomous Robot Navigation

Download or read book Vision Based Autonomous Robot Navigation written by Amitava Chatterjee and published by Springer. This book was released on 2012-10-13 with total page 235 pages. Available in PDF, EPUB and Kindle. Book excerpt: This monograph is devoted to the theory and development of autonomous navigation for mobile robots using computer-vision-based sensing. Conventional robot navigation systems, which rely on traditional sensors such as ultrasonic, IR, GPS and laser devices, suffer from drawbacks related either to the physical limitations of the sensors or to high cost. Vision sensing has emerged as a popular alternative, since cameras can reduce the overall cost while maintaining a high degree of intelligence, flexibility and robustness. This book includes a detailed description of several new approaches to real-life vision-based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal-based, goal-driven navigation can be carried out using vision sensing. The development of vision-based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be developed in-house in the laboratory with microcontroller-based sensor systems. The book describes the successful integration of low-cost external peripherals with off-the-shelf procured robots. An important highlight of the book is a detailed, step-by-step demonstration of how vision-based navigation modules can actually be implemented in real life, under a 32-bit Windows environment. The book also discusses the concept of implementing vision-based SLAM employing a two-camera system.
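
To illustrate the flavour of fuzzy-logic path/line tracking discussed above, the sketch below fuzzifies a line's lateral offset in the image into left/centre/right sets and defuzzifies a steering command. The image processing, membership functions and rule consequents are illustrative assumptions, not the book's controller.

```python
# A minimal sketch of vision-based line tracking with a small fuzzy-style rule
# base: the line's lateral offset in the image is fuzzified into "left",
# "centre" and "right" sets and defuzzified into a steering command.
import numpy as np

def line_offset(gray_image, threshold=80):
    """Normalised offset in [-1, 1] of the dark line within the bottom image rows."""
    band = gray_image[-20:, :]                 # look near the robot
    cols = np.where(band < threshold)[1]       # columns of dark (line) pixels
    if cols.size == 0:
        return None                            # line lost
    centre = gray_image.shape[1] / 2.0
    return (cols.mean() - centre) / centre

def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(0.0, min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)))

def fuzzy_steering(offset):
    """Weighted average of rule outputs: steer back towards the line."""
    mu_left = triangular(offset, -1.5, -1.0, 0.0)
    mu_centre = triangular(offset, -1.0, 0.0, 1.0)
    mu_right = triangular(offset, 0.0, 1.0, 1.5)
    # Rule consequents (rad/s): line to the left -> turn left, etc.
    turns = np.array([0.5, 0.0, -0.5])
    weights = np.array([mu_left, mu_centre, mu_right])
    return float((weights * turns).sum() / (weights.sum() + 1e-9))

if __name__ == "__main__":
    img = np.full((120, 160), 255, dtype=np.uint8)
    img[:, 110:118] = 0                        # synthetic dark line right of centre
    off = line_offset(img)
    print(f"offset={off:.2f}, steering={fuzzy_steering(off):+.2f} rad/s")
```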

Book Mobile Robot Navigation Using a Vision Based Approach

Download or read book Mobile Robot Navigation Using a Vision Based Approach written by Mehmet Serdar Güzel and released in 2012; publisher and page count are not specified. Available in PDF, EPUB and Kindle. Book excerpt: This study addresses the issue of vision-based mobile robot navigation in a partially cluttered indoor environment using a mapless navigation strategy. The work focuses on two key problems, namely vision-based obstacle avoidance and a vision-based reactive navigation strategy. The estimation of optical flow plays a key role in vision-based obstacle avoidance; however, the current view is that this technique is too sensitive to noise and distortion under real conditions, so practical applications in real-time robotics remain scarce. This dissertation presents a novel methodology for vision-based obstacle avoidance, using a hybrid architecture that integrates an appearance-based obstacle detection method into an optical flow architecture built on a behavioural control strategy with a new arbitration module. This enhances the overall performance of conventional optical-flow-based navigation systems, enabling a robot to move around successfully without collisions. Behaviour-based approaches have become the dominant methodology for designing control strategies for robot navigation. Two different behaviour-based navigation architectures are proposed for the second problem, using monocular vision as the primary sensor and equipped with a 2-D range finder. Both utilize an accelerated version of the Scale Invariant Feature Transform (SIFT) algorithm. The first architecture employs a qualitative control algorithm to steer the robot towards a goal whilst avoiding obstacles, whereas the second employs an intelligent control framework. This allows components of soft computing to be integrated into the proposed SIFT-based navigation architecture while conserving the same set of behaviours and system structure as the previously defined architecture. The intelligent framework incorporates a novel distance estimation technique using the scale parameters obtained from the SIFT algorithm: the scale parameters and a corresponding zooming factor are used as inputs to train a neural network that determines physical distance. Furthermore, a fuzzy controller is designed and integrated into this framework to estimate linear velocity, and a neural-network-based solution is adopted to estimate the steering direction of the robot. As a result, this intelligent approach allows the robot to complete its task successfully, in a smooth and robust manner, without collisions. MS Robotics Studio software was used to simulate the systems, and a modified Pioneer 3-DX mobile robot was used for real-time implementation. Several realistic scenarios were developed and comprehensive experiments conducted to evaluate the performance of the proposed navigation systems. KEY WORDS: Mobile robot navigation using vision, Mapless navigation, Mobile robot architecture, Distance estimation, Vision for obstacle avoidance, Scale Invariant Feature Transforms, Intelligent framework.
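
The distance-estimation idea described above, feeding SIFT scale parameters and a zooming factor to a neural network, can be sketched as follows. The training data here are synthetic, generated from an assumed inverse relationship between scale and distance; this is only an illustration of the idea, not the dissertation's trained network.

```python
# A minimal sketch of estimating physical distance from SIFT scale: a small
# regression network maps (keypoint scale, zoom factor) to distance.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic training set: distance roughly inversely proportional to the
# observed scale, modulated by the camera zoom factor (assumed model).
n = 500
distance = rng.uniform(0.5, 5.0, n)                       # metres
zoom = rng.uniform(1.0, 3.0, n)                           # camera zooming factor
scale = 10.0 * zoom / distance + rng.normal(0, 0.1, n)    # observed SIFT scale

X = np.column_stack([scale, zoom])
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X, distance)

# Query: a keypoint seen at scale 12 with zoom factor 2.0.
print("estimated distance (m):", net.predict([[12.0, 2.0]])[0])
```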

Book Robot Navigation from Nature

Download or read book Robot Navigation from Nature written by Michael John Milford and published by Springer Science & Business Media. This book was released on 2008-02-11 with total page 203 pages. Available in PDF, EPUB and Kindle. Book excerpt: This pioneering book describes the development of a robot mapping and navigation system inspired by models of the neural mechanisms underlying spatial navigation in the rodent hippocampus. Computational models of animal navigation systems have traditionally had limited performance when implemented on robots. This is the first research to test existing models of rodent spatial mapping and navigation on robots in large, challenging, real world environments.

Book Learning and Vision Algorithms for Robot Navigation

Download or read book Learning and Vision Algorithms for Robot Navigation written by Margrit Betke and released in 1995 with total page 128 pages; publisher not specified. Available in PDF, EPUB and Kindle. Book excerpt: Abstract: "This thesis studies problems that a mobile robot encounters while it is navigating through its environment. The robot either explores an unknown environment or navigates through a somewhat familiar environment. The thesis addresses the design of algorithms for (1) environment learning, (2) position estimation using landmarks, and (3) visual landmark recognition. In the area of mobile robot environment learning, we introduce the problem of piecemeal learning of an unknown environment: the robot must return to its starting point after each piece of exploration. We give linear-time algorithms for exploring environments modeled as grid-graphs with rectangular obstacles. Our best algorithm for piecemeal learning of arbitrary undirected graphs runs in almost linear time. It is crucial for a mobile robot to be able to localize itself in its environment. We describe a linear-time algorithm for localizing the mobile robot in an environment with landmarks. The robot can identify these landmarks and measure their bearings. Given such noisy measurements, the algorithm estimates the robot's position and orientation with respect to the map of the environment. The algorithm makes efficient use of our representation of the landmarks by complex numbers. The thesis also addresses the problem of how landmarks in the robot's surroundings can be recognized visually. We introduce an efficient, model-based recognition algorithm that exploits a fast version of simulated annealing. To avoid false recognition, we propose a method to select model images by measuring the information content of the images. The performance of the algorithm is demonstrated with real-world images of traffic signs."
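
The use of bearings to known landmarks for localization can be illustrated with a small sketch in which landmarks are represented as complex numbers and the position is recovered by linear least squares. The known-heading assumption and the least-squares formulation are simplifications for illustration, not the thesis's linear-time algorithm.

```python
# A minimal sketch of bearing-based localization with landmarks represented as
# complex numbers: each measured bearing constrains the robot position to a
# line, and the position is recovered by linear least squares.
import numpy as np

def localize(landmarks, bearings, heading):
    """Estimate the robot position (a complex number) in the map frame.

    landmarks: known landmark positions z_i as complex numbers
    bearings:  measured bearing of landmark i relative to the robot heading (rad)
    heading:   assumed-known robot orientation in the map frame (rad)
    """
    z = np.asarray(landmarks, dtype=complex)
    d = np.exp(1j * (heading + np.asarray(bearings)))    # ray directions in the map
    # Each bearing constrains the position p: Im((z_i - p) * conj(d_i)) = 0,
    # which is linear in (x, y); stack the constraints and solve least squares.
    A = np.column_stack([-d.imag, d.real])
    b = (z * np.conj(d)).imag
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return complex(xy[0], xy[1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_pose, heading = 2.0 + 1.0j, 0.3
    landmarks = [5 + 4j, -1 + 3j, 4 - 2j]
    bearings = [np.angle(z - true_pose) - heading + rng.normal(0, 0.01)
                for z in landmarks]
    print("estimated position:", localize(landmarks, bearings, heading))
```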

Book Principles of Robot Motion

Download or read book Principles of Robot Motion written by Howie Choset and published by MIT Press. This book was released on 2005-05-20 with total page 642 pages. Available in PDF, EPUB and Kindle. Book excerpt: A text that makes the mathematical underpinnings of robot motion accessible and relates low-level details of implementation to high-level algorithmic concepts. Robot motion planning has become a major focus of robotics. Research findings can be applied not only to robotics but to planning routes on circuit boards, directing digital actors in computer graphics, robot-assisted surgery and medicine, and in novel areas such as drug design and protein folding. This text reflects the great advances that have taken place in the last ten years, including sensor-based planning, probabilistic planning, localization and mapping, and motion planning for dynamic and nonholonomic systems. Its presentation makes the mathematical underpinnings of robot motion accessible to students of computer science and engineering, relating low-level implementation details to high-level algorithmic concepts.

Book Vision based Robot Localization Using Artificial and Natural Landmarks

Download or read book Vision based Robot Localization Using Artificial and Natural Landmarks, released in 2004; author, publisher and page count are not specified. Available in PDF, EPUB and Kindle. Book excerpt: In mobile robot applications, it is important for a robot to know where it is. Accurate localization becomes crucial for navigation and map-building applications, because both the route to follow and the positions of the objects to be inserted into the map depend heavily on the position of the robot in the environment. For localization, the robot uses measurements taken by various devices such as laser rangefinders, sonars, odometry devices and vision. Generally these devices give the distances of the objects in the environment to the robot, and by processing this distance information the robot finds its location in the environment. In this thesis, two vision-based robot localization algorithms are implemented. The first algorithm uses artificial landmarks as the objects around the robot; by measuring the positions of these landmarks with respect to the camera system, the robot locates itself in the environment. The locations of these landmarks are known. The second algorithm, instead of using artificial landmarks, estimates the robot's location by measuring the positions of objects that naturally exist in the environment. These objects are treated as natural landmarks, and their locations are not known initially. A three-wheeled robot base, on which a stereo camera system is mounted, is used as the mobile robot unit. Processing and control tasks of the system are performed by a stationary PC. Experiments are performed on this robot system. The stereo camera system is the measurement device for this robot.
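
The first algorithm's idea, locating the robot from measured positions of landmarks whose map locations are known, can be illustrated by fitting a 2-D rigid transform (rotation plus translation) between the map frame and the robot frame. The landmark coordinates below are invented, and the SVD-based fit is a generic technique, not the thesis implementation.

```python
# A minimal sketch of localization from artificial landmarks with known map
# positions: landmark positions measured in the robot frame are aligned to the
# map by a 2-D rigid transform estimated with an SVD-based least-squares fit.
import numpy as np

def robot_pose_from_landmarks(map_pts, robot_pts):
    """Return (x, y, theta) of the robot in the map from point correspondences.

    map_pts:   Nx2 known landmark positions in the map frame
    robot_pts: Nx2 the same landmarks measured in the robot frame
    """
    map_pts, robot_pts = np.asarray(map_pts, float), np.asarray(robot_pts, float)
    mu_m, mu_r = map_pts.mean(0), robot_pts.mean(0)
    H = (robot_pts - mu_r).T @ (map_pts - mu_m)    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_r                            # robot origin in the map frame
    return t[0], t[1], np.arctan2(R[1, 0], R[0, 0])

if __name__ == "__main__":
    # Ground truth: robot at (1.0, 2.0) with heading 30 degrees.
    theta = np.radians(30)
    R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    t = np.array([1.0, 2.0])
    map_pts = np.array([[3.0, 4.0], [0.0, 5.0], [4.0, 1.0], [2.0, 2.0]])
    robot_pts = (map_pts - t) @ R                  # map -> robot frame: R^T (p - t)
    print(robot_pose_from_landmarks(map_pts, robot_pts))
```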

Book Vision based Navigation for Mobile Robots on Ill-structured Roads

Download or read book Vision based Navigation for Mobile Robots on Ill-structured Roads written by Hyun Nam Lee and released in 2010; publisher and page count are not specified. Available in PDF, EPUB and Kindle. Book excerpt: Autonomous robots can replace humans in exploring hostile areas, such as Mars and other inhospitable regions. A fundamental task for an autonomous robot is navigation. Due to the inherent difficulties in understanding natural objects and changing environments, navigation in unstructured environments, such as natural environments, remains largely unsolved. Studying navigation in ill-structured environments [1], where roads do not disappear completely, improves our understanding of these difficulties. We develop algorithms for robot navigation on ill-structured roads with monocular vision based on two elements: appearance information and geometric information. The fundamental problem of appearance-based navigation is road representation. We propose a new type of road description, a vision vector space (V2-Space), which is a set of local collision-free directions in image space. We report how the V2-Space is constructed and how it can be used to incorporate vehicle kinematic, dynamic, and time-delay constraints in motion planning. Failures occur due to the limitations of appearance-based navigation, such as the lack of geometric information, so we expand the research to consider geometric information. We present a vision-based navigation system using geometric information. To compute depth with monocular vision, we use images obtained from different camera perspectives during robot navigation. For any given image pair, the depth error in regions close to the camera baseline can be excessively large. This degenerate region is named the untrusted area, and it can lead to collisions. We analyze how the untrusted areas are distributed on the road plane and predict them before the robot makes its move. We propose an algorithm that helps the robot avoid the untrusted area by selecting optimal locations at which to take frames while navigating. Experiments show that the algorithm can significantly reduce the depth error and hence reduce the risk of collisions. Although this approach is developed for monocular vision, it can be applied to multiple cameras to control the depth error. The concept of an untrusted area can also be applied to 3D reconstruction with a two-view approach.
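
As a rough illustration of steering from a set of local collision-free directions, in the spirit of the V2-Space, the sketch below picks the admissible heading closest to a desired goal heading. The free sectors and goal value are invented; constructing the V2-Space from imagery, as the dissertation does, is not shown.

```python
# A minimal sketch of choosing a steering direction from a set of local
# collision-free heading intervals: clamp the goal heading into each interval
# and keep the closest admissible value.
import math

def choose_heading(free_intervals, goal_heading):
    """free_intervals: list of (lo, hi) admissible headings in radians.
    Returns the admissible heading nearest to goal_heading, or None if blocked."""
    best, best_err = None, math.inf
    for lo, hi in free_intervals:
        candidate = min(max(goal_heading, lo), hi)   # clamp goal into the interval
        err = abs(candidate - goal_heading)
        if err < best_err:
            best, best_err = candidate, err
    return best

if __name__ == "__main__":
    # Two collision-free sectors (e.g. derived from the image); goal slightly right.
    sectors = [(-0.8, -0.2), (0.1, 0.6)]
    print(choose_heading(sectors, goal_heading=-0.1))   # -> -0.2, the closest free heading
```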

Book Visual Navigation for Robots in Urban and Indoor Environments

Download or read book Visual Navigation for Robots in Urban and Indoor Environments written by Yan Lu and released in 2015; publisher and page count are not specified. Available in PDF, EPUB and Kindle. Book excerpt: As a fundamental capability for mobile robots, navigation involves multiple tasks including localization, mapping, motion planning, and obstacle avoidance. In unknown environments, a robot has to construct a map of the environment while simultaneously keeping track of its own location within the map. This is known as simultaneous localization and mapping (SLAM). For urban and indoor environments, SLAM is especially important since GPS signals are often unavailable. Visual SLAM uses cameras as the primary sensor and is a highly attractive but challenging research topic. The major challenge lies in robustness to lighting variation and uneven feature distribution. Another challenge is building semantic maps composed of high-level landmarks. To meet these challenges, we investigate feature fusion approaches for visual SLAM. The basic rationale is that since urban and indoor environments contain various feature types such as points and lines, combining these features should improve robustness, and, at the same time, high-level landmarks can be defined as or derived from these combinations. We design a novel data structure, the multilayer feature graph (MFG), to organize five types of features and their inner geometric relationships. Building upon a two-view-based MFG prototype, we extend the application of MFG to image-sequence-based mapping using an extended Kalman filter (EKF). We model and analyze how errors are generated and propagated through the construction of a two-view-based MFG. This enables us to treat each MFG as an observation in the EKF update step. We apply the MFG-EKF method to a building exterior mapping task and demonstrate its efficacy. A two-view-based MFG requires a sufficient baseline to be successfully constructed, which is not always feasible. Therefore, we further devise a multiple-view-based algorithm to construct MFG as a global map. Our proposed algorithm takes a video stream as input, initializes and iteratively updates the MFG based on extracted key frames, and refines robot localization and MFG landmarks using local bundle adjustment. We show the advantage of our method by comparing it with state-of-the-art methods on multiple indoor and outdoor datasets. To avoid the scale ambiguity of monocular vision, we investigate the application of RGB-D sensing to SLAM. We propose an algorithm that fuses point and line features: we extract 3D points and lines from RGB-D data, analyze their measurement uncertainties, and compute camera motion using maximum likelihood estimation. We validate our method using both uncertainty analysis and physical experiments, where it outperforms its counterparts under both constant and varying lighting conditions. Besides visual SLAM, we also study specular object avoidance, which is a great challenge for range sensors. We propose a vision-based algorithm to detect planar mirrors. We derive geometric constraints for corresponding real-virtual features across images and employ RANSAC to develop a robust detection algorithm. Our algorithm achieves a detection accuracy of 91.0%. The electronic version of this dissertation is accessible from http://hdl.handle.net/1969.1/155525
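
The idea of treating each new observation as an input to an EKF update can be illustrated with a generic pose update from a range-bearing measurement to a known landmark. The noise levels and landmark below are invented, and this is a textbook EKF update rather than the MFG-EKF formulation of the dissertation.

```python
# A minimal sketch of one EKF measurement update refining a robot pose
# estimate from a range-bearing observation of a known landmark.
import numpy as np

def ekf_update(x, P, z, landmark, R):
    """x=(x, y, theta) pose mean, P its covariance, z=(range, bearing) to a
    landmark with known map position, R the measurement noise covariance."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    q = dx * dx + dy * dy
    z_pred = np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])
    # Jacobian of the measurement model with respect to the pose.
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0.0],
                  [dy / q,           -dx / q,         -1.0]])
    y = z - z_pred
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi        # wrap the bearing residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P

if __name__ == "__main__":
    x = np.array([0.0, 0.0, 0.0])                      # prior pose mean
    P = np.diag([0.5, 0.5, 0.1])                       # prior covariance
    landmark = np.array([4.0, 3.0])
    z = np.array([5.1, np.arctan2(3, 4) + 0.02])       # noisy range-bearing
    R = np.diag([0.1**2, 0.02**2])
    x, P = ekf_update(x, P, z, landmark, R)
    print("updated pose:", x)
```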

Book Vision Based Mobile Robotics: mobile robot localization using vision sensors and active probabilistic approaches

Download or read book Vision Based Mobile Robotics: mobile robot localization using vision sensors and active probabilistic approaches written by Emanuele Frontoni and published by Lulu.com. This book was released on 2012-01-22 with total page 157 pages. Available in PDF, EPUB and Kindle. Book excerpt: The use of vision in mobile robotics is one of the main goals of this thesis. In particular, novel appearance-based image matching metrics are introduced and applied to the problem of mobile robot localization. Similarity measures between the robot's views are used in probabilistic methods for robot pose estimation. Within this field of probabilistic localization, active approaches are proposed that allow the robot to localize itself faster and more accurately. All methods have been extensively tested using a real robot in an indoor environment. Note: the book is the publication of the PhD thesis defended at Università Politecnica delle Marche, Ancona, Italy, in 2006 by Emanuele Frontoni.
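
A minimal way to picture an appearance-based similarity measure driving probabilistic localization is to weight particles by the histogram similarity between the current view and reference views stored near each particle, as sketched below. The similarity measure and synthetic images are assumptions for illustration, not the metrics introduced in the thesis.

```python
# A minimal sketch of using appearance-based image similarity as the
# measurement likelihood in Monte Carlo localization: particle weights are set
# from the histogram similarity between the current view and stored views.
import numpy as np

def histogram(img, bins=16):
    h, _ = np.histogram(img, bins=bins, range=(0, 255))
    return h / h.sum()

def similarity(img_a, img_b):
    """Histogram intersection in [0, 1]; higher means more similar views."""
    return float(np.minimum(histogram(img_a), histogram(img_b)).sum())

def reweight_particles(weights, current_view, reference_views):
    """reference_views[i] is the stored view closest to particle i's pose."""
    likelihood = np.array([similarity(current_view, ref) for ref in reference_views])
    new_w = weights * (likelihood + 1e-12)
    return new_w / new_w.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    view_a = rng.integers(0, 120, (60, 80), dtype=np.uint8)    # dark corridor
    view_b = rng.integers(120, 255, (60, 80), dtype=np.uint8)  # bright hall
    current = rng.integers(0, 120, (60, 80), dtype=np.uint8)   # resembles view_a
    w = reweight_particles(np.array([0.5, 0.5]), current, [view_a, view_b])
    print("particle weights:", w)
```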

Book Mobile Robots Navigation

Download or read book Mobile Robots Navigation written by Alejandra Barrera and published by BoD – Books on Demand. This book was released on 2010-03-01 with total page 684 pages. Available in PDF, EPUB and Kindle. Book excerpt: Mobile robot navigation includes several interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, constructing a spatial representation from the sensory information perceived; (iv) localization, estimating the robot's position within the spatial map; (v) path planning, finding a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors all over the world. Research cases are documented in 32 chapters organized within the seven categories described next.

Book Advances in Plan Based Control of Robotic Agents

Download or read book Advances in Plan Based Control of Robotic Agents written by Michael Beetz and published by Springer Science & Business Media. This book was released on 2002-11-01 with total page 299 pages. Available in PDF, EPUB and Kindle. Book excerpt: In recent years, autonomous robots, including Xavier, Martha [1], Rhino [2,3], Minerva, and Remote Agent, have shown impressive performance in long-term demonstrations. In NASA's Deep Space program, for example, an autonomous spacecraft controller, called the Remote Agent [5], has autonomously performed a scientific experiment in space. At Carnegie Mellon University, Xavier [6], another autonomous mobile robot, navigated through an office environment for more than a year, allowing people to issue navigation commands and monitor their execution via the Internet. In 1998, Minerva [7] acted for 13 days as a museum tourguide in the Smithsonian Museum, and led several thousand people through an exhibition. These autonomous robots have in common that they rely on plan-based control in order to achieve better problem-solving competence. In the plan-based approach, robots generate control actions by maintaining and executing a plan that is effective and has a high expected utility with respect to the robots' current goals and beliefs. Plans are robot control programs that a robot can not only execute but also reason about and manipulate [4]. Thus, a plan-based controller is able to manage and adapt the robot's intended course of action (the plan) while executing it and can thereby better achieve complex and changing tasks.

Book Proceedings of the National Conference on Advanced Manufacturing & Robotics, January 10-11, 2004

Download or read book Proceedings of the National Conference on Advanced Manufacturing & Robotics, January 10-11, 2004 written by S. N. Shome and published by Allied Publishers. This book was released on 2004 with total page 594 pages. Available in PDF, EPUB and Kindle. Book excerpt: Contributed papers presented at the conference held at Central Mechanical Engineering Research Institute, Durgapur.

Book AttentiRobot: A Visual Attention based Landmark Selection Approach for Mobile Robot Navigation

Download or read book AttentiRobot: A Visual Attention based Landmark Selection Approach for Mobile Robot Navigation. Author, publisher, release date and page count are not specified. Available in PDF, EPUB and Kindle. Book excerpt: Visual attention refers to the ability of a vision system to rapidly detect visually salient locations in a given scene. The selection of robust visual landmarks of an environment, in turn, is a cornerstone of reliable vision-based robot navigation systems. Can salient scene locations provided by visual attention be useful for robot navigation? This work investigates the potential and effectiveness of the visual attention mechanism in providing pre-attentive scene information to a robot navigation system. The basic idea is to detect and track salient locations, or spots of attention, by building trajectories that memorize the spatial and temporal evolution of these spots. A persistency test, based on the lengths of the built trajectories, then allows the selection of good environment landmarks. The selected landmarks can be used in feature-based localization and mapping systems, helping the mobile robot accomplish navigation tasks.
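
The persistency test described above can be pictured with a small sketch that links salient spots across frames by nearest-neighbour association and keeps only trajectories that last long enough. Spot detection itself, the matching radius and the length threshold are invented for illustration and do not reproduce the publication's attention model.

```python
# A minimal sketch of the persistency idea: salient spots detected in each
# frame are linked into trajectories, and spots whose trajectories persist
# over enough frames are kept as landmark candidates.
import numpy as np

def track_spots(frames_of_spots, match_radius=10.0, min_length=4):
    """frames_of_spots: per-frame lists of (x, y) salient spot positions.
    Returns trajectories (lists of points) that persist for >= min_length frames."""
    trajectories = []                                    # each: list of (x, y)
    for spots in frames_of_spots:
        unmatched = list(spots)
        for traj in trajectories:
            if not unmatched:
                break
            last = np.array(traj[-1])
            d = [np.linalg.norm(last - np.array(s)) for s in unmatched]
            j = int(np.argmin(d))
            if d[j] < match_radius:                      # extend an existing trajectory
                traj.append(unmatched.pop(j))
        trajectories.extend([[s] for s in unmatched])    # start new trajectories
    return [t for t in trajectories if len(t) >= min_length]

if __name__ == "__main__":
    # A stable spot drifting slowly, plus one-off spurious detections.
    frames = [[(50, 40), (10, 90)], [(52, 41)], [(53, 43), (80, 20)],
              [(55, 44)], [(56, 46)]]
    for traj in track_spots(frames):
        print("persistent landmark trajectory:", traj)
```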

Book Directed Sonar Sensing for Mobile Robot Navigation

Download or read book Directed Sonar Sensing for Mobile Robot Navigation written by John J. Leonard and published by Springer Science & Business Media. This book was released on 2012-12-06 with total page 199 pages. Available in PDF, EPUB and Kindle. Book excerpt: This monograph is a revised version of the D.Phil. thesis of the first author, submitted in October 1990 to the University of Oxford. This work investigates the problem of mobile robot navigation using sonar. We view model-based navigation as a process of tracking naturally occurring environment features, which we refer to as "targets". Targets that have been predicted from the environment map are tracked to provide vehicle position estimates. Targets that are observed, but not predicted, represent unknown environment features or obstacles, and cause new tracks to be initiated, classified, and ultimately integrated into the map. Chapter 1 presents a brief definition of the problem and a discussion of the basic research issues involved. No attempt is made to survey exhaustively the mobile robot navigation literature; the reader is strongly encouraged to consult other sources. The recent collection edited by Cox and Wilfong [34] is an excellent starting point, as it contains many of the standard works of the field. Also, we assume familiarity with the Kalman filter. There are many well-known texts on the subject; our notation derives from Bar-Shalom and Fortmann [7]. Chapter 2 provides a detailed sonar sensor model. A good sensor model is a crucial component of our approach to navigation, and is used both for predicting expected observations and for classifying unexpected observations.
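
The distinction between predicted and unexpected targets can be pictured with a small data-association sketch: sonar returns falling inside a validation gate around a predicted range update that target's track, while the rest initiate new candidate tracks. The 1-D range model, gate size and data are illustrative assumptions, not the monograph's Kalman-filter-based formulation.

```python
# A minimal sketch of gated data association for sonar returns: returns close
# to a predicted range update that target, all others become new-track candidates.
def associate_returns(predicted_ranges, measured_ranges, gate=0.3):
    """predicted_ranges: {target_id: expected range (m)} predicted from the map.
    Returns (matched, unexpected): per-target updates and new-track candidates."""
    matched, unexpected = {}, []
    for z in measured_ranges:
        # Find the predicted target whose expected range is closest to this return.
        best = min(predicted_ranges.items(), key=lambda kv: abs(kv[1] - z),
                   default=(None, None))
        if best[0] is not None and abs(best[1] - z) < gate:
            matched.setdefault(best[0], []).append(z)    # track update
        else:
            unexpected.append(z)                         # initiate a new track
    return matched, unexpected

if __name__ == "__main__":
    predicted = {"wall_1": 1.50, "corner_3": 2.80}       # expected observations
    returns = [1.46, 2.93, 4.10]                         # one return is unexpected
    matched, unexpected = associate_returns(predicted, returns)
    print("matched:", matched)
    print("unexpected (new tracks):", unexpected)
```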