“We are approaching a time when machines will be capable of outperforming humans at nearly any task,” observes Prof. Moshe Vardi, a computer science professor at Rice University, Texas. Robot learning is a broad research field spanning both robotics and machine learning. It studies techniques that enable a robot to acquire novel skills and adapt to its surroundings through learning algorithms (Connell & Mahadevan, 1993). Autonomous robots must therefore be capable of learning and maintaining models of their surroundings. Recent research on mobile robot navigation has yielded two key paradigms for mapping indoor environments: topological and grid-based maps (Thrun, 1997). This paper offers an analysis of a vision-based homing behavior that provides self-positioning for a mobile robot, exploiting ceiling structures as landmarks. This fairly new behavioral concept joins a navigation approach developed for mobile robots whose aim is to represent the robot's spatial knowledge as a topological map, in which nodes are self-positioning sites and edges are reactive actions that move the robot between two physical nodes.
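The topological map just described can be sketched as a small directed graph in which planning reduces to finding a chain of reactive actions between two self-positioning sites. The following is a minimal illustrative sketch, not the authors' implementation; the site names, action labels, and the `TopologicalMap` class are hypothetical.

```python
from collections import deque


class TopologicalMap:
    """Graph of self-positioning sites (nodes) linked by reactive actions (edges)."""

    def __init__(self):
        self.edges = {}  # site -> list of (action, neighbouring site)

    def connect(self, src, dst, action):
        """Register a reactive action that moves the robot from src to dst."""
        self.edges.setdefault(src, [])
        self.edges.setdefault(dst, [])
        self.edges[src].append((action, dst))

    def plan(self, start, goal):
        """Breadth-first search returning the action sequence from start to goal."""
        queue = deque([(start, [])])
        visited = {start}
        while queue:
            site, actions = queue.popleft()
            if site == goal:
                return actions
            for action, nxt in self.edges.get(site, []):
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append((nxt, actions + [action]))
        return None  # goal site unreachable
```

With this representation, navigation is simply a lookup of which reactive behaviors to trigger in sequence; for example, `plan("lobby", "lab")` might return `["follow_wall", "enter_door"]` given the corresponding edges.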
Self-Positioning Robot Navigation
The behavioral approach to controlling robots was originally inspired by the animal world, where a behavior can be described as a self-sufficient stereotyped action sustained by a given stimulus (Facchinetti, Tièche, & Hügli, 1995). Although simple tasks can be attained with ease through this approach, a common challenge is that distinctive navigation problems requiring spatial knowledge (a map) are intricate to solve. This is due to the underlying robot architecture, which is based on the interaction between the surroundings and reactive behaviors that are not mapped within the robot's configuration space (Facchinetti, Tièche, & Hügli, 1995).
A solution to the spatial-knowledge challenge is offered by the self-positioning approach proposed by Facchinetti et al. (1995) in their earlier publications, which exploits a new class of vision-based actions that provide the critical link between the control and navigation levels. In the self-positioning methodology, homing actions control the robot by servoing its moves toward low-level visual primitives, such as segments and points extracted from image sequences, which correspond to structures and features of the environment. Herein, we define self-positioning (homing) as the behavior of finding a stable position of the robot pose relative to the surroundings, expressed in terms of visual primitives within an image or set of images (Facchinetti, Tièche, & Hügli, 1995).
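The servoing idea above can be illustrated with a simple proportional control loop that drives the image-plane error between an observed landmark (e.g., a point extracted from a ceiling image) and its target position toward zero. This is a minimal sketch under assumed interfaces: the `observe` and `command` callables, the gain, and the tolerance are all hypothetical, not part of the cited work.

```python
def home_to_landmark(observe, command, target=(0.0, 0.0),
                     gain=0.5, tol=1e-3, max_steps=100):
    """Proportional visual-servoing loop: repeatedly move the robot so that
    the observed landmark position converges to its target image position.

    observe() -> (x, y): landmark position in the current image (assumed API)
    command(dx, dy): issue a motion that shifts the landmark by roughly (dx, dy)
    Returns True once the pose is stable (error below tol), else False.
    """
    for _ in range(max_steps):
        x, y = observe()                       # current landmark position
        ex, ey = target[0] - x, target[1] - y  # image-plane error
        if (ex * ex + ey * ey) ** 0.5 < tol:
            return True                        # stable pose reached
        command(gain * ex, gain * ey)          # move to reduce the error
    return False
```

Because the primitives are low level (points, segments) the loop needs no metric map of the environment: stability of the landmark in the image defines the self-positioning site.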