In this paper we propose a specialized hardware architecture for the real-time visual navigation of a mobile robot. The adopted navigation method is based on a two-step approach. Features are extracted and matched over an image sequence captured by a video camera mounted on the mobile robot during its motion. As a result, a 2D motion field is recovered and used to extract ego-motion parameters. Our hardware implements the first step of the method, which consists of feature extraction and raw match computation by means of radiometric similarity. Real-time performance is made possible by a 40 MHz processing rate.
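The abstract does not specify which radiometric similarity measure the hardware computes; a common choice for matching image patches by their intensity values is zero-mean normalized cross-correlation (ZNCC). The sketch below is an illustrative software analogue of that matching step, not the paper's hardware implementation, and the function names are hypothetical:

```python
import numpy as np

def radiometric_similarity(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation between two equally sized
    grayscale patches; returns a score in [-1, 1], with 1 for a perfect
    radiometric match.  (ZNCC is an assumption: one common similarity
    measure, not necessarily the one used by the paper's hardware.)"""
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0.0:
        return 0.0  # at least one patch is uniform; no correlation defined
    return float((a * b).sum() / denom)

def best_match(feature: np.ndarray, candidates: list) -> int:
    """Index of the candidate patch most similar to the feature patch,
    i.e. the 'raw match' selected by radiometric similarity."""
    scores = [radiometric_similarity(feature, c) for c in candidates]
    return int(np.argmax(scores))
```

Matching a feature patch against candidate windows in the next frame then reduces to calling `best_match` once per feature; the dedicated hardware performs this per-patch correlation at the 40 MHz rate mentioned above.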
|Title:||Specialized Hardware for Real Time Navigation|
|Publication date:||2001|
|Digital Object Identifier (DOI):||http://dx.doi.org/10.1006/rtim.1999.0220|
|Appears in type:||1.1 Journal article|