The growing use of Geographic Information Systems (GIS) has opened new research opportunities in the application of satellite imagery to urban analysis. The information content of such images depends on the combined radiometric, spatial, and spectral resolution of the sensor. The different bands of a satellite sensor are recorded synchronously, so their pixels can be precisely matched and compared with their counterpart pixels in other bands. This means that spectral ("colour") differences can be used to identify urban features to the extent that colours are diagnostic: a sort of coarse spectroscopy from space. Spectral sensor technology, however, coupled with the complexity of ground features in urban areas, can make visual interpretation of satellite imagery both labour intensive and uncertain. Moreover, the informational utility of a multispectral image for land cover classification is often limited by the spectral and spatial resolution of the imaging system, and no single existing system offers both high spatial and high spectral resolution. Furthermore, applying the techniques developed for earlier, lower-resolution satellite imagery (such as maximum likelihood classification) to high-resolution imagery can degrade the results. Lower-resolution data are not greatly affected by artefacts such as shadows, and they also "smooth out" variations across ranges of individual pixels, allowing statistical processing to produce effective land cover maps. In higher-resolution data, individual pixels can represent individual objects, and contiguous pixels in an image can vary dramatically, producing very mixed or "confused" classification results. This paper proposes a two-stage classification procedure that effectively reduces the negative impacts of the spectral ambiguity and spatial complexity of land cover classes in high-resolution imagery of urban environments.
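The maximum likelihood classifier mentioned above assigns each pixel to the class whose multivariate Gaussian model gives the highest likelihood. A minimal sketch, assuming per-class means and covariances have already been estimated from training samples (all names here are illustrative, not from the paper):

```python
import numpy as np

def mlc_classify(pixels, class_means, class_covs):
    """Maximum likelihood classification.

    pixels: (N, B) array of N pixel spectra with B bands.
    class_means: list of (B,) mean vectors, one per class.
    class_covs: list of (B, B) covariance matrices, one per class.
    Returns an (N,) array of class indices.
    """
    scores = []
    for mean, cov in zip(class_means, class_covs):
        diff = pixels - mean
        inv = np.linalg.inv(cov)
        # Squared Mahalanobis distance of each pixel to the class mean.
        mah = np.einsum("ij,jk,ik->i", diff, inv, diff)
        # Gaussian log-likelihood up to an additive constant.
        scores.append(-0.5 * (np.log(np.linalg.det(cov)) + mah))
    return np.argmax(np.stack(scores, axis=1), axis=1)
```

On low-resolution imagery this per-pixel statistical rule works well; on high-resolution urban scenes, where adjacent pixels belonging to one object can differ sharply, it produces the "confused" results the abstract describes.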
In order to achieve both high spatial and high spectral resolution in a single image, image fusion is employed, and its influence on the thematic accuracy of land cover classification is examined through an example using IKONOS panchromatic and multispectral images. Three fused images were generated using the intensity-hue-saturation (IHS), principal component analysis (PCA), and high-pass filter (HPF) fusion methods. All the images were then classified using the supervised maximum likelihood classifier (MLC). Using the classification of the original multispectral image as a benchmark, an integrative analysis of overall accuracy and of the degree of improvement obtained with the fused images is carried out. The validity and limitations of image fusion for land cover classification are finally discussed.
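Of the three fusion methods compared, IHS substitution is the simplest to illustrate. A minimal sketch of the fast (additive) IHS variant, assuming the multispectral bands have already been resampled to the panchromatic resolution; function and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def ihs_fuse(ms, pan):
    """Fast IHS pan-sharpening.

    ms: (H, W, 3) multispectral image, upsampled to the pan grid.
    pan: (H, W) panchromatic image.
    Replaces the intensity component with the panchromatic band by
    injecting the pan detail equally into every band.
    """
    intensity = ms.mean(axis=2)        # I component of the IHS transform
    detail = pan - intensity           # high-resolution spatial detail
    return ms + detail[:, :, None]     # add detail to each band
```

The PCA method works analogously but substitutes the pan band for the first principal component, while the HPF method adds only the high-frequency content of the pan band, which tends to preserve the multispectral radiometry better.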
Title: Satellite Imagery Fusion Methods to Improve Urban Land Cover Classes Identification
Date of publication: 2003
Type: 1.1 Journal article