The motivation for data fusion is to reduce the limitations and uncertainties associated with data from a single sensor. In the context of remotely sensed data, fusion is often performed by combining high spatial resolution with high spectral resolution imagery at different levels. In contrast to pixel-based approaches such as the IHS transformation, this paper focuses on fusing data at the feature level.
In high spatial resolution data the geometry of urban objects can be determined very accurately. However, such data often carries only limited spectral information, for example a three-band RGB image. Similar feature values for thematic classes such as water, dark pavements, or dark rooftops therefore lead to classification errors.
If hyperspectral data is used to classify urban materials, endmembers representing those materials have to be defined. The problem is that endmembers representing urban surface types are often mixtures of spectrally pure materials, which leads to flat spectra. Consequently, such thematic endmembers can hardly be detected by standard algorithms like the Pixel Purity Index (PPI), so standard classification procedures fail.
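To illustrate why mixing flattens a spectrum, here is a minimal linear-mixing sketch in Python; the material names, mixing fractions, and reflectance values are invented for illustration and are not HyMap measurements:

```python
import numpy as np

# Hypothetical reflectance spectra of two spectrally pure materials
# on a coarse wavelength grid (values are illustrative only).
wavelengths = np.linspace(450, 2450, 5)            # nm
asphalt = np.array([0.05, 0.08, 0.10, 0.12, 0.11])
gravel  = np.array([0.30, 0.22, 0.15, 0.10, 0.25])

# Linear mixing: an urban surface pixel is a fractional combination
# of pure materials, r_mix = sum_i f_i * r_i with sum_i f_i = 1.
fractions = np.array([0.5, 0.5])
mixed = fractions[0] * asphalt + fractions[1] * gravel

# The mixture has lower spectral contrast (a "flatter" spectrum)
# than the pure gravel spectrum, which is why mixed urban surfaces
# are poor candidates for purity-based detectors such as the PPI.
print(np.ptp(mixed), np.ptp(gravel))   # peak-to-peak contrast
```

The peak-to-peak contrast of the mixture (0.07) is well below that of the pure gravel spectrum (0.20), matching the "flat spectra" observation above.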
In order to improve the classification process, our approach fuses hyperspectral data recorded by the HyMap sensor with high spatial resolution imagery (digital orthophotos) for a combined endmember selection, classification, and structural analysis.
The endmembers for the thematic classes are determined in a semi-automatic process. After a segmentation of the high spatial resolution dataset, the resulting segments are used to detect those pixels in the hyperspectral dataset that are candidates for the definition of thematic endmembers. The endmembers are stored in a spectral library and used for the classification of the hyperspectral data.
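The candidate-selection step can be sketched as follows, assuming the segment label map has already been resampled to the hyperspectral grid; the array shapes, segment positions, and the use of the segment-mean spectrum as the endmember candidate are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

# Toy hyperspectral cube (rows x cols x bands) and a segment label
# map derived from the high spatial resolution data; 0 = background.
rng = np.random.default_rng(0)
hyper = rng.random((20, 20, 30))
segments = np.zeros((20, 20), dtype=int)
segments[2:8, 3:9] = 1        # e.g. a rooftop segment
segments[12:18, 10:16] = 2    # e.g. a pavement segment

# Spectral library: segment id -> candidate endmember spectrum.
library = {}
for seg_id in np.unique(segments):
    if seg_id == 0:
        continue
    mask = segments == seg_id
    # Candidate pixels are the hyperspectral pixels covered by the
    # segment; their mean spectrum serves as the thematic endmember
    # candidate to be stored in the spectral library.
    library[seg_id] = hyper[mask].mean(axis=0)

print(sorted(library), library[1].shape)
```

Each library entry is a band-wise spectrum (here 30 values), one per segment, ready to be reviewed in the semi-automatic step before classification.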
The segments in the high spatial resolution data are then processed based on the classification of the hyperspectral dataset and the application of overlay rules.
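A minimal sketch of this combined step, assuming a nearest-endmember (minimum Euclidean distance) per-pixel classifier and a simple majority-vote overlay rule; both choices are assumptions for illustration, since the paper's actual classifier and overlay rules are not specified here:

```python
import numpy as np

rng = np.random.default_rng(1)
hyper = rng.random((10, 10, 4))          # rows x cols x bands
names = ["roof", "water", "vegetation"]
endmembers = rng.random((3, 4))          # one library spectrum per class

# Per-pixel classification: assign each hyperspectral pixel to the
# endmember with the smallest Euclidean distance.
dists = np.linalg.norm(hyper[:, :, None, :] - endmembers, axis=3)
classes = dists.argmin(axis=2)           # (10, 10) class indices

# Overlay rule (assumed): each high-resolution segment receives the
# majority class of the hyperspectral pixels it covers.
segments = np.zeros((10, 10), dtype=int)
segments[:5] = 1
segments[5:] = 2
for seg_id in (1, 2):
    votes = classes[segments == seg_id]
    label = names[np.bincount(votes).argmax()]
    print(seg_id, label)
```

More elaborate overlay rules (e.g. rejecting segments with ambiguous votes or using neighbourhood context) fit the same structure: a per-segment decision over the per-pixel hyperspectral classification.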