This study focuses on pixel-wise semantic segmentation of crop production regions using multispectral satellite imagery. A principal aim of the study is to determine whether raw multi-channel inputs are more effective for training semantic segmentation models than their derived counterparts, the spectral indices. For this purpose, the vegetation indices NDVI, ARVI, and SAVI and the water indices NDWI, NDMI, and WRI are employed as inputs. Additionally, multi-channel inputs with 8, 10, and 16 channels are utilized, and all spectral indices are stacked as separate channels to form a further multi-channel input. We conduct deep learning experiments using two semantic segmentation architectures, namely U-Net and DeepLabV3+. Our results show that, in general, feeding raw multi-channel inputs to semantic segmentation models performs considerably better than feeding the spectral indices. Hence, for crop production region segmentation, deep learning models are capable of encoding multispectral information directly. When the spectral indices are compared among themselves, ARVI, which reduces atmospheric scattering effects, achieves better accuracy for both architectures. The results also reveal that the spatial resolution of multispectral data has a significant effect on semantic segmentation performance: the RGB bands, which have the smallest ground sample distance (0.31 m), outperform the multispectral and shortwave infrared bands.
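The spectral indices named above are simple band-ratio formulas. As a minimal illustration (not the paper's pipeline), the sketch below computes NDVI, SAVI, and ARVI from reflectance bands with NumPy, using the standard formulations; the soil-adjustment factor `L = 0.5` and the atmospheric weight `gamma = 1.0` are common default choices, not values taken from the study.

```python
import numpy as np

def spectral_indices(blue, red, nir, L=0.5, gamma=1.0):
    """Compute NDVI, SAVI, and ARVI from reflectance arrays.

    Standard formulations; L (soil adjustment) and gamma (atmospheric
    correction weight) use common default values.
    """
    eps = 1e-8  # guard against division by zero
    ndvi = (nir - red) / (nir + red + eps)
    savi = (1 + L) * (nir - red) / (nir + red + L + eps)
    rb = red - gamma * (blue - red)  # atmospherically corrected red term
    arvi = (nir - rb) / (nir + rb + eps)
    return ndvi, savi, arvi

# Example: a small patch of reflectance values
blue = np.array([[0.05, 0.06], [0.04, 0.05]])
red  = np.array([[0.10, 0.12], [0.08, 0.09]])
nir  = np.array([[0.40, 0.35], [0.45, 0.50]])
ndvi, savi, arvi = spectral_indices(blue, red, nir)
```

Each index is a single-channel image of the same shape as the input bands, so the stacked multi-index input mentioned in the abstract is just a channel-wise concatenation of such maps.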
Hyperspectral satellite imagery, consisting of many visible and infrared bands, is high-dimensional and computationally demanding for deep learning. For vegetation-related problems such as tree segmentation, it is also difficult to train deep architectures due to the lack of large-scale annotated satellite imagery. In this paper, we compare the success of different single-channel indices, constructed from multiple bands, for tree segmentation in a deep convolutional neural network (CNN) architecture. The utilized indices are either hand-crafted, such as the excess green index (ExG) and the normalized difference vegetation index (NDVI), or reconstructed from the visible bands using feature space transformation methods such as principal component analysis (PCA). For comparison, each feature is fed to an identical CNN architecture, a standard U-Net-based symmetric encoder-decoder with hierarchical skip connections, and the segmentation success of each single index is recorded. Experimental results show that single bands constructed from vegetation indices and space transformations can achieve segmentation performance similar to that of the original multi-channel case.
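Both kinds of single-channel inputs mentioned here can be sketched in a few lines. The hypothetical example below computes ExG on chromaticity-normalized RGB (a common formulation) and a PCA projection onto the first principal component via SVD; it illustrates the idea only and is not the paper's implementation.

```python
import numpy as np

def excess_green(rgb):
    """ExG = 2g - r - b on chromaticity-normalized RGB (common formulation)."""
    s = rgb.sum(axis=-1, keepdims=True) + 1e-8
    r, g, b = np.moveaxis(rgb / s, -1, 0)
    return 2 * g - r - b

def pca_first_component(rgb):
    """Project each pixel onto the first principal component of the RGB bands."""
    X = rgb.reshape(-1, 3)
    Xc = X - X.mean(axis=0)
    # SVD of the centered pixel matrix; the first right-singular
    # vector is the first principal direction
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return (Xc @ Vt[0]).reshape(rgb.shape[:2])

img = np.random.rand(64, 64, 3)   # stand-in for a visible-band image
exg = excess_green(img)           # single-channel vegetation index
pc1 = pca_first_component(img)    # single-channel PCA feature
```

Either single-channel map can then replace the three-channel input of the CNN, which is the comparison the abstract describes.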
A hyperspectral image compression method is proposed using an online dictionary learning approach. The online learning mechanism aims to use the fewest dictionary elements for each hyperspectral image under consideration. To meet this "sparsity constraint", the basis pursuit algorithm is used. Hyperspectral imagery from the AVIRIS datasets is used for testing. The effect of the number of non-zero dictionary elements on compression performance is analyzed. Results indicate that the proposed online dictionary learning algorithm may be preferable at higher data rates, as it achieves better PSNR values than state-of-the-art predictive lossy compression schemes.