We propose a visualization method that takes ocean satellite images and fishery data as input and helps users understand the sea conditions under which the catch amount is high. We focus on ocean satellite images with date information and on fishery data represented as a list of triplets (date, catch amount, ocean satellite image). We select the sea off Iwate Prefecture in Japan as our target. Our method employs an autoencoder to calculate similarity distances between images. An autoencoder can be regarded as a pair of an encoder and a decoder: the encoder transforms high-dimensional input data into a low-dimensional feature vector, and the decoder recovers the data from that feature vector. We employ a convolutional neural network model and feed our ocean satellite images to both the input and the output during training. As a result, a feature vector can be calculated for each image. After training the autoencoder, the images are grouped by their features. First, each image is assigned one of the labels "Positive", "Negative", or "None" based on its catch amount. For each positive image, we then find its ten nearest neighbors in the feature vector space. If the number of positive images in the resulting group is greater than or equal to a threshold α (α = 6 in this paper), we judge that the group represents sea conditions with a high catch amount. The number of neighbors and the threshold α were selected by trial and error. Our results show that the images within each extracted group are highly similar, while different groups are visually distinct. We expect these results to be helpful for examining sea conditions under which the catch amount is likely to be high.
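The grouping step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the encoder has already produced a feature vector per image, uses plain NumPy with Euclidean distance, and the function and variable names are hypothetical.

```python
import numpy as np

def find_high_catch_groups(features, labels, k=10, alpha=6):
    """For each image labeled "Positive", collect its k nearest neighbors
    in feature space (Euclidean distance) and keep the group when at
    least alpha of those neighbors are also labeled "Positive"."""
    features = np.asarray(features, dtype=float)
    groups = []
    for i, lab in enumerate(labels):
        if lab != "Positive":
            continue
        # Distance from image i to every other image; exclude the image itself.
        dists = np.linalg.norm(features - features[i], axis=1)
        dists[i] = np.inf
        neighbors = np.argsort(dists)[:k]
        n_positive = sum(labels[j] == "Positive" for j in neighbors)
        if n_positive >= alpha:
            groups.append((i, neighbors.tolist()))
    return groups
```

With toy feature vectors forming a tight "positive" cluster and a distant "negative" cluster, every positive image's ten nearest neighbors are themselves positive, so each positive image yields a kept group; in real data only some positives pass the α threshold.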