KEYWORDS: Data hiding, Binary data, Image compression, Computer programming, Signal to noise ratio, Steganography, Data modeling, Image transmission, Interference (communication), Digital imaging
Information hiding can be performed under the guise of a digital image. We consider the following scenario: Alice and Bob share an image and would like to use it as a cover image to communicate a message m. We are interested in answering two questions: what is the maximum amount of information that can be sent for a given level of degradation to the image, and how can this limit be achieved in practice? We require the recovered message to be identical to the embedded one.
Our model begins with Alice compressing a message to obtain a binary sequence with uniform distribution. She then converts the binary sequence into a Q-ary sequence having a pre-defined distribution, and finally adds each symbol to a pixel of the cover image. The distribution of the Q-ary sequence is chosen so that the amount of information is maximized for a given signal-to-noise ratio. Bob recovers the sequence by subtracting the image data, and then converts the Q-ary string back into the original binary string.
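As a rough illustration of the final embedding and recovery steps described above, the sketch below adds a Q-ary symbol to each pixel and subtracts the shared cover image to get it back. The function names, the small {-2, ..., 2} alphabet and its probabilities are illustrative assumptions, the binary-to-Q-ary conversion is not shown, and the sums are assumed to stay inside the valid pixel range so recovery is exact.

```python
import numpy as np

def embed(cover: np.ndarray, symbols: np.ndarray) -> np.ndarray:
    """Alice's final step: add one Q-ary symbol to each cover pixel.

    `symbols` is assumed to already be the zero-mean Q-ary sequence obtained
    from the compressed binary message.
    """
    return cover.astype(np.int16) + symbols.astype(np.int16)

def recover(stego: np.ndarray, cover: np.ndarray) -> np.ndarray:
    """Bob's step: subtract the shared cover image to get the symbols back."""
    return stego.astype(np.int16) - cover.astype(np.int16)

# Example with Q = 5 symbols drawn from {-2, ..., 2} under a chosen distribution
rng = np.random.default_rng(0)
cover = rng.integers(10, 245, size=(8, 8))
symbols = rng.choice([-2, -1, 0, 1, 2], p=[0.1, 0.2, 0.4, 0.2, 0.1], size=(8, 8))
assert np.array_equal(recover(embed(cover, symbols), cover), symbols)
```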
We determine the optimal distribution analytically and show graphically how the amount of information varies with the signal-to-noise ratio as the alphabet size Q changes.
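One way to write down the underlying optimization is as a maximum-entropy problem; the notation below (alphabet values a_i, per-pixel distortion budget D set by the target signal-to-noise ratio, multiplier lambda) is ours, not the paper's, and is offered only as a sketch consistent with the abstract.

```latex
% Maximize the embedding rate (entropy) under a per-pixel distortion budget D:
\max_{p}\; H(p) = -\sum_{i=1}^{Q} p_i \log_2 p_i
\quad\text{s.t.}\quad \sum_{i=1}^{Q} p_i\,a_i^{2} \le D,\qquad \sum_{i=1}^{Q} p_i = 1 .
% The Lagrangian conditions give a discrete Gaussian-shaped solution
p_i \;=\; \frac{e^{-\lambda a_i^{2}}}{\sum_{j=1}^{Q} e^{-\lambda a_j^{2}}},\qquad \lambda \ge 0,
% with \lambda chosen so that the distortion constraint holds with equality.
```

Under this formulation, H(p) is the number of bits carried per pixel at the chosen distortion level.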
The indices obtained by tree-structured vector quantization (TSVQ) have an interesting property: they carry information about the correlation between two image blocks. If two image blocks are highly correlated, they may share an identical index or the same ancestors. The strong inter-block correlation present in natural images therefore produces neighboring blocks with the same genealogy, a characteristic that can be used to compress the indices. This paper introduces a novel method that exploits the genealogical relation between the image-block indices obtained from a TSVQ. The performance of this scheme, in terms of PSNR versus average rate, was compared with several similar image coders. The results show that it offers better compression, in both objective and subjective quality, at bit rates below 0.3 bpp.
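A minimal sketch of the genealogy idea, assuming each TSVQ index is written as its root-to-leaf path in a binary tree; the helper name and the suffix-coding remark are ours, not the paper's index coder.

```python
def shared_ancestry(path_a: str, path_b: str) -> int:
    """Length of the common prefix of two TSVQ index paths.

    Each index is written as its root-to-leaf bit string ('0' = left child,
    '1' = right child). Highly correlated neighbouring blocks tend to share a
    long prefix, i.e. the same ancestors, which is what an index coder can
    exploit.
    """
    n = 0
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        n += 1
    return n

# Example: two depth-8 indices that agree down to level 5
print(shared_ancestry("01101001", "01101110"))  # -> 5
```

Coding only the suffix of an index relative to a neighbour's index (plus the shared-prefix length) is one way such genealogy could reduce the index rate.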
This paper introduces a new scheme for still image compression based on vector quantization (VQ). The scheme first vector quantizes the image; the indices obtained from quantization are then compressed and transmitted. The indices also serve as a classifier to identify the active areas of the image. The residuals of the active areas are vector quantized in a second step and the resulting indices are transmitted. The advantage of the new scheme is that it represents the active areas of the coded image accurately without requiring any overhead. The scheme shows better subjective and objective quality in comparison with similar VQ schemes.
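A hedged sketch of this two-stage structure follows; the array shapes, codebook names, and the rule for which stage-1 indices count as "active" are assumptions for illustration, not the paper's design.

```python
import numpy as np

def two_stage_vq(blocks, codebook1, codebook2, active_indices):
    """Illustrative two-stage VQ.

    blocks:          (N, k) array of image blocks (vectors)
    codebook1/2:     first- and second-stage codebooks, (M, k) arrays
    active_indices:  set of first-stage indices that mark 'active' areas
    """
    # Stage 1: ordinary nearest-neighbour VQ of every block.
    d1 = np.linalg.norm(blocks[:, None, :] - codebook1[None, :, :], axis=2)
    idx1 = d1.argmin(axis=1)
    recon = codebook1[idx1]

    # The stage-1 index itself tells the decoder which blocks are active,
    # so no extra side information (overhead) is needed.
    active = np.isin(idx1, list(active_indices))

    # Stage 2: quantize the residuals of the active blocks only.
    resid = blocks[active] - recon[active]
    d2 = np.linalg.norm(resid[:, None, :] - codebook2[None, :, :], axis=2)
    idx2 = d2.argmin(axis=1)
    recon[active] += codebook2[idx2]
    return idx1, idx2, recon
```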
A new method of texture classification comprising two processing stages, namely a low-level evolutionary feature extraction based on Gabor wavelets and a high-level neural-network-based pattern recognition, is proposed. The design of these stages is motivated by the processes involved in the human visual system: low-level receptors responsible for early vision processing, and high-level cognition. Gabor wavelets are used as extractors of "low-level" features that feed the feature-adaptive adaptive resonance theory (ART) neural network acting as a high-level "cognitive system." The novelty of the model developed in this paper lies in the use of a self-organizing input layer to the fuzzy ART. The model is evaluated on natural textures, and the results show that it performs the texture recognition task effectively. Applications of the developed model include the study of artificial vision systems motivated by the human visual system model.
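For concreteness, here is a minimal sketch of a standard fuzzy ART learner of the kind that would sit on top of the Gabor features. It is textbook fuzzy ART (complement coding, choice function, vigilance test, fast learning), not the paper's feature-adaptive variant with a self-organizing input layer.

```python
import numpy as np

class FuzzyART:
    """Plain fuzzy ART, shown only to illustrate the 'cognitive' stage."""

    def __init__(self, alpha=0.001, beta=1.0, rho=0.75):
        self.alpha, self.beta, self.rho = alpha, beta, rho
        self.w = []                      # one weight vector per committed category

    def _complement_code(self, x):
        x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
        return np.concatenate([x, 1.0 - x])

    def learn(self, x):
        a = self._complement_code(x)
        # Rank categories by the choice function T_j = |a ^ w_j| / (alpha + |w_j|).
        scores = [np.minimum(a, w).sum() / (self.alpha + w.sum()) for w in self.w]
        for j in np.argsort(scores)[::-1]:
            match = np.minimum(a, self.w[j]).sum() / a.sum()
            if match >= self.rho:        # vigilance test passed: resonate and learn
                self.w[j] = self.beta * np.minimum(a, self.w[j]) + (1 - self.beta) * self.w[j]
                return j
        self.w.append(a.copy())          # no resonance: commit a new category
        return len(self.w) - 1

# Example: Gabor feature vectors (normalized to [0, 1]) would be fed here
art = FuzzyART(rho=0.8)
print(art.learn([0.9, 0.1, 0.2]), art.learn([0.85, 0.15, 0.25]), art.learn([0.1, 0.9, 0.8]))
```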
Receptive field profiles of simple cells in the visual cortex have been shown to resemble even-symmetric or odd-symmetric Gabor filters. Computational models employed in the analysis of textures have been motivated by two-dimensional Gabor functions arranged in a multi-channel architecture. More recently, wavelets have emerged as a powerful tool for non-stationary signal analysis, capable of encoding scale-space information efficiently. A multi-resolution implementation in the form of a dyadic decomposition of the signal of interest has been popularized by many researchers. In this paper, a Gabor wavelet configured in a 'rosette' fashion is used as a multi-channel filter-bank feature extractor for texture classification. The 'rosette' spans 360 degrees of orientation and covers frequencies upward from dc. In the proposed algorithm, the texture images are decomposed by the Gabor wavelet configuration and the feature vectors, corresponding to the mean of the outputs of the multi-channel filters, are extracted. A minimum distance classifier is used in the classification procedure. For comparison, the Gabor filter has been used to classify the same texture images from the Brodatz album, and the results indicate the superior discriminatory characteristics of the Gabor wavelet. With the test images used, it can be concluded that the Gabor wavelet model is a better approximation of the cortical cell receptive field profiles.
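The sketch below conveys the flavour of such a multi-channel feature extractor and minimum distance classifier. The kernel parameters, the chosen set of frequencies and orientations, and the use of an even-symmetric kernel are assumptions for illustration and do not reproduce the exact 'rosette' configuration of the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(freq, theta, sigma=4.0, size=15):
    """Even-symmetric Gabor kernel at spatial frequency `freq` (cycles/pixel)
    and orientation `theta`; parameter values are illustrative only."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def mean_features(image, freqs=(0.05, 0.1, 0.2), n_orient=6):
    """Feature vector: mean absolute response of each channel of the bank."""
    feats = []
    for f in freqs:
        for k in range(n_orient):
            g = gabor_kernel(f, k * np.pi / n_orient)
            feats.append(np.abs(convolve2d(image, g, mode='same')).mean())
    return np.array(feats)

def min_distance_classify(feat, class_means):
    """Assign to the class whose mean feature vector is nearest (Euclidean)."""
    dists = [np.linalg.norm(feat - m) for m in class_means]
    return int(np.argmin(dists))
```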
It is difficult to achieve good low-bit-rate image compression with traditional block coding schemes such as transform coding and vector quantization, which take no account of human visual perception or signal dependency. These classical block coding schemes are based on minimizing the MSE at a given rate. This procedure allocates more bits to areas that may not be visually important, and the resulting quantization noise manifests as blocking artifacts. Blocking artifacts are known to be psychologically more annoying than white noise when the human visual response is considered. While image adaptive vector quantization (IAVQ) attempts to address this problem for traditional vector quantization (VQ) schemes by exploiting image dependency, it ignores human visual perception when allocating bits. This paper addresses this problem through a new IAVQ scheme based on human visual perception. In this method, the input image is partitioned into visual classes and each class, depending on its visual importance, is encoded adaptively or universally. The objective and subjective quality of this scheme has been compared with JPEG and a previously proposed image adaptive VQ scheme. The new scheme subjectively outperforms both at low bit rates.
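As an illustration of partitioning blocks into visual classes, a block-variance proxy for visual importance is sketched below; the thresholds, the three-class split, and the variance criterion are assumptions and not the paper's actual classifier.

```python
import numpy as np

def classify_blocks(blocks, low_thresh, high_thresh):
    """Partition (N, k) image blocks into visual classes by their activity.

    Returns 0 for smooth blocks (cheap universal codebook), 1 for textured
    blocks, and 2 for high-activity / edge blocks (adaptive, image-dependent
    codebook). The variance test is only a stand-in for visual importance.
    """
    var = blocks.var(axis=1)
    classes = np.full(len(blocks), 1)
    classes[var < low_thresh] = 0
    classes[var >= high_thresh] = 2
    return classes
```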