We present a new method of steganalysis, the detection of hidden messages, for least-significant-bit (LSB) replacement embedding. The method uses lossless image compression algorithms to model images bitplane by bitplane. The basic premise is that messages hidden by replacing the LSBs of image pixels do not possess the same statistical properties as natural image data and are therefore unlikely to be compressible by compressors designed for images. In fact, the hidden data are usually compressed files themselves, which may or may not be encrypted; in either case, the hidden messages are incompressible. In this work, we study three image compressors: one standard and two of our own design. The results show that many images can be eliminated as candidates for steganographic content, since their LSB planes compress more than a hidden message typically would.
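As an illustration, the compressibility test described above can be sketched as follows. This is a minimal, hypothetical version assuming a grayscale image held in a NumPy array, with zlib standing in for the paper's image-specific compressors and the 0.95 threshold chosen purely for illustration.

import zlib
import numpy as np

def lsb_compression_ratio(image: np.ndarray) -> float:
    # Ratio of compressed size to raw size for the packed LSB bitplane.
    lsb_plane = (image & 1).astype(np.uint8)              # extract bitplane 0
    packed = np.packbits(lsb_plane.ravel()).tobytes()     # 8 pixels per byte
    return len(zlib.compress(packed, 9)) / len(packed)

def likely_free_of_lsb_payload(image: np.ndarray, threshold: float = 0.95) -> bool:
    # Hidden payloads are typically compressed or encrypted and hence nearly
    # random, so a ratio well below 1.0 suggests an unmodified LSB plane.
    return lsb_compression_ratio(image) < threshold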
KEYWORDS: Image compression, Data hiding, Image quality, Digital watermarking, Quantization, Digital imaging, Error control coding, Interference (communication), Steganography, Wavelet transforms
In this paper, we present two tamper-detection techniques. The first is a fragile technique that can detect even the most minor changes to a marked image, using a DCT-based data-hiding method to embed a tamper-detection mark. The second is a semi-fragile technique that detects the locations of significant manipulations while disregarding the less important effects of image compression and additive channel noise. Both techniques are fully described, and the performance of each algorithm is demonstrated by manipulation of the marked images.
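To make the fragile, DCT-based embedding idea concrete, here is a hedged sketch, not the authors' exact embedding rule: each 8x8 block carries one mark bit in the parity of a quantized mid-frequency coefficient, so that almost any change to the block disturbs the bit. The coefficient position (3, 4) and the step size are illustrative assumptions.

import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block: np.ndarray, bit: int, step: float = 8.0) -> np.ndarray:
    # Quantize one mid-frequency DCT coefficient to an even/odd multiple of step.
    coeffs = dctn(block.astype(float), norm="ortho")
    q = int(np.round(coeffs[3, 4] / step))
    if q % 2 != bit:
        q += 1                      # force the parity to encode the bit
    coeffs[3, 4] = q * step
    return idctn(coeffs, norm="ortho")

def extract_bit(block: np.ndarray, step: float = 8.0) -> int:
    coeffs = dctn(block.astype(float), norm="ortho")
    return int(np.round(coeffs[3, 4] / step)) % 2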
KEYWORDS: Switches, Solar concentrators, Switching, Asynchronous transfer mode, Binary data, Demultiplexers, Connectors, Data compression, Very large scale integration, Roads
We propose a new scheme for multicasting in a binary tree that combines packet self-replication and routing in a space-division ATM switch. We revisit Law and Leon-Garcia's approach for packet self-replication and routing. Then we propose a new packet self-replication and routing scheme using only 2b address bits for a b-level binary tree. This method, when applied to a unique 3-dimensional ATM switch architecture, constitutes an optimum combination of packet self-replication and routing for multicasting in a continuously expanding self-routing space-division switch.
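One way to picture a 2b-bit multicast header for a b-level binary tree is sketched below. This is a hypothetical illustration of combining self-replication with self-routing, not the paper's (or Law and Leon-Garcia's) exact bit layout: at each level, one bit enables the left branch and one the right, and a packet is replicated whenever both are set.

def children(header_bits, level, node):
    # Yield the child nodes a packet visits (and is copied to) at this level.
    left, right = header_bits[2 * level], header_bits[2 * level + 1]
    if left:
        yield 2 * node          # forward (or copy) to the left child
    if right:
        yield 2 * node + 1      # forward (or copy) to the right child

def leaves_reached(header_bits, levels):
    nodes = [0]
    for level in range(levels):
        nodes = [c for n in nodes for c in children(header_bits, level, n)]
    return nodes

# Example with b = 3: the header [1, 1, 0, 1, 1, 0] replicates at level 0,
# routes right at level 1 and left at level 2, reaching leaves 2 and 6.
assert leaves_reached([1, 1, 0, 1, 1, 0], 3) == [2, 6]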
Many battlefield applications require the ability to transmit images over narrow-bandwidth noisy channels. Previous research has demonstrated that predictive trellis-coded quantization (PTCQ) incorporating a nonlinear prediction filter yields a robust source-coding method. Robust source coding provides both compression and noise mitigation without the need to allocate additional bandwidth for channel coding. However, the traditional PTCQ algorithm is suboptimal. This suboptimality arises from the prediction operation: at each stage in time, a trellis path is eliminated in favor of the survivor path to form the input to the prediction filter. It is reasonable to assume that the eliminated path may have produced a lower overall distortion than the survivor path. In this paper we address this suboptimality by incorporating a look-ahead stage into the PTCQ algorithm. This 'less-greedy' approach allows coding gains with a slight increase in overhead. The resulting algorithm yields an image-encoding technique that enables resilient image transmission over tactical channels.
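The 'less-greedy' idea can be illustrated with a toy predictive quantizer (this is not the PTCQ trellis itself): the predictor is simply the previous reconstruction, the residual quantizer has two levels, and the look-ahead variant scores each level by its current error plus the best error reachable on the next sample before committing. All numbers are illustrative assumptions.

LEVELS = (-1.0, 1.0)

def step_error(x, pred, level):
    # Squared error when the residual x - pred is quantized to `level`.
    recon = pred + level
    return (x - recon) ** 2, recon

def encode(samples, lookahead=False):
    pred, total = 0.0, 0.0
    for i, x in enumerate(samples):
        if lookahead and i + 1 < len(samples):
            def score(level):
                e, recon = step_error(x, pred, level)
                e_next = min(step_error(samples[i + 1], recon, nxt)[0]
                             for nxt in LEVELS)
                return e + e_next
            level = min(LEVELS, key=score)
        else:
            level = min(LEVELS, key=lambda lv: step_error(x, pred, lv)[0])
        err, pred = step_error(x, pred, level)
        total += err
    return total

# On samples = [0.1, -2.1], the greedy pass accumulates about 5.22 of distortion,
# while the one-step look-ahead accepts a worse first step and ends near 1.22.
print(encode([0.1, -2.1]), encode([0.1, -2.1], lookahead=True))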
KEYWORDS: Switches, Solar concentrators, Switching, Binary data, Asynchronous transfer mode, Demultiplexers, Mathematical modeling, Very large scale integration, Adaptive optics, Plutonium
We propose a new design for a self-routing, space-division fast packet switch for ATM B-ISDN. This is an expansion switch based on binary expansion, concentration, and combination of neighboring blocks of packets. Internal buffers are needed for local synchronization and for packet buffering in the event of path collisions and/or a full next-stage buffer. An expansion network such as the UDEL switch provides multiple paths for any input/output pair. These multiple paths help to alleviate many common problems, including head-of-line (HOL) blocking, internal path conflicts, and output collisions. Our proposal achieves a packet drop rate of 10^-10 between two stages under random uniform traffic, using a unique 3-dimensional arrangement of printed-circuit boards. Batcher-banyans or similar small switches may be used as the last stage.
We introduce an adaptive variant of the LUM smoother. The smoother operates on a sliding window and is designed to eliminate impulsive components with minimal distortion. In any particular window, the amount of filtering is adjusted based upon quasi-range measures of dispersion. Simulation results indicate that, in most cases, the adaptive LUM smoother outperforms its fixed counterpart. Second, we generalize the two-stage LUM smoother to a multilevel order-statistic filter. The generalization leads to the development of some useful filters: multiple-window order-statistic filters and asymmetric order-statistic filters. We provide a detailed analytical and quantitative analysis of the proposed filters.
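A hedged sketch of the adaptation idea follows; the threshold values and the mapping from quasi-range to smoothing order are assumptions, not the paper's tuning. The quasi-range of the sorted window drives the smoothing order k of a LUM smoother, so flat regions are smoothed strongly while high-dispersion regions, likely edges or detail, are left nearly untouched.

import numpy as np

def adaptive_lum_smooth(window, thresholds=(10, 25, 50)):
    # Smooth the centre sample of an odd-length 1-D window (e.g. 9 samples).
    s = np.sort(np.asarray(window))
    n = len(s)
    quasi_range = int(s[n - 2]) - int(s[1])    # spread of inner order statistics
    # Smaller dispersion -> larger k -> stronger smoothing (k in {1, ..., 4}).
    k = 1 + sum(quasi_range < t for t in thresholds)
    lower, upper = s[k - 1], s[n - k]
    centre = window[n // 2]
    return min(max(centre, lower), upper)      # clip the centre into [lower, upper]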
This paper considers the problem of prefiltering images to enhance edge detection. In particular, several order-statistic-based sharpeners are considered alongside a traditional linear technique, unsharp masking. The order-statistic sharpeners include the LUM sharpener, the CS filter, and the GOS filter.
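For reference, the linear baseline named above, unsharp masking, amounts to adding a scaled high-pass residual back to the image. The sketch below is a minimal version with an assumed box blur, gain, and window size rather than any settings from the paper.

import numpy as np
from scipy.ndimage import uniform_filter

def unsharp_mask(image: np.ndarray, gain: float = 1.0, size: int = 3) -> np.ndarray:
    # sharpened = original + gain * (original - blurred)
    blurred = uniform_filter(image.astype(float), size=size)
    return image.astype(float) + gain * (image.astype(float) - blurred)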
KEYWORDS: Digital filtering, Nonlinear filtering, Filtering (signal processing), Linear filtering, Image filtering, Electronic filtering, Smoothing, Statistical analysis, Nonlinear image processing, Signal to noise ratio
We introduce the LUM filter for both smoothing and sharpening. The LUM filter is a moving-window estimator: it finds the order statistics by sorting the samples in the window and compares a lower order statistic, an upper order statistic, and the middle sample. The two order statistics define a range of 'normal' values. If smoothing is desired, the LUM filter outputs the middle sample if it lies between the two order statistics; otherwise, it outputs the closer of the two order statistics. If sharpening is desired, the roles are reversed: the LUM sharpener outputs the middle sample if it lies outside the two order statistics; otherwise it outputs the closer of the two order statistics. Furthermore, both characteristics can be achieved at the same time. We compare the LUM filter against common alternatives such as linear smoothers and sharpeners, moving medians, and sharpeners such as the CS filter. In summary, we believe the LUM filter performs well across a wide range of applications.
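The rule described above translates almost directly into code. The following is a minimal sketch for the centre sample of an odd-length 1-D window, with k (the order-statistic index that sets the 'normal' range) left as a free parameter.

import numpy as np

def lum(window, k, mode="smooth"):
    s = np.sort(np.asarray(window))
    n = len(s)
    lower, upper = s[k - 1], s[n - k]    # k-th and (n - k + 1)-th order statistics
    centre = window[n // 2]
    if mode == "smooth":
        # Inside the 'normal' range: keep the sample; outside: pull it back in.
        return min(max(centre, lower), upper)
    # Sharpening reverses the roles: keep the sample if it lies outside the range,
    # otherwise push it to the closer of the two order statistics.
    if lower < centre < upper:
        return lower if centre - lower <= upper - centre else upper
    return centre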
This paper discusses two examples of the design and use of order-statistic filters in image compression. The first example is a pre-filter, designed to gently smooth the image to promote better compression. However, this filter must not blur edges significantly, and various order-statistic estimators, such as a simple median, are well suited to this task. In the examples given, we are able to achieve between 10 and 20% fewer bits while actually improving image quality. The second example is a post-filter, used at low bit rates when image degradation is present, designed to "average out" contouring effects while not averaging across edges.
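The pre-filtering effect can be checked with a short experiment along these lines. This is a hedged illustration in which a 3x3 median filter and PNG encoding via Pillow stand in for whatever filter and codec a given system uses, and "input.png" is a placeholder filename.

import io
import numpy as np
from PIL import Image
from scipy.ndimage import median_filter

def encoded_size(array: np.ndarray, fmt: str = "PNG") -> int:
    buf = io.BytesIO()
    Image.fromarray(array).save(buf, format=fmt)
    return buf.getbuffer().nbytes

image = np.asarray(Image.open("input.png").convert("L"))
prefiltered = median_filter(image, size=3)    # edge-preserving smoothing
print("original:", encoded_size(image), "bytes;",
      "pre-filtered:", encoded_size(prefiltered), "bytes")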
This paper considers an image filter that removes small features of low contrast, based on a simple model of a quantum-limited detector. That is, it removes image noise that cannot be seen or that, in other circumstances, cannot represent real information. The filtering scheme asks how large an area can be covered with one color without introducing visible departures from the original image. We use a quad-tree structure to examine progressively larger image areas until setting the area to one color would obscure visible image features. We have applied these algorithms to a number of grey-scale images, ranging from finely detailed images of high contrast to simple classroom video scenes without much fine detail. We have seen reductions by factors of four to twelve in the number of leaf nodes in the quad-tree representation of the filtered images relative to the original images. We have also experimented with filtering difference images from the classroom video sequence, which was made with a stationary camera, and have seen substantial further reductions in quad-tree complexity for the difference images, by factors of two to four.
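The quad-tree decision can be sketched as below. This is a hedged, top-down illustration (the paper examines progressively larger areas bottom-up) for a square, power-of-two-sized image, and the shot-noise-style visibility threshold with constant k is an assumed stand-in for the quantum-limited detector model.

import numpy as np

def build_leaves(block, y=0, x=0, k=2.0, min_size=1):
    # Keep the block as one flat leaf if its contrast stays below a visibility
    # threshold that grows like sqrt(mean) (quantum-limited / shot-noise model);
    # otherwise split it into four quadrants and recurse.
    mean = float(block.mean())
    spread = float(block.max()) - float(block.min())
    if spread <= k * np.sqrt(max(mean, 1.0)) or block.shape[0] <= min_size:
        return [(y, x, block.shape[0], mean)]          # (row, col, size, colour)
    h = block.shape[0] // 2
    return (build_leaves(block[:h, :h], y,     x,     k, min_size) +
            build_leaves(block[:h, h:], y,     x + h, k, min_size) +
            build_leaves(block[h:, :h], y + h, x,     k, min_size) +
            build_leaves(block[h:, h:], y + h, x + h, k, min_size))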