This paper discusses an iterative computer method that can be used to solve a number of problems in optics. This method can be applied to two types of problems: (1) synthesis of a Fourier transform pair having desirable properties in both domains, and (2) reconstruction of an object when only partial information is available in any one domain. Illustrating the first type of problem, the method is applied to spectrum shaping for computer-generated holograms to reduce quantization noise. A problem of the second type is the reconstruction of astronomical objects from stellar speckle interferometer data. The solution of the latter problem will allow a great increase in resolution over what is ordinarily obtainable through a large telescope limited by atmospheric turbulence. Experimental results are shown. Other applications are mentioned briefly.
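The alternating-projection idea behind such iterative Fourier methods can be sketched as follows. This is a generic Gerchberg–Saxton-style error-reduction loop, not necessarily the paper's exact algorithm; the naive DFT, signal size, and starting phase are illustrative choices:

```python
import cmath

def dft(x, inverse=False):
    """Naive discrete Fourier transform (O(n^2)), adequate for a sketch."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n) for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

def iterative_transform_pair(mag_a, mag_b, iters=100):
    """Alternate between the two domains, enforcing the desired magnitude in
    each while keeping the phase produced by the transform."""
    # arbitrary deterministic starting phase
    x = [m * cmath.exp(2j * cmath.pi * 0.1 * k) for k, m in enumerate(mag_a)]
    for _ in range(iters):
        X = dft(x)
        X = [mb * cmath.exp(1j * cmath.phase(v)) for mb, v in zip(mag_b, X)]  # constraint in domain B
        x = dft(X, inverse=True)
        x = [ma * cmath.exp(1j * cmath.phase(v)) for ma, v in zip(mag_a, x)]  # constraint in domain A
    return x
```

The returned signal satisfies the domain-A magnitude constraint exactly, and the error-reduction property guarantees the domain-B magnitude error never increases across iterations.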
Image interpreters often express the desire to extract a "maximum of information" from a given picture. We have devised a new norm of restoration that, in fact, realizes this aim. The image data are forced to contain a maximum of information about the object, through variation of the object estimate. This maximum information (MI) norm restores the ideal object which, had it existed, would have maximized the throughput of information from object to image planes. Equivalently, the object estimate achieves the "channel capacity" of the image-forming medium. The following simple model for image formation is used. The imaging system is regarded as a transducer of photon position, from x in the object plane to y in the image plane. Then the conditional probability p(y|x) is just s(y-x), the PSF for the imagery, plus an unknown noise probability law n(y) independent of x (signal) for those transitions to y that are due to noise. The average information per photon transition x → y may then be calculated, using the correspondence of probability law p(x) with the object and p(y) with the image. When the image law p(y) is constrained to equal the data, the only set of unknowns remaining is the object, which may be varied to maximize the information. Restorations by this method are compared with corresponding ones by maximum entropy and show some advantage over the latter.
Digital image matching permits the analysis of aerial photographs for subtle changes that are not visible to the unaided eye. These changes can be portrayed in pictorial form as a "change image", which provides a cost-effective early indicator of impending environmental problems. The digital image matching problems encountered in low-altitude aerial photographs are studied here, and examples are shown of this method applied to environmental assessment studies.
Phase retrieval implies extraction of a wavefront θ(f) at one spatial plane based on the intensity p(x) in a conjugate plane. For example, θ(f) might be the phase distortion at the entrance pupil of an imaging system when a distant point source is imaged through a turbulent atmosphere; p(x) is the real, non-negative point spread function measured in the image plane. In this paper we describe the mathematics of the technique and show computer simulations.
Linear regression is a powerful procedure with wide areas of application. We show in this paper that a very fruitful area of application is in the area of dimensionality reduction in pattern recognition. The dimensionality reduction is accomplished by preselecting cluster centers in the range and using regression techniques to derive the transformation. Experimental results are presented that compare this procedure to the Karhunen-Loeve procedure for several data sets.
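The approach can be sketched in miniature: assign each class a preselected 1-D cluster center as a regression target, then fit the projection by least squares. The two-feature data, targets, and normal-equation solution below are illustrative assumptions, not the paper's experimental setup:

```python
def fit_linear_map(X, t):
    """Least-squares fit of a 2-vector w so that X @ w ~ t, via the normal
    equations solved with Cramer's rule (fine for this 2-feature sketch)."""
    a = sum(x[0] * x[0] for x in X)
    b = sum(x[0] * x[1] for x in X)
    d = sum(x[1] * x[1] for x in X)
    p = sum(x[0] * ti for x, ti in zip(X, t))
    q = sum(x[1] * ti for x, ti in zip(X, t))
    det = a * d - b * b
    return ((d * p - b * q) / det, (a * q - b * p) / det)

def reduce_dim(X, labels, centers):
    """Dimensionality reduction by regression: map each sample to the
    preselected 1-D cluster center of its class, then fit the projection."""
    t = [centers[lab] for lab in labels]
    return fit_linear_map(X, t)
```

The fitted w then projects any new sample into the 1-D space where the classes were forced apart by the choice of centers.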
Video image enhancement through adaptive noise filtering and edge sharpening is presented. The basic concept behind this technique is that, given some form of image segmentation, noise filtering can be performed in the nearly uniform regions and edge sharpening only near edges. The resulting algorithm is nonlinear and adaptive: it adapts globally to the input SNR and locally to the gradient magnitude. Implementation is quite simple. Performance is nonlinear and depends on the SNR of the original image. The effective video signal-to-noise ratio can be improved with minimal observable contouring, degradation in spatial resolution, and other artifacts.
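A minimal sketch of such gradient-gated processing follows. The 3x3 mean smoother, central-difference gradient, and fixed threshold are illustrative stand-ins (the threshold plays the role of the global SNR adaptation), not the paper's actual filters:

```python
def adaptive_enhance(img, thresh=4):
    """Where the local gradient is small, smooth (3x3 mean); where it is
    large, sharpen (unsharp masking against the same mean). Borders are
    left untouched."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = img[i][j + 1] - img[i][j - 1]
            gy = img[i + 1][j] - img[i - 1][j]
            mean = sum(img[i + di][j + dj] for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
            if abs(gx) + abs(gy) < thresh:
                out[i][j] = mean                              # uniform region: filter noise
            else:
                out[i][j] = img[i][j] + (img[i][j] - mean)    # edge: sharpen
    return out
```

On a step edge, interior pixels on the flat side are averaged while pixels at the step are pushed further from the local mean, overshooting the edge.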
Image aliasing (undersampling) can cause significant errors in attempts to align two small images to subpixel accuracy. This problem arises in applications such as focal-plane stabilization, target detection, angular-velocity updates to inertial navigation, and image resolution improvement, in which a small number of detectors is preferable from a cost standpoint. This paper compares the sensitivity of four registration algorithms to a sequence of increasingly aliased images, ranging in size from 8 x 8 to 32 x 32 pixels. The algorithms are: minimum sum of differences on interpolated images (MSD), normalized cross-correlation (NCC), phase correlation (PC), and normalized mean absolute difference (NMAD). The results show that the MSD and NCC methods are least sensitive to aliasing. Attempts to make the NMAD and PC methods more robust against aliasing are also discussed. The main conclusion is that aliasing should be considered as an effect when choosing the system modulation transfer function and the number of detectors.
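As a concrete illustration of one of these measures, here is a minimal integer-pixel registration by normalized cross-correlation on 1-D signals. The window handling and test signal are illustrative; the paper's setting is 2-D with subpixel interpolation:

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length patches."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def register(ref, img, max_shift):
    """Integer-pixel registration: pick the shift maximizing NCC over a
    central window that stays in bounds for every candidate shift."""
    n = len(ref)
    def score(s):
        a = ref[max_shift:n - max_shift]
        b = [img[i + s] for i in range(max_shift, n - max_shift)]
        return ncc(a, b)
    return max(range(-max_shift, max_shift + 1), key=score)
```

When the second signal is an exact shifted copy, the NCC peak recovers the shift; aliasing degrades this peak, which is precisely the sensitivity the paper measures.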
The problem of target identification and track assembly from successive image frames from a satellite-based infrared mosaic detector is considered. The wide variety of digital and electronic algorithms for bulk filtering, target identification, and track assembly is described. Optical pattern recognition techniques are also described.
A technique for converting fan-beam projections to parallel-beam projections for use in computed tomography is presented. The problem is approached by use of a rubber sheet transformation. Since the data is discretized, an interpolation step is necessary. For densely sampled data this approach appears satisfactory and a significant reduction in photon noise is observable in computer simulations.
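The resampling step can be sketched as follows. The rebinning relations gamma = asin(t/D), beta = theta - gamma assume an equiangular fan geometry with source radius D, which is a standard formulation but not necessarily the paper's exact one; bilinear interpolation stands in for its interpolation step:

```python
import math

def fan_to_parallel(fan, betas, gammas, D, theta, t):
    """Resample one parallel-beam sample (theta, t) from a fan-beam sinogram
    fan[i][j] = measurement at source angle betas[i], fan angle gammas[j],
    using bilinear interpolation on the discretized (beta, gamma) grid."""
    gamma = math.asin(t / D)
    beta = theta - gamma
    def frac_index(grid, val):
        # fractional position of val in a uniformly spaced grid, clamped
        step = grid[1] - grid[0]
        f = (val - grid[0]) / step
        i = max(0, min(len(grid) - 2, int(f)))
        return i, f - i
    i, fb = frac_index(betas, beta)
    j, fg = frac_index(gammas, gamma)
    return ((1 - fb) * (1 - fg) * fan[i][j] + (1 - fb) * fg * fan[i][j + 1]
            + fb * (1 - fg) * fan[i + 1][j] + fb * fg * fan[i + 1][j + 1])
```

Since bilinear interpolation is exact for data that vary linearly in beta and gamma, such a sinogram round-trips through the rebinning without error; real projection data incur the interpolation error the abstract alludes to, which shrinks as the sampling densifies.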
A model for photon resolved low light level image signals detected by a counting array is developed. Those signals are impaired by signal dependent Poisson noise and linear blurring. An optimal restoration filter based on maximizing the a posteriori probability density (MAP) is developed. A suboptimal overlap-save sectioning method using a Newton-Raphson iterative procedure is used for the solution of the high dimensionality nonlinear estimation equations for any type of space-variant and invariant linear blur. An accurate image model with a nonstationary mean and stationary variance is used to provide a priori information for the MAP restoration filter. Finally, a comparison between the MAP filter and a linear space-invariant minimum mean-square error (LMMSE) filter is made.
Algorithms for the automatic detection of defects, such as cracks and cavities, in radiographs of artillery shells have been previously described. An array-processor mechanization of these algorithms is now described that allows a 200 x 300 pixel array to be analyzed in less than 10 seconds. The algorithms were restructured to take advantage of the vector orientation of the array processor.
A system has been designed to filter a television image in real-time or near real-time for the purpose of enhancing high spatial frequencies and attenuating low spatial frequencies. The system makes innovative use of a two-dimensional linear filter, coded memories, and high speed digital multipliers to provide powerful linear and homomorphic image filtration. The architecture is useful in correcting and compensating linear and multiplicative shading, a common artifact in television images. The filter can also be used to extract the very low spatial frequencies generally associated with illumination. Real time filtration helps overcome the dynamic range limitations of most television systems by redistributing the image power spectrum for optimum viewing of edge information.
In the recent past considerable attention has been devoted to the application of Kalman filtering to smoothing out observation noise in image data. Optimal two-dimensional Kalman filtering algorithms require large amounts of storage and computation. Thus, the study of suboptimum estimators that require less computation is of importance. A comparison of some suboptimum image filters against the optimum non-recursive interpolator is accomplished. A new semi-causal (hybrid) filter is proposed that compensates the suboptimality of a simple two-dimensional recursive filter by means of an optimal combination of its estimate and a few non-causal observations.
Frequently an image may be blurred by a point spread function whose details are not known exactly. In such a case it is necessary to estimate the point spread function before deconvolving the blurred image. This paper presents a new technique for estimating a zero phase blurring function when its optical transfer function is smooth. The estimate is obtained by smoothing the spectral magnitude of the image and comparing it to an average magnitude that is also smoothed. The average magnitude is obtained by averaging over an ensemble of similar images. The estimation can be extended to degradations such as a defocused lens by thresholding the estimated magnitude to obtain zero crossings and adjusting the phase accordingly. In particular, this technique can be applied to a circularly symmetric Gaussian or a defocused lens with a circular aperture.
The detection of changes between two images is of interest in a wide range of applications. An important example is side-looking Synthetic Aperture Radar (SAR) imagery taken at different times. A method called Symbolic Matching with Confidence Evaluation is proposed to perform automatic change detection for SAR images. The results of preliminary experiments have been very promising and will be presented.
A variety of new and improved sensors are evolving from advanced development programs, such as ESSWACS (Electronic Solid-State Wide-Angle Camera System), LOREORS (Long Range Electro/Optic Reconnaissance System), and the second-generation FLIR IR system. The time has come to combine these and other sensor capabilities into a tactical reconnaissance operation that includes an effective real-time capability. A general approach to real-time reconnaissance employs several airborne sensors and includes both airborne and ground data-management devices and procedures. Automatic (digital) data processing will help minimize the amount of irrelevant data presented to human observers. The human observer represents the final and essential filtering agent required to reduce the information rate to a level suitable for dissemination over data links for rapid (real-time) access by tactical commanders.
A new type of silicon charge coupled device (CCD) imager which provides nine simultaneous video outputs representing a 3 x 3 pixel block that scans the imaging array has been used to emphasize edges and fine detail in various images. The device can also compensate for nonuniform scene illumination. Experimental results indicate that the device can be used to combine real-time analog image processing with subsequent digital processing to form a powerful image acquisition and processing system.
A nonlinear masking technique has been developed which characterizes digital images by local measures of the median and the median absolute deviation (MAD). Space-variant enhancement is elicited by modifying the local MAD as calculated over a moving window in the original image. The method is found to be effective in edge enhancement and noise cleaning operations.
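The idea can be sketched as follows. Deviations from the local median that are small relative to the local MAD are treated as noise and suppressed, while large deviations are treated as edge structure and amplified; the window size and the k/gain knobs are illustrative assumptions, not the paper's parameters:

```python
def local_stats(img, r, i, j):
    """Median and median absolute deviation over a (2r+1)^2 window,
    clamped at the image borders."""
    h, w = len(img), len(img[0])
    win = [img[y][x] for y in range(max(0, i - r), min(h, i + r + 1))
                     for x in range(max(0, j - r), min(w, j + r + 1))]
    win.sort()
    med = win[len(win) // 2]
    dev = sorted(abs(v - med) for v in win)
    return med, dev[len(dev) // 2]

def mad_enhance(img, r=1, k=2.0, gain=2.0):
    """Space-variant masking: suppress deviations within k*MAD of the local
    median (noise cleaning), amplify larger ones (edge enhancement)."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(h):
        row = []
        for j in range(w):
            med, mad = local_stats(img, r, i, j)
            dev = img[i][j] - med
            row.append(med if abs(dev) <= k * mad else med + gain * dev)
        out.append(row)
    return out
```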
This paper describes a multi-purpose image-processing system. The system was designed for different applications, for example medical image processing (thermographic imaging, computer-assisted diagnosis, etc.), remote sensing (multispectral analysis and classification, thermal mapping of rivers, etc.), and electron microscope image processing (T.E.M., noise filtering, pattern recognition, geometrical measurements). The system can be connected on-line to any kind of input and output image peripheral. The peripherals currently used are: TV camera, thermographic camera, flying-spot scanner, flying-spot film recorder, mechanical scanner coupled to an optical processor, refreshed B&W and color displays, graphic tablet, magnetic tape, and disc. The user does not need an in-depth knowledge of the whole system: the IMAGE 4 software package takes over the housekeeping functions and permits easy FORTRAN programming, while the LATIN interactive program package enables anyone without computer knowledge to use the system. In the conclusion, a comparison is made with the major image processing systems and software packages published in the literature. The appendix gives illustrations of the previously mentioned applications.
The Tukey median filter is widely used in image processing for applications ranging from noise reduction to dropped line replacement. However, implementation of the median filter on a general-purpose computer tends to be computationally very time-consuming. This paper describes a new median filter implementation suitable for use on the video-rate "pipeline processors" provided by several commercially-available image display systems. The execution speed of the new implementation is faster than the best software implementations, depending on the median filter window size, by up to an order of magnitude. It is also independent of the image dimensions up to a 512x512 pixel size.
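One known way to map the median onto pipeline primitives is threshold decomposition: binarize at each gray level, take the binary median (a majority vote, i.e. a point operation plus a 3x3 box sum), and add up the stages. This sketch illustrates that mapping and assumes small integer pixel values in [0, levels); the paper's actual display-system implementation may differ:

```python
def median3x3_threshold_decomposition(img, levels=8):
    """3x3 median via threshold decomposition. Each gray level contributes a
    binarization, a 3x3 box sum, and a majority threshold - all primitives a
    video-rate pipeline processor supplies. Border pixels are left at 0."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for t in range(1, levels):
        binary = [[1 if img[i][j] >= t else 0 for j in range(w)] for i in range(h)]
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                s = sum(binary[i + di][j + dj] for di in (-1, 0, 1) for dj in (-1, 0, 1))
                out[i][j] += 1 if s >= 5 else 0  # binary median = majority of 9
    return out
```

Summing the binary medians over all thresholds reproduces the grayscale median exactly, so each interior output pixel equals the sorted middle value of its 3x3 window.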
Programmable Logic Arrays (PLA's) are digital electronic devices capable of performing complex logic functions at very high rates - up to 20 million operations per second. They are available in many configurations including several types that can be field programmed using simple and inexpensive equipment. They are thus ideal devices for implementing several types of video rate image processing algorithms, particularly those algorithms that involve a high degree of adaptability or binary decision making. This paper describes the technology and operation of PLA's and details several representative image processing applications, including an adaptive differential signal compression algorithm, a gradient generator, and an edge continuity detector.
An interactive teaching program has been developed to allow an operator to define critical measurements in a manufactured part, both at the keyboard and with the joystick, from an image-processing terminal. The digitized input from a television scanner is smoothed and edges are located by routines set up and called from the teaching program. When acceptable results are returned, the parameters are incorporated in an automatic production measurement set referenced to a particular part. New parts can be defined and added to the system with brief training sessions, or repetitive measurements can be made of one type of part for production quality control.
A simple technique has been developed to simultaneously display regions of a CT image which have large differences in CT numbers, such as lung and soft tissue. The CT image is considered to be the sum of two unimodal distributions of CT numbers and the CT numbers associated with one region are mapped into the other using a simple linear transformation. The significance of this technique is that it permits the entire CT image to be visualized with optimum contrast either on the CT display monitor or on a single photograph. Examples of a body section and a head section are presented.
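The mapping step can be sketched in a few lines: CT numbers in one region's range are shifted linearly into the other's, after which a single display window covers both. The range endpoints below are illustrative placeholders, not clinical values from the paper:

```python
def dual_window(ct, lung=(-1000, -300), tissue=(-100, 200)):
    """Map lung-range CT numbers linearly into the soft-tissue range so one
    display window shows both regions with usable contrast."""
    (a, b), (c, d) = lung, tissue
    def remap(v):
        if a <= v <= b:
            return c + (v - a) * (d - c) / (b - a)  # linear transform of region 1 into region 2
        return v
    return [[remap(v) for v in row] for row in ct]
```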
Providing the clinician with an accurate visual representation of digital nuclear medicine data requires 1) interpolation to fill in the intensity field between grid points, 2) correction for grayscale nonlinearities inherent in the display and film, and 3) sufficiently fine graylevel resolution to avoid generating artificial contours. Results from preliminary experiments using a precision computer display/film system have been encouraging, indicating that improved image interpretation is frequently possible compared with both conventional analog scintigrams and commonly available computer displays. A wider range of count-rate data was visible in the digital images, giving better identification of low count-rate areas; display artifacts due to regularly spaced data samples were eliminated, as were contour artifacts caused by too few graylevels; and the relevant anatomy and/or pathology was frequently demonstrated with greater clarity. Clinical examples will be presented which illustrate the benefits to be gained by using these techniques.
Object location in computed tomography images is a preliminary step required for automated measurements which may be useful in many diagnostic procedures. Most object-location image-processing techniques are either globally based, such as histogram segmentation, or locally based, such as edge detection. The method described in this paper uses both local and global information for object location. The technique has been applied to the location of suspected tumors in CT lung and brain images. Sorting and merging steps are required for eliminating noise regions, but all suspected tumor regions have been located. Measurements such as boundary roughness or density statistics may also be made on the objects and used to identify suspicious regions for further study by the radiologists. Algorithms for chain-encoding the object boundaries and locating the vertices on the boundaries are also presented and compared. These methods are useful for shape analysis of the regions. The significance of this technique is that it demonstrates an important additional capability which could be added to the software libraries of most CT systems.
A pipeline approach to the processing of digitized picture data using multiple minicomputers and an array processor is presented. Picture size can be up to 512 by 512 pixels. The implementation of many heuristic algorithms, such as edge detection, edge enhancement, template convolution/correlation, peak detection, fast Fourier transforms, filtering, summing of two pictures, registration, histogram equalization, thresholding, and object counting, is described, and an application is made to a nerve-fiber counting project. The goals of the medical project are to provide data to be used in determining optimal surgical repair of injured or severed nerves, as well as the time scale and percentage of recovery of function following surgical repair. A simple operating system is described to invoke specific routines in the needed order in designated processors. Three approaches are discussed to the problems of image enhancement, pattern recognition, and display of a picture consisting of multiple scenes or of an object captured in the form of multiple "slices" through the object.
Over the past five years, our group at the Arizona Health Sciences Center has been developing a system for photoelectronic radiology. One of the projects in which we are involved is intravenous angiography, which Dr. Paul Capp reported on in Session 2 of Recent and Future Developments in Medical Imaging II. The purpose of this paper is to show some of the procedures of manipulation and measurements that have been developed to obtain better subtracted images.
Computed radiography (CR) is a recent development in diagnostic radiology which yields digital radiographs. Digital image enhancement of CR images in the form of smoothing the noise and enhancing the edges of anatomic boundaries has been used as a means to aid the physician in extracting clinical information from the radiograph. Details of the smoothing and edge enhancing function are discussed along with potential diagnostic applications.
The problems of intelligent image processing by computer, especially the processing of medical images such as computed tomography scans, are examined in light of current image segmentation techniques. It is concluded that part of the problem lies in the lack of knowledge about how to guide low-level processes from higher-level goals. An iterative boundary-finding scheme is presented which may aid in this guidance, and results from using specific criteria in the general framework to locate kidneys in abdominal computed tomography scans are presented and discussed. The problem of complex object localization in images is discussed, and some avenues for further research are indicated.
The perception of edges on computed tomographic (CT) scans appears easy but in fact is difficult. Such perception is important because it is necessary to make quantitative determinations. Diagnosis of such entities as spinal stenosis (narrowing of the spinal canal with encroachment on spinal cord and nerve roots) hinges upon an accurate knowledge of cross-sectional areas.
A dual-mode facsimile data compression technique, called Combined Symbol Matching (CSM), is presented. The CSM technique possesses the advantages of both symbol-recognition and extended run-length coding methods. In operation, a symbol-blocking operator isolates valid alphanumeric characters and document symbols. The first symbol encountered is placed in a library, and as each new symbol is detected, it is compared with each entry of the library. If the comparison is within a tolerance, the library identification code is transmitted along with the symbol's location coordinates. Otherwise, the new symbol is placed in the library and its binary pattern is transmitted. A scoring system determines which elements of the library are to be replaced by new prototypes once the library is filled. Non-isolated symbols are left behind as a residue and are coded by a two-dimensional run-length coding method. Simulation results are presented for CCITT standard documents. For text-predominant documents, the CSM compression ratio exceeds that obtained with the best run-length coding techniques by a factor of two or more, and is comparable for graphics-predominant documents.
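The matching loop described above can be sketched roughly as follows. The scoring system for library replacement is omitted, and `tolerance` (a count of mismatched pixels) is an illustrative stand-in for the paper's comparison criterion, not its actual measure:

```python
import numpy as np

def csm_encode(symbols, tolerance=2, max_library=64):
    """Combined-Symbol-Matching sketch: match each symbol bitmap against
    a library; emit ('id', index) on a hit, otherwise ('new', bitmap)
    and add the bitmap to the library as a new prototype."""
    library, stream = [], []
    for sym in symbols:
        match = None
        for i, proto in enumerate(library):
            if proto.shape == sym.shape and np.sum(proto != sym) <= tolerance:
                match = i
                break
        if match is not None:
            stream.append(("id", match))       # cheap library reference
        else:
            if len(library) < max_library:     # scoring/replacement omitted
                library.append(sym)
            stream.append(("new", sym))        # full binary pattern sent
    return stream
```

Repeated characters in a text-heavy page collapse to short `('id', …)` references, which is where the factor-of-two gain over run-length coding comes from.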
In the hybrid coding technique, the sampled data are divided into blocks of N×M samples. Each block is then transformed to generate a one-dimensional transform of each line in the block. The transform coefficients are processed by a bank of DPCM encoders, which decorrelate the data in the second dimension and quantize the decorrelated samples using appropriate quantizers. In this study an adaptive hybrid coding technique is proposed, based on using a single quantizer (A/D converter) to quantize the transform coefficients and a variable-rate algorithm for coding the quantized coefficients. The accuracy of the A/D converter (number of bits per sample) determines the fidelity of the system. The buffer-control algorithm controls the accuracy of the A/D converter for each block, resulting in a fixed-rate encoder system. Experimental results have shown a stable buffer condition and reconstructed images of higher fidelity than those from nonadaptive hybrid systems.
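Assuming a 4-point orthonormal Hadamard matrix as the line transform and a uniform quantizer standing in for the A/D converter (both illustrative choices, not the paper's configuration), the transform-then-DPCM pipeline can be sketched as:

```python
import numpy as np

H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]]) / 2.0   # orthonormal and symmetric: H @ H = I

def hybrid_encode(block, step=0.5):
    """1-D transform of each line, then DPCM down each column of
    coefficients with a uniform quantizer of width `step`."""
    coeffs = block @ H
    pred = np.zeros(coeffs.shape[1])
    codes = np.zeros_like(coeffs)
    for r in range(coeffs.shape[0]):
        codes[r] = np.round((coeffs[r] - pred) / step)
        pred = pred + codes[r] * step   # track the decoder's reconstruction
    return codes

def hybrid_decode(codes, step=0.5):
    pred = np.zeros(codes.shape[1])
    rows = []
    for r in range(codes.shape[0]):
        pred = pred + codes[r] * step
        rows.append(pred.copy())
    return np.array(rows) @ H           # inverse transform (H is self-inverse)
```

Coarsening `step` trades fidelity for rate; that step size is the knob a buffer-control algorithm would drive per block to hold the output rate fixed.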
Hybrid processing of NTSC color images for achieving bandwidth compression is simulated. The processing involves combination of transform and predictive coding of intraframe video. Walsh-Hadamard transform (WHT) along each row followed by differential pulse code modulation (DPCM) along each column is implemented in (16 x 16) and (32 x 32) blocks. Also 2d-WHT of (4 x 4) blocks together with prediction of adjacent blocks is investigated. Based on the histogram of the difference signal, quantizers are optimized for minimum mean square error between the original and processed images. Variable bit allocation reflecting the variance of the error signal is adopted for maintaining a specified bit rate. The processing schemes are evaluated in terms of both subjective and objective criteria.
Discrete data sources arising from practical problems are generally characterized by only partially known and varying statistics. This paper provides the development and analysis of some practical adaptive techniques for the efficient noiseless coding of a broad class of such data sources. Specifically, algorithms are developed for coding discrete memoryless sources which have a known symbol probability ordering but unknown probability values. A general applicability of these algorithms is obtained because most real world problems can be simply transformed into this form by appropriate preprocessing.
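As a toy illustration of coding with a known probability ordering but unknown probability values (using a unary code purely for simplicity; the paper's algorithms are more refined), symbols can be coded by rank so that the symbols believed most probable get the shortest codewords:

```python
def unary(n):
    """Unary codeword for rank n >= 0: n ones then a terminating zero."""
    return "1" * n + "0"

def encode_ranked(symbols, ranking):
    """Code each symbol by its position in the known probability
    ordering; no actual probability values are needed."""
    rank = {s: i for i, s in enumerate(ranking)}
    return "".join(unary(rank[s]) for s in symbols)
```

Preprocessing that maps a real source into this ranked form (the paper's point about general applicability) is what lets a single family of codes serve many sources.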
Conditional replenishment is an interframe video compression method that uses correlation in time to reduce video transmission rates. This method works by detecting and sending only the changing portions of the image and by having the receiver reuse the video data from the previous frame for the non-changing portions. The amount of compression that can be achieved through this technique depends to a large extent on the rate of change within the image, and can vary from 10:1 to less than 2:1. An additional 3:1 reduction in rate is obtained by intraframe coding of data blocks using a two-dimensional variable-rate Hadamard transform coder. A further 2:1 rate reduction is achieved by using motion prediction, which measures the relative displacement of a subpicture from one frame to the next; the subpicture can then be transmitted by sending only the value of its two-dimensional displacement. Computer simulations have demonstrated that data rates of 2 to 4 megabits per second can be achieved while still retaining good fidelity in the image.
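The detect-and-send-changes step can be sketched as a block-wise frame comparison. The block size and change threshold here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def replenish(prev, curr, block=8, thresh=10.0):
    """Conditional-replenishment sketch: compare each block of the
    current frame against the previous frame and keep only blocks
    whose mean absolute change exceeds a threshold."""
    updates = []
    h, w = curr.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = curr[y:y + block, x:x + block]
            b = prev[y:y + block, x:x + block]
            if np.mean(np.abs(a.astype(float) - b)) > thresh:
                updates.append((y, x, a))      # only changed blocks are sent

    return updates

def apply_updates(prev, updates):
    """Receiver side: start from the previous frame and overwrite
    only the transmitted blocks."""
    out = prev.copy()
    for y, x, blk in updates:
        out[y:y + blk.shape[0], x:x + blk.shape[1]] = blk
    return out
```

In a full system each transmitted block would then pass through the Hadamard transform coder rather than being sent raw.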
A study was conducted which examined the effects of simultaneous spatial and temporal bandwidth compression on observer detection and recognition performance for military targets. Five levels of temporal reduction (frame rate) and four levels of spatial reduction (bits per pixel) were co-varied using a factorially designed experiment. Of special interest was any interaction effect between the two main variables. A total of 48 observers were divided into four groups of 12. Each group was presented a single spatial reduction level at all five temporal reduction levels. Statistical analysis revealed no significant differences in subjects' detection or recognition performance due to changes in the temporal rate at which information was presented. Changes in the spatial levels (resolution) did have a significant effect on both detection and recognition performance. Although significant differences in subject performance were noted due to the interaction of the two main variables, in-depth analysis revealed the interaction effect to be anomalous. The single most critical element of bandwidth compression appears to be spatial.
A monolithic digital Hadamard transform device is used to reduce complexity in a real-time video data compression system. The Video Data Processor (VIDAP) system is designed to permit variable frame rates, sampling resolutions, and compression ratios. Versatility is incorporated through a modular design which permits tailoring to meet a wide range of video data link applications. Emphasis has been placed on a low cost, low power design suitable for airborne systems.
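The Hadamard transform needs only additions and subtractions, which is what makes a compact monolithic hardware implementation practical. A software sketch of the fast butterfly form (not the VIDAP device's actual architecture):

```python
def fwht(vec):
    """Fast Walsh-Hadamard transform (unnormalized, natural order);
    the input length must be a power of two. Each stage is a layer
    of add/subtract butterflies, with no multiplications."""
    v = list(vec)
    h = 1
    while h < len(v):
        for i in range(0, len(v), h * 2):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v
```

The transform is its own inverse up to a factor of the length, so the same add/subtract network serves both ends of the data link.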
Cosine transform coding captures the major features of an image at bit rates as low as 0.5 bits per pixel (BPP). However, because the coding is done in transform space, spatial edge information is lost and the images appear soft even at 3 BPP. Spatial techniques such as DPCM with entropy encoding preserve edges but fail ungracefully at about 2 BPP. In this paper we combine the two. The reconstruction from transform coding is compared with the original, and the spatial error signal is quantized and encoded. The results are compared with conventional DPCM and cosine transform encoding.
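The two-stage idea can be sketched with an orthonormal 8-point DCT and uniform quantizers (`coarse` for the transform stage, `fine` for the spatial residual; both step sizes are illustrative assumptions):

```python
import numpy as np

N = 8
# Orthonormal DCT-II basis: C @ C.T = identity
C = np.sqrt(2.0 / N) * np.cos(
    np.pi * np.outer(np.arange(N), 2 * np.arange(N) + 1) / (2 * N))
C[0] /= np.sqrt(2.0)

def encode(block, coarse=16.0, fine=4.0):
    """Coarse transform coding, then quantize the spatial error of the
    decoder's first-pass reconstruction (the edge-preserving stage)."""
    coeffs = np.round(C @ block @ C.T / coarse)
    recon = C.T @ (coeffs * coarse) @ C        # what transform coding alone gives
    resid = np.round((block - recon) / fine)   # quantized spatial error signal
    return coeffs, resid

def decode(coeffs, resid, coarse=16.0, fine=4.0):
    return C.T @ (coeffs * coarse) @ C + resid * fine
```

The transform stage carries the bulk of the image cheaply; the residual stage caps the spatial error at half the fine step, restoring the edges the transform stage softened.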
The special class of convolutional, or nonblock, coding systems which employ both FIR encoders and FIR decoders (definite decoders) is proposed for real-time processing of sampled data. Fully definite multi-dimensional systems and partially definite systems (definite in some dimensions but not in others) are seen to exist. Since such a coding system is necessarily a multi-channel system for signals of any dimension, some of the properties of multi-channel systems are considered, as well as their applications. It is seen that convolutional coders incorporating definite decoders can do several desirable things normally associated only with block coders, such as noncausal coding; in addition, traditional applications of single-channel convolutional coders, such as linear prediction and estimation, are also possible in multi-channel systems. Color television image coding is seen to be a natural application for multi-channel coding because of the inherent separation of luminance and chrominance into separate channels. An important property of definite coding systems, affecting the economy of two- and three-dimensional processing systems used for bandwidth compression, is that the decoder uses only the compressed data, thereby significantly reducing the memory requirements for storage of data corresponding to previous lines and frames. Examples are presented of definite systems which separate color signals into their components, noncausal coders, coders which reduce the visibility of noise bursts, and linear predictors with feedback quantizers.
Two-dimensional transforms of the chrominance components of the NTSC color video signal are studied. The effects of interlace and subcarrier modulation on the spatial frequency spectra are treated in detail. A two-dimensional FFT algorithm is proposed and shown to be more efficient than conventional ones.
A new technique to reduce the effect of quantization in PCM image coding is presented in this paper. The new technique consists of Roberts' pseudonoise technique followed by a noise reduction system. The technique by Roberts effectively transforms the signal dependent quantization noise to a signal independent additive random noise. The noise reduction system that follows reduces the additive random noise. Some examples are given to illustrate the performance of the new quantization noise reduction system.
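Roberts' pseudonoise step works by adding dither known to both ends before quantizing and subtracting the same dither afterward, which converts signal-dependent contouring into signal-independent additive noise. A minimal sketch, with an illustrative quantizer step and a shared seed standing in for the synchronized pseudonoise generators (the follow-on noise reduction system is not shown):

```python
import numpy as np

def roberts_quantize(signal, step=16.0, seed=0):
    """Roberts' pseudonoise sketch: dither, quantize, then subtract
    the identical receiver-side dither sequence."""
    rng = np.random.default_rng(seed)
    dither = rng.uniform(-step / 2, step / 2, size=np.shape(signal))
    codes = np.round((signal + dither) / step)    # what gets transmitted
    rng = np.random.default_rng(seed)             # receiver regenerates dither
    dither_rx = rng.uniform(-step / 2, step / 2, size=np.shape(signal))
    return codes * step - dither_rx
```

The residual error is uniform and uncorrelated with the signal, which is exactly what makes the subsequent additive-noise reduction stage effective.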